Browse

You are looking at 41–50 of 5,275 items for: Journal of Atmospheric and Oceanic Technology
Yuanli Fang
,
Yiping Wu
, and
Haocai Huang

Abstract

Research on deep-sea hydrothermal fluids, cold seeps, and other bottom water bodies has important implications for ecosystem studies. However, the deep-sea environment is harsh, and many existing sampling devices cannot meet the requirements for sampling purity and gas preservation. Most current samplers are arranged vertically, so a set of trigger devices must be installed at both the entrance and the exit of the sampling channel, which consumes considerable space. Taking the flowthrough deep-seawater sequence sampling mechanism as the research object, we present a horizontal flowthrough water sampler. Through numerical simulation and experimental study of the displacement of prefilled pure water by the target sample, the displacement efficiencies under different flow velocities and sampling-cavity shapes were obtained. The results confirm that the positions of the inlet and outlet and the shape of the sampling cavity have little influence on displacement efficiency at high flow rates. However, placing the inlet below the sampling cavity and the outlet above it significantly reduces the blind area of displacement, and giving the capsule-shaped sampling cavity a small inclination angle improves the displacement effect at low flow rates. This design and the accompanying results not only simplify the complicated trigger mechanism of traditional vertical water samplers but also provide a reference for operating samplers under different sample conditions.
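
Displacement efficiency in such simulations is typically the volume-weighted fraction of the cavity occupied by the target sample after flushing. A minimal Python sketch of that bookkeeping is below; the function name, grid, and 95% figure are illustrative assumptions, not values from the paper.

    import numpy as np

    def displacement_efficiency(sample_fraction, cell_volumes):
        # Volume-weighted fraction of the sampling cavity occupied by the
        # target sample at the end of a simulated flush.
        return np.sum(sample_fraction * cell_volumes) / np.sum(cell_volumes)

    # Hypothetical cavity grid in which 95% of cells are fully displaced
    frac = np.where(np.random.rand(10000) < 0.95, 1.0, 0.0)
    vols = np.full(10000, 1.0e-9)  # uniform 1 mm^3 cells, for illustration only
    print(f"displacement efficiency: {displacement_efficiency(frac, vols):.3f}")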

Restricted access
Ryan C. Scott
,
Fred G. Rose
,
Paul W. Stackhouse Jr.
,
Norman G. Loeb
,
Seiji Kato
,
David R. Doelling
,
David A. Rutan
,
Patrick C. Taylor
, and
William L. Smith Jr.

Abstract

Satellite observations from Clouds and the Earth’s Radiant Energy System (CERES) radiometers have produced over two decades of world-class data documenting time–space variations in Earth’s top-of-atmosphere (TOA) radiation budget. In addition to energy exchanges between Earth and space, climate studies require accurate information on radiant energy exchanges at the surface and within the atmosphere. The CERES Cloud Radiative Swath (CRS) data product extends the standard Single Scanner Footprint (SSF) data product by calculating a suite of radiative fluxes from the surface to TOA at the instantaneous CERES footprint scale using the NASA Langley Fu–Liou radiative transfer model. Here, we describe the CRS flux algorithm and evaluate its performance against a network of ground-based measurements and CERES TOA observations. CRS all-sky downwelling broadband fluxes show significant improvements in surface validation statistics relative to the parameterized fluxes on the SSF product, including a ∼30%–40% (∼20%) reduction in SW↓ (LW↓) root-mean-square error (RMSΔ), improved correlation coefficients, and the lowest SW↓ bias over most surface types. RMSΔ and correlation statistics improve over five different surface types under both overcast and clear-sky conditions. The global mean computed TOA outgoing LW radiation (OLR) remains within <1% (2–3 W m−2) of CERES observations, while the global mean reflected SW radiation (RSW) is overestimated by ∼3.5% (∼9 W m−2) owing to cloudy-sky computation errors. As we highlight using data from two remote field campaigns, the CRS data product provides many benefits for studies requiring advanced surface radiative fluxes.
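
The surface validation statistics quoted here (bias, RMS difference, correlation) reduce to a few lines of code once computed and observed fluxes are matched in time and space. A minimal Python sketch follows; the function name and the synthetic error model are assumptions for illustration only.

    import numpy as np

    def validation_stats(computed, observed):
        # Bias, RMS difference, and correlation between computed and observed
        # downwelling fluxes (W m^-2), matched footprint by footprint.
        d = computed - observed
        return d.mean(), np.sqrt((d ** 2).mean()), np.corrcoef(computed, observed)[0, 1]

    # Hypothetical matched SW-down samples at one surface site
    obs = np.random.uniform(100.0, 900.0, 500)
    calc = obs + np.random.normal(5.0, 40.0, 500)  # illustrative error model
    print("bias %.1f  RMSD %.1f  r %.3f" % validation_stats(calc, obs))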

Restricted access
A. Addison Alford
,
Michael I. Biggerstaff
,
Conrad L. Ziegler
,
David P. Jorgensen
, and
Gordon D. Carrie

Abstract

Mobile weather radars at high frequencies (C, X, K, and W bands) often collect data using staggered pulse repetition time (PRT) or dual pulse repetition frequency (PRF) modes to extend the effective Nyquist velocity and mitigate velocity aliasing while maintaining a useful maximum unambiguous range. These processing modes produce widely dispersed “processor” dealiasing errors in radial velocity estimates, and the errors can also occur in clusters in high-shear areas. Removing these errors prior to quantitative analysis requires tedious manual editing and often produces “holes,” or regions of missing data, in high signal-to-noise areas. Here, data from three mobile weather radars were used to show that the staggered PRT errors are related to a summation of the two Nyquist velocities associated with each of the PRTs. Using observations taken during a mature mesoscale convective system, a landfalling tropical cyclone, and a tornadic supercell storm, an algorithm to automatically identify and correct staggered PRT processor errors was developed and tested. The algorithm creates a smooth profile of Doppler velocities by applying a Savitzky–Golay filter independently in range and azimuth and then combining the results. Errors are readily identified by comparing the velocity at each range gate to its smoothed counterpart and are corrected based on specific error characteristics. The method improves upon past dual-PRF correction methods that were less successful at correcting “grouped” errors. Given the success of the technique across low, moderate, and high radial shear regimes, the new method should improve research radar analyses by retaining as much data as possible rather than manually or objectively removing erroneous velocities.
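
A minimal Python sketch of the identify-and-correct step along a single ray is given below, assuming SciPy's Savitzky–Golay filter. The threshold, the candidate offset set built from the two Nyquist velocities, and the one-dimensional treatment are illustrative assumptions; the published algorithm works in both range and azimuth and uses its own error characteristics.

    import numpy as np
    from scipy.signal import savgol_filter

    def correct_staggered_prt(vel, v_nyq_low, v_nyq_high, window=11, order=3, tol=None):
        # vel: radial velocities along one ray (m s^-1); the two Nyquist
        # velocities correspond to the long and short PRTs.
        smooth = savgol_filter(vel, window, order)
        if tol is None:
            tol = v_nyq_low  # illustrative threshold, not the paper's value
        # Candidate corrections built from the two Nyquist intervals (assumed set)
        offsets = np.array([0.0,
                            2 * v_nyq_low, -2 * v_nyq_low,
                            2 * v_nyq_high, -2 * v_nyq_high,
                            2 * (v_nyq_low + v_nyq_high), -2 * (v_nyq_low + v_nyq_high)])
        corrected = vel.copy()
        for i in np.where(np.abs(vel - smooth) > tol)[0]:
            k = np.argmin(np.abs(vel[i] + offsets - smooth[i]))
            corrected[i] = vel[i] + offsets[k]
        return corrected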

Restricted access
Duncan C. Wheeler
and
Sarah N. Giddings

Abstract

This manuscript presents several improvements to methods for despiking acoustic Doppler velocimeter (ADV) data and measuring turbulent dissipation with ADVs. These include an improved inertial-subrange fitting algorithm relevant to all experimental conditions, as well as other modifications designed to address failures of existing methods in the presence of large infragravity (IG) frequency bores and other intermittent, nonlinear processes. We provide a modified despiking algorithm, wavenumber-spectrum calculation algorithm, and inertial-subrange fitting algorithm that together produce reliable dissipation measurements in the presence of IG frequency bores, representing turbulence over a 30-min interval. We use a semi-idealized model to show that our spectrum calculation approach works substantially better than existing wave-correction equations that rely on Gaussian-based velocity distributions. We also find that our inertial-subrange fitting algorithm provides more robust results than existing approaches that rely on identifying a single best fit, and that this improvement is independent of environmental conditions. Finally, we perform a detailed error analysis to assist in future use of these algorithms and identify areas that need careful consideration. This error analysis uses error distribution widths to find, with 95% confidence, an average systematic uncertainty of ±15.2% and statistical uncertainty of ±7.8% for our final dissipation measurements. In addition, we find that small changes to ADV despiking approaches can lead to large uncertainties in turbulent dissipation and that further work is needed to ensure more reliable despiking algorithms.
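
For context, the classical single-fit approach that the improved algorithm moves beyond estimates dissipation by fitting the inertial-subrange form S(k) = α ε^(2/3) k^(−5/3) over one wavenumber band. A minimal Python sketch of that baseline is below; the Kolmogorov constant value and the band limits are assumptions, and this is not the paper's multi-fit algorithm.

    import numpy as np

    def dissipation_single_fit(k, S, k_min, k_max, alpha=0.5):
        # k: wavenumber (rad m^-1); S: one-dimensional velocity spectrum (m^3 s^-2).
        # In the inertial subrange S(k) = alpha * eps**(2/3) * k**(-5/3), so the
        # compensated spectrum S * k**(5/3) / alpha is flat and equals eps**(2/3).
        band = (k >= k_min) & (k <= k_max)
        compensated = S[band] * k[band] ** (5.0 / 3.0) / alpha
        return np.mean(compensated) ** 1.5  # dissipation eps (m^2 s^-3)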

Restricted access
Douglas Cahl
,
George Voulgaris
, and
Lynn Leonard

Abstract

We assess the performance of three different algorithms for estimating surface ocean currents from two linear-array HF radar systems. The delay-and-sum beamforming algorithm, commonly used with beamforming systems, is compared with two direction-finding algorithms: multiple signal classification (MUSIC) and direction finding using beamforming (Beamscan). A 7-month dataset from two HF radar sites (CSW and GTN) on Long Bay, South Carolina (USA), is used to compare the different methods. The comparison is carried out at three locations (the midpoint along the baseline and two locations with in situ Eulerian current data available) representing different steering angles. Beamforming produces surface current data that show high correlation near the radar boresight (R² ≥ 0.79). At partially sheltered locations far from the radar boresight directions (59° and 48° for radar sites CSW and GTN, respectively), there is no correlation for CSW (R² = 0) and the correlation is reduced significantly for GTN (R² = 0.29). Beamscan performs similarly near the radar boresight (R² = 0.80 and 0.85 for CSW and GTN, respectively) but better than beamforming far from the radar boresight (R² = 0.52 and 0.32 for CSW and GTN, respectively). MUSIC’s performance, after significant tuning, is similar near the boresight (R² = 0.78 and 0.84 for CSW and GTN) and, far from the boresight, worse than Beamscan but better than beamforming (R² = 0.42 and 0.27 for CSW and GTN, respectively). Comparisons at the midpoint (baseline comparison) show the largest performance difference between methods: beamforming (R² = 0.01) is the worst performer, followed by MUSIC (R² = 0.37), while Beamscan (R² = 0.76) performs best.
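
As a reference for how the conventional beamformer in this comparison operates, here is a minimal Python sketch of a delay-and-sum (Beamscan-style) spatial spectrum for a uniform linear array. The array geometry, element spacing, and variable names are illustrative assumptions; the actual antenna patterns and signal-processing chains of the CSW and GTN systems are not represented.

    import numpy as np

    def beamscan_power(snapshots, d_lambda, angles_deg):
        # snapshots: complex antenna voltages, shape (n_antennas, n_samples)
        # d_lambda:  element spacing in wavelengths (uniform linear array assumed)
        n_ant = snapshots.shape[0]
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # covariance matrix
        power = []
        for theta in np.deg2rad(angles_deg):
            a = np.exp(-2j * np.pi * d_lambda * np.arange(n_ant) * np.sin(theta))
            w = a / n_ant  # delay-and-sum (conventional) weights
            power.append(np.real(w.conj() @ R @ w))
        return np.array(power)  # beam power versus steering angle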

Restricted access
Yukio Kurihara

Abstract

Stripe noise is a common issue in sea surface temperatures (SSTs) retrieved from thermal infrared data obtained by satellite-based multidetector radiometers. We developed a Bi-Spectral Filter (BSF) to reduce this stripe noise. The BSF combines a Gaussian filter with an optimal estimation method for the differences between the data obtained in the split-window channels. A kernel function based on the physical processes of radiative transfer makes it possible to reduce stripe and random noise in retrieved SSTs without degrading the spatial resolution or generating bias. The Second-Generation Global Imager (SGLI) is an optical sensor onboard the Global Change Observation Mission–Climate (GCOM-C) satellite. We applied the BSF to SGLI data and validated the retrieved SSTs. The validation results demonstrate the effectiveness of the BSF, which reduced stripe noise in the retrieved SGLI SSTs without blurring SST fronts. It also improved the accuracy of the SSTs by about 0.04 K (about 13%) in terms of the robust standard deviation.
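
One way to picture the destriping idea is to smooth the split-window brightness temperature difference, which radiative transfer constrains to vary smoothly, and rebuild the noisier channel from it. The Python sketch below shows that simplified idea only; it is a plain Gaussian smoother, not the BSF's optimal-estimation kernel, and the channel names, smoothing axis, and sigma are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def destripe_split_window(bt11, bt12, sigma=2.0):
        # bt11, bt12: brightness temperature images for the ~11 and ~12 um
        # channels, shape (along_track, cross_track).
        diff = bt11 - bt12
        diff_smooth = gaussian_filter1d(diff, sigma=sigma, axis=0)  # smooth along track
        return bt11 - diff_smooth  # destriped estimate of the 12 um channel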

Restricted access
Werner E. Cook
and
J. Scott Greene

Abstract

Daily rainfall accumulation estimates have been derived from 1-min volume data collected via self-syphon rain gauges deployed in the Tropical Atmosphere–Ocean (TAO) array of oceanographic buoys. The underlying high-resolution volume data were obtained directly from the Global Tropical Moored Buoy Array (GTMBA) Project Office of NOAA/Pacific Marine Environmental Laboratory. The derived accumulations have been incorporated into the Pacific Rainfall (PACRAIN) database as estimated daily values to augment existing sea level oceanic rainfall records gathered using traditional rain gauges. They have also been included in the PACRAIN historical, monthly gridded rainfall product. The methodology presented, which employs differencing of least squares–regressed sensor levels about 0000 UTC and rain gauge syphon events, is shown to offer improved error characteristics over the methodology used to compute previously published GTMBA rain rates. In particular, the PACRAIN method yields larger coefficients of determination and smaller standard errors than the duplicated GTMBA method when applied to synthetic rainfall data with noise magnitude and decorrelation times encompassing those observed in the real 1-min data. These results are shown to be consistent with mathematical expectations. Sources of instrument and catchment errors, as well as evaporation, are discussed in the context of their potential effects on accumulation estimates for periods of a day or longer.

Significance Statement

In this paper, we describe the derivation of daily rainfall amounts from raw rain gauge data obtained from buoy-mounted rain gauges. These new accumulation estimates expand the store of rainfall estimates from locations approximating the open-ocean conditions of the tropical Pacific Ocean. The derivation technique we describe exhibits better performance than the method used to generate previously published estimates using the same dataset.
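
The level-differencing step described above can be illustrated with a short Python sketch: regress the 1-min fill level in a window around 0000 UTC to get the level at midnight, then difference successive midnights and add the volume released by any syphon events. The window length, variable names, and units are assumptions, not the PACRAIN implementation.

    import numpy as np

    def level_at_midnight(minutes, level, half_window=30):
        # minutes: time relative to 0000 UTC (min); level: gauge fill level (mm).
        # Least-squares line through samples within +/- half_window minutes;
        # the intercept is the fitted level at minute zero (midnight).
        sel = np.abs(minutes) <= half_window
        slope, intercept = np.polyfit(minutes[sel], level[sel], 1)
        return intercept

    def daily_accumulation(level_start, level_end, syphon_drops_mm):
        # Daily rainfall (mm) = change in fill level plus the level released
        # by any syphon events between the two midnights.
        return (level_end - level_start) + np.sum(syphon_drops_mm)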

Restricted access
Raphael Dussin

Abstract

A novel method is described for adjusting the precipitation produced by atmospheric reanalyses, using observational constraints, so that it can be used to force ocean models. The method preserves the high-resolution, high-frequency character of the reanalysis output while eliminating its bias and spurious trends, and it is shown to be robust to degradation of the observational dataset in both space and time. The method is applied to the ERA-Interim precipitation dataset using the Global Precipitation Climatology Project (GPCP) v2.3 as the observational reference in order to create a debiased dataset that can be used to force ocean models. The debiased dataset is then compared to ERA-Interim and GPCP in a suite of forced ice–ocean numerical experiments using the GFDL OM4 model. Ocean states obtained with the new precipitation dataset are consistent with results from GPCP-forced experiments with respect to global metrics but produce additional sea surface salinity variability at the time scales unresolved by the observation-based dataset. Discrepancies between modeled and observed freshwater fluxes are discussed, as well as strategies to mitigate them and their impacts.
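
A common way to implement this kind of adjustment is to rescale the high-frequency reanalysis precipitation so that its monthly mean matches the observational monthly mean for each grid cell, preserving sub-monthly variability. The Python sketch below shows that generic rescaling only; it is an assumption for illustration and not necessarily the exact procedure of the paper.

    import numpy as np

    def debias_precip_month(precip_hf, obs_monthly_mean, model_monthly_mean):
        # precip_hf: high-frequency (e.g., 3-hourly) precipitation within one
        # month for one grid cell; the means are the observed and reanalysis
        # monthly-mean rates for that same month and cell.
        scale = obs_monthly_mean / max(model_monthly_mean, 1e-12)  # guard against zero
        return precip_hf * scale  # monthly mean now matches the observations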

Restricted access
Caroline Comby
,
Stéphanie Barrillon
,
Jean-Luc Fuda
,
Andrea M. Doglioli
,
Roxane Tzortzis
,
Gérald Grégori
,
Melilotus Thyssen
, and
Anne A. Petrenko

Abstract

Knowledge of vertical velocities is essential for studying fine-scale dynamics in the surface layers of the ocean and for understanding their impact on biological production mechanisms. However, these vertical velocities have long been neglected, simply parameterized, or considered unmeasurable, mainly because their order of magnitude (less than mm s−1 up to cm s−1) is generally much smaller than that of horizontal velocities (cm s−1 to dm s−1), which makes their in situ measurement challenging. In this paper, we present an upgraded method for direct in situ measurement of vertical velocities using data from different acoustic Doppler current profilers (ADCPs) associated with CTD probes, and we perform a comparative analysis of the results obtained with this method. The analyzed data were collected during the FUMSECK cruise from three ADCPs: two Workhorse instruments (conventional ADCPs), one lowered on a carousel and the other deployed in free-fall mode, and one Sentinel V (a new-generation ADCP with four conventional beams and a fifth vertical beam), also lowered on a carousel. Our analyses provide profiles of vertical velocities on the order of mm s−1, as expected, with standard deviations of a few mm s−1. The fifth beam of the Sentinel V exhibits better accuracy than the conventional ADCPs, and the free-fall technique provides a more accurate measurement than the carousel technique. Finally, this study opens up the possibility of performing simple and direct in situ measurements of vertical velocities by coupling the free-fall technique with a five-beam ADCP.
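
Because the instrument itself moves vertically when lowered or in free fall, a central step in such measurements is removing the platform's vertical motion, which can be estimated from the CTD pressure record. The Python sketch below illustrates that step under simple assumptions (1 dbar ≈ 1 m, upward-positive sign convention, water velocity measured relative to the instrument); it is not the FUMSECK processing chain.

    import numpy as np

    def ocean_vertical_velocity(w_rel, pressure_dbar, time_s):
        # w_rel: water velocity relative to the instrument along the vertical
        #        beam (m s^-1, positive up)
        # pressure_dbar, time_s: CTD pressure record and sample times (s)
        depth_m = pressure_dbar * 1.0                # rough 1 dbar ~ 1 m conversion
        w_platform = -np.gradient(depth_m, time_s)   # instrument velocity, + up
        return w_rel + w_platform                    # earth-referenced water velocity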

Restricted access
Michael Dixon
and
Ulrike Romatschke

Abstract

The Echo Classification from COnvectivity (ECCO) algorithm identifies convective and stratiform types of radar echo in three dimensions. It is based on the calculation of reflectivity texture—a combination of the intensity and the heterogeneity of the radar echoes on each horizontal plane in a 3D Cartesian volume. Reflectivity texture is translated into convectivity, which is designed to be a quantitative measure of the convective nature of each 3D radar grid point. It ranges from 0 (100% stratiform) to 1 (100% convective). By thresholding convectivity, a more traditional qualitative categorization is obtained, which classifies radar echoes as convective, mixed, or stratiform. In contrast to previous algorithms, these echo-type classifications are provided on the full 3D grid of the reflectivity field. The vertically resolved classifications, in combination with temperature data, allow for subclassification into shallow, mid-, deep, and elevated convective features and into low, mid-, and high stratiform regions—again in three dimensions. The algorithm was validated using datasets collected over the U.S. Great Plains during the PECAN field campaign. An analysis of lightning counts shows ∼90% of lightning occurring in regions classified as convective by ECCO. A statistical comparison of ECCO echo types with the well-established GPM radar precipitation-type categories shows 84% (88%) of GPM stratiform (convective) echo being classified as stratiform (convective) or mixed by ECCO. ECCO was applied to radar grids for the continental United States, the United Arab Emirates, Australia, and Europe, illustrating its robustness and adaptability to different radar grid characteristics and climatic regions.
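
The texture-to-convectivity-to-class chain can be sketched in a few lines of Python for a single horizontal plane. The neighbourhood size, the scaling of the intensity and texture terms, the 50/50 weighting, and the class thresholds below are all assumptions for illustration; they are not the ECCO coefficients.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def convectivity_plane(refl_dbz, size=5, refl_scale=40.0, texture_scale=15.0):
        # Local standard deviation of reflectivity = heterogeneity (texture);
        # combine with scaled intensity to get a 0-1 convectivity field.
        mean = uniform_filter(refl_dbz, size=size)
        mean_sq = uniform_filter(refl_dbz ** 2, size=size)
        texture = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
        intensity_term = np.clip(refl_dbz / refl_scale, 0.0, 1.0)
        texture_term = np.clip(texture / texture_scale, 0.0, 1.0)
        return 0.5 * intensity_term + 0.5 * texture_term

    def classify_echo(conv, strat_max=0.4, conv_min=0.6):
        # Threshold convectivity into stratiform / mixed / convective classes.
        cls = np.full(conv.shape, "mixed", dtype=object)
        cls[conv < strat_max] = "stratiform"
        cls[conv >= conv_min] = "convective"
        return cls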

Open access