Journal of Atmospheric and Oceanic Technology
Bernadette M. Sloyan, Christopher C. Chapman, Rebecca Cowley, and Anastase A. Charantonis

Abstract

In situ observations are vital to improving our understanding of the variability and dynamics of the ocean. A critical component of the ocean circulation is the strong, narrow, and highly variable western boundary currents. Ocean moorings that extend from the seafloor to the surface remain the most effective and efficient method to fully observe these currents. For various reasons, mooring instruments may not provide continuous records. Here we assess the application of the Iterative Completion Self-Organizing Maps (ITCOMPSOM) machine learning technique to fill observational data gaps in a 7.5 yr time series of the East Australian Current. The method was validated by withholding parts of fully known profiles and reconstructing them. For 20% random withholding of known velocity data, validation statistics of the u- and v-velocity components are R2 coefficients of 0.70 and 0.88 and root-mean-square errors of 0.038 and 0.064 m s−1, respectively. Withholding 100 days of known velocity profiles over a depth range between 60 and 700 m yields mean profile residual differences between true and predicted u and v velocity of 0.009 and 0.02 m s−1, respectively. The ITCOMPSOM also reproduces the known velocity variability. For 20% withholding of salinity and temperature data, root-mean-square errors of 0.04 and 0.38°C, respectively, are obtained. The ITCOMPSOM validation statistics are significantly better than those obtained when standard data-filling methods are used. We suggest that machine learning techniques can be an appropriate method to fill missing data and enable production of observation-derived data products.
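The iterative SOM-style imputation idea can be illustrated with a minimal sketch (Python; all names are hypothetical, and the paper's ITCOMPSOM trains a proper self-organizing map, whereas this toy version simply reuses sampled profiles as codebook units):

```python
import numpy as np

def som_fill(data, n_units=16, n_iter=50, seed=0):
    """Toy iterative gap filling in the spirit of SOM-based imputation.

    data: 2D array (samples x features) with NaNs marking gaps.
    Gap entries are repeatedly re-estimated from the best-matching
    codebook unit, with distances computed over observed values only.
    NOTE: illustrative sketch, not the paper's ITCOMPSOM algorithm.
    """
    rng = np.random.default_rng(seed)
    mask = np.isnan(data)
    obs = ~mask
    # initialize gaps with the column (feature) means
    filled = np.where(mask, np.nanmean(data, axis=0), data)
    for _ in range(n_iter):
        # "codebook": here simply a random sample of current rows
        units = filled[rng.choice(len(filled), size=n_units)]
        # squared distance per sample/unit, over observed entries only
        d = (((filled[:, None, :] - units[None, :, :]) ** 2)
             * obs[:, None, :]).sum(axis=2)
        bmu = units[d.argmin(axis=1)]  # best-matching unit per sample
        # update only the gap entries; observed data are never altered
        filled[mask] = bmu[mask]
    return filled
```

Each pass re-estimates only the missing entries, so the validation-by-withholding strategy described above (hide known values, reconstruct, compare) applies directly.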

Significance Statement

Moored observational time series of ocean boundary currents monitor the full-depth variability and change of these dynamic currents and are used to understand their influence on large-scale ocean climate, regional shelf–coastal processes, extreme weather, and seasonal climate. In this study we apply a machine learning technique, Iterative Completion Self-Organizing Maps (ITCOMPSOM), to fill data gaps in a boundary current moored observational data record. The ITCOMPSOM provides an improved method to fill data gaps in the mooring record and, if applied to other observational data records, may improve the reconstruction of missing data. The derived gridded data product should improve the accessibility and potentially increase the use of these data.

Open access
Andre Amador, Sophia T. Merrifield, and Eric J. Terrill

Abstract

The present work details the measurement capabilities of Wave Glider autonomous surface vehicles (ASVs) for research-grade meteorology, wave, and current data. Methodologies for motion compensation are described and tested, including a correction technique to account for Doppler shifting of the wave signal. Wave Glider measurements are evaluated against observations obtained from World Meteorological Organization (WMO)-compliant moored buoy assets located off the coast of Southern California. The validation spans a range of field conditions and includes multiple deployments to assess the quality of vehicle-based observations. Results indicate that Wave Gliders can accurately measure wave spectral information, bulk wave parameters, water velocities, bulk winds, and other atmospheric variables with the application of appropriate motion compensation techniques. Measurement errors were found to be comparable to those from reference moored buoys and within WMO operational requirements. The findings of this study represent a step toward enabling the use of ASV-based data for the calibration and validation of remote observations and assimilation into forecast models.

Restricted access
Jared W. Marquis, Erica K. Dolinar, Anne Garnier, James R. Campbell, Benjamin C. Ruston, Ping Yang, and Jianglong Zhang

Abstract

The assimilation of hyperspectral infrared sounder (HIS) observations aboard Earth-observing satellites has become vital to numerical weather prediction, yet this assimilation is predicated on the assumption of clear-sky observations. Using collocated assimilated observations from the Atmospheric Infrared Sounder (AIRS) and the Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP), it is found that nearly 7.7% of HIS observations assimilated by the Naval Research Laboratory Variational Data Assimilation System–Accelerated Representer (NAVDAS-AR) are contaminated by cirrus clouds. These contaminating clouds primarily exhibit visible cloud optical depths at 532 nm (COD532nm) below 0.10 and cloud-top temperatures between 185 and 240 K, as expected for cirrus clouds. These contamination statistics are consistent with simulations from the Radiative Transfer for TOVS (RTTOV) model showing that a cirrus cloud with a COD532nm of 0.10 imparts brightness temperature differences below the typical innovation thresholds used by NAVDAS-AR. Using a one-dimensional variational (1DVar) assimilation system coupled with RTTOV for forward and gradient radiative transfer, the analysis temperature and moisture impact of assimilating cirrus-contaminated HIS observations is estimated. Large differences of 2.5 K in temperature and 11 K in dewpoint are possible for a cloud with a COD532nm of 0.10 and a cloud-top temperature of 210 K. When normalized by the contamination statistics, global differences of nearly 0.11 K in temperature and 0.34 K in dewpoint are possible, with temperature and dewpoint tropospheric root-mean-square differences (RMSDs) as large as 0.06 and 0.11 K, respectively. While in isolation these global estimates are not particularly concerning, differences are likely much larger in regions with high cirrus frequency.

Open access
Duncan C. Wheeler and Sarah N. Giddings

Abstract

This manuscript presents several improvements to methods for despiking and measuring turbulent dissipation values with acoustic Doppler velocimeters (ADVs). This includes an improved inertial subrange fitting algorithm relevant for all experimental conditions as well as other modifications designed to address failures of existing methods in the presence of large infragravity (IG) frequency bores and other intermittent, nonlinear processes. We provide a modified despiking algorithm, wavenumber spectrum calculation algorithm, and inertial subrange fitting algorithm that together produce reliable dissipation measurements in the presence of IG frequency bores, representing turbulence over a 30 min interval. We use a semi-idealized model to show that our spectrum calculation approach works substantially better than existing wave correction equations that rely on Gaussian-based velocity distributions. We also find that our inertial subrange fitting algorithm provides more robust results than existing approaches that rely on identifying a single best fit and that this improvement is independent of environmental conditions. Finally, we perform a detailed error analysis to assist in future use of these algorithms and identify areas that need careful consideration. This error analysis uses error distribution widths to find, with 95% confidence, an average systematic uncertainty of ±15.2% and statistical uncertainty of ±7.8% for our final dissipation measurements. In addition, we find that small changes to ADV despiking approaches can lead to large uncertainties in turbulent dissipation and that further work is needed to ensure more reliable despiking algorithms.
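The inertial-subrange fit at the core of such dissipation measurements can be sketched as follows (a minimal illustration assuming the Kolmogorov form E(k) = C ε^(2/3) k^(−5/3) with an assumed constant C ≈ 1.5; the paper's algorithm is considerably more robust, avoiding reliance on a single best fit):

```python
import numpy as np

def dissipation_from_spectrum(k, E, C=1.5):
    """Estimate turbulent dissipation from an inertial-subrange fit.

    Assumes E(k) = C * eps**(2/3) * k**(-5/3) holds over the supplied
    wavenumbers k (rad/m) with spectral levels E. C ~ 1.5 is an assumed
    constant; illustrative sketch only.
    """
    # compensate the spectrum: E * k^(5/3) is flat in the inertial subrange
    level = np.mean(E * k ** (5.0 / 3.0))
    # invert level = C * eps^(2/3) for the dissipation rate eps
    return (level / C) ** 1.5
```

Averaging the compensated spectrum before inverting is one simple way to damp spectral noise; the choice of wavenumber band to fit is the error-prone step the paper's multi-fit approach addresses.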

Significance Statement

Turbulent mixing is a process where the random movement of water can lead to water with different properties irreversibly mixing. This process is important to understand in estuaries because the extent of mixing of freshwater and saltwater inside an estuary alters its overall circulation and thus affects ecosystem health and the distribution of pollution or larvae in an estuary, among other things. Existing approaches to measuring turbulent dissipation, an important parameter for evaluating turbulent mixing, make assumptions that fail in the presence of certain processes, such as long-period, breaking waves in shallow estuaries. We evaluate and improve data analysis techniques to account for such processes and accurately measure turbulent dissipation in shallow estuaries. Some of our improvements are also relevant to a broad array of coastal and oceanic conditions.

Restricted access
Steven M. Martinaitis, Scott Lincoln, David Schlotzhauer, Stephen B. Cocks, and Jian Zhang

Abstract

There are multiple reasons why a precipitation gauge would report erroneous observations. Systematic errors relating to the measuring apparatus or resulting from observational limitations due to environmental factors (e.g., wind-induced undercatch or wetting losses) can be quantified and potentially corrected within a gauge dataset. Other challenges can arise from instrumentation malfunctions, such as clogging, poor siting, and software issues. Instrumentation malfunctions are challenging to quantify, as most gauge quality control (QC) schemes focus on the current observation and not on whether the gauge has an inherent issue that would likely require maintenance. This study focuses on the development of a temporal QC scheme to identify the likelihood of an instrumentation malfunction through the examination of hourly gauge observations and associated QC designations. Each analyzed gauge is assigned one of three temporal QC categories: GOOD, SUSP, and BAD. The temporal QC scheme also accounts for and provides an additional designation when a significant percentage of gauge observations and associated hourly QC were influenced by meteorological factors (e.g., the inability to properly measure winter precipitation). Findings showed a consistent percentage of gauges classified as BAD through the running 7-day (2.9%) and 30-day (4.4%) analyses. Verification of select gauges demonstrated how the temporal QC algorithm captured different forms of instrument-based systematic errors that influenced gauge observations. Results from this study can benefit the identification of degraded performance at gauge sites prior to scheduled routine maintenance.
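A running-window classification of this kind can be sketched as follows (the thresholds and function name are hypothetical, not the paper's operational values):

```python
def classify_gauge(hourly_flags, susp_frac=0.1, bad_frac=0.3):
    """Toy temporal QC classifier for a precipitation gauge.

    hourly_flags: sequence of per-hour QC results over a running window
    (e.g., 7 or 30 days), True = observation flagged as erroneous.
    Returns 'GOOD', 'SUSP', or 'BAD' based on the flagged fraction.
    Thresholds here are illustrative assumptions.
    """
    if not hourly_flags:
        return "GOOD"  # no history yet, nothing to condemn
    frac = sum(hourly_flags) / len(hourly_flags)
    if frac >= bad_frac:
        return "BAD"   # likely instrumentation malfunction
    if frac >= susp_frac:
        return "SUSP"  # degraded, worth inspection
    return "GOOD"
```

In the study's scheme an additional designation handles windows dominated by meteorological factors (e.g., winter precipitation), which this sketch omits.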

Significance Statement

This study proposes a scheme that quality controls rain gauges based on their performance over a running history of hourly observational data and quality control flags, identifying gauges that likely have an instrumentation malfunction. Findings from this study show the potential of identifying gauges that are impacted by issues such as clogging, software errors, and poor gauge siting. This study also highlights the challenges of distinguishing between erroneous gauge observations caused by an instrumentation malfunction and erroneous observations that result from an environmental factor that influences the gauge observation or its quality control classification, such as winter precipitation or virga.

Restricted access
Konstantin G. Rubinshtein and Inna M. Gubenko

Abstract

The article compares four lightning detection networks, provides a brief overview of lightning observation data assimilation in numerical weather forecasting, and describes and illustrates the procedure used to assimilate lightning location and time in numerical weather forecasts. Absolute errors in 2-m air temperature, 2-m humidity, near-surface air pressure, 10-m wind speed, and precipitation are evaluated for 10 forecasts made in 2020 for days on which intensive thunderstorms were observed in the Krasnodar region of Russia. Average errors over the forecast area at lead times of 24, 48, and 72 h decreased for all parameters when observed lightning data were assimilated. The predicted precipitation field configuration and intensity became closer to the reference both in areas where thunderstorms were observed and in areas where none occurred.

Restricted access
Katrina S. Virts and William J. Koshak

Abstract

Performance assessments of the Geostationary Lightning Mapper (GLM) are conducted via comparisons with independent observations from both satellite-based sensors and ground-based lightning detection (reference) networks. A key limitation of this evaluation is that the performance of the reference networks is both imperfect and imperfectly known, such that the true performance of GLM can only be estimated. Key GLM performance metrics such as detection efficiency (DE) and false alarm rate (FAR) retrieved through comparison with reference networks are affected by those networks’ own DE, FAR, and spatiotemporal accuracy, as well as the flash matching criteria applied in the analysis. This study presents a Monte Carlo simulation–based inversion technique that is used to quantify how accurately the reference networks can assess GLM performance, as well as suggest the optimal matching criteria for estimating GLM performance. This is accomplished by running simulations that clarify the specific effect of reference network quality (i.e., DE, FAR, spatiotemporal accuracy, and the geographical patterns of these attributes) on the retrieved GLM performance metrics. Baseline reference network statistics are derived from the Earth Networks Global Lightning Network (ENGLN) and the Global Lightning Dataset (GLD360). Geographic simulations indicate that the retrieved GLM DE is underestimated, with absolute errors ranging from 11% to 32%, while the retrieved GLM FAR is overestimated, with absolute errors of approximately 16% to 44%. GLM performance is most severely underestimated in the South Pacific. These results help quantify and bound the actual performance of GLM and the attendant uncertainties when comparing GLM to imperfect reference networks.
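The core effect, an imperfect reference network biasing the retrieved detection efficiency, can be illustrated with a drastically simplified Monte Carlo sketch (no spatiotemporal matching errors or geographic structure are modeled; all names and numbers are illustrative, not the study's):

```python
import random

def retrieved_de(n_flashes, glm_de, ref_de, ref_far, seed=0):
    """Monte Carlo sketch of DE retrieval against an imperfect reference.

    Retrieved GLM DE = fraction of reference-network flashes that GLM
    also detected. Reference false alarms can never be matched by GLM,
    so they dilute (underestimate) the retrieved DE. Illustrative only.
    """
    rng = random.Random(seed)
    matched = ref_real = 0
    for _ in range(n_flashes):
        if rng.random() < ref_de:        # reference sees the real flash
            ref_real += 1
            if rng.random() < glm_de:    # GLM independently sees it too
                matched += 1
    # reference false alarms, from FAR = false / (false + real detections)
    n_false = int(ref_real * ref_far / (1.0 - ref_far))
    return matched / (ref_real + n_false)
```

With a perfect reference (zero FAR) the retrieved DE converges to the true GLM DE; a nonzero reference FAR pulls it low, mirroring the underestimation of GLM DE reported above.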

Open access
Ibrahim Ibrahim, Gregory A. Kopp, and David M. L. Sills

Abstract

The current study develops a variant of the velocity-azimuth display (VAD) method to retrieve thunderstorm peak event velocities using low-elevation WSR-88D radar scans. The main challenge pertains to the localized nature of thunderstorm winds, which complicates single-Doppler retrievals because it dictates the use of a limited spatial scale. Since VAD methods assume constant velocity in the fitted section, it is important that retrieved sections do not contain background flow. Accordingly, the current study proposes an image processing method to partition scans into regions representing events and background flows that can be retrieved independently. The study compares the retrieved peak velocities to retrievals using another VAD method. The proposed technique is found to estimate peak event velocities that are closer to measured ASOS readings, making it more suitable for historical analysis. The study also compares the results of retrievals from over 2600 thunderstorm events from 19 radar–ASOS station combinations less than 10 km from the radar. Comparisons of probability distributions of peak event velocities for ASOS readings and radar retrievals showed good agreement for stations within 4 km of the radar, while more distant stations exhibited a high bias of retrieved velocities relative to ASOS velocities. The mean absolute error for velocity magnitude increases with height, ranging between 1.5 and 4.5 m s−1. A proposed correction based on the exponential trend of mean errors was shown to improve the probability distribution comparisons, especially for higher velocity magnitudes.
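The VAD fit underlying such retrievals can be sketched as a small least-squares problem (a minimal illustration assuming a uniform horizontal wind and a low elevation angle; the study's event/background partitioning is not reproduced):

```python
import numpy as np

def vad_fit(azimuth_deg, vr):
    """Minimal VAD sketch: fit horizontal wind (u, v) to radial velocities.

    Assumes Vr = u*sin(az) + v*cos(az), i.e., a uniform wind over the
    fitted sector at near-zero elevation. azimuth_deg: beam azimuths;
    vr: measured radial velocities. Illustrative only.
    """
    az = np.radians(azimuth_deg)
    # design matrix: each radial velocity is a projection of (u, v)
    A = np.column_stack([np.sin(az), np.cos(az)])
    (u, v), *_ = np.linalg.lstsq(A, vr, rcond=None)
    return u, v
```

The constant-velocity assumption is exactly why the study partitions scans first: mixing event and background gates in one fit violates it.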

Open access
Douglas Cahl, George Voulgaris, and Lynn Leonard

Abstract

We assess the performance of three different algorithms for estimating surface ocean currents from two linear-array HF radar systems. The delay-and-sum beamforming algorithm, commonly used with beamforming systems, is compared with two direction-finding algorithms: Multiple Signal Classification (MUSIC) and direction finding using beamforming (Beamscan). A 7-month dataset from two HF radar sites (CSW and GTN) in Long Bay, South Carolina (United States), is used to compare the different methods. The comparison is carried out at three locations (the midpoint along the baseline and two locations with in situ Eulerian current data available) representing different steering angles. Beamforming produces surface current data that show high correlation near the radar boresight (R2 ≥ 0.79). At partially sheltered locations far from the radar boresight directions (59° and 48° for radar sites CSW and GTN, respectively), there is no correlation for CSW (R2 = 0) and the correlation is reduced significantly for GTN (R2 = 0.29). Beamscan performs similarly near the radar boresight (R2 = 0.8 and 0.85 for CSW and GTN, respectively) but better than beamforming far from the radar boresight (R2 = 0.52 and 0.32, respectively). MUSIC’s performance, after significant tuning, is similar near the boresight (R2 = 0.78 and 0.84 for CSW and GTN), worse than Beamscan but better than beamforming far from the boresight (R2 = 0.42 and 0.27, respectively). Comparisons at the midpoint (baseline comparison) show the largest performance difference between methods: beamforming (R2 = 0.01) is the worst performer, followed by MUSIC (R2 = 0.37), while Beamscan (R2 = 0.76) performs best.
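The Beamscan idea, scanning a conventional beamformer over candidate directions and taking the power peak, can be sketched for a uniform linear array (an illustrative sketch assuming half-wavelength element spacing; HF radar processing adds range/Doppler stages not shown here):

```python
import numpy as np

def beamscan_doa(snapshots, d_over_lambda=0.5, angles_deg=None):
    """Conventional (delay-and-sum / Beamscan) DOA sketch for a ULA.

    snapshots: (n_elements, n_snapshots) complex array samples.
    Returns the scan angle (degrees) maximizing beamformer output power.
    Illustrative only; spacing d/lambda = 0.5 is an assumption.
    """
    if angles_deg is None:
        angles_deg = np.arange(-90, 91)
    n = snapshots.shape[0]
    # sample covariance matrix of the array data
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    powers = []
    for th in np.radians(angles_deg):
        # steering vector for a uniform linear array
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(n) * np.sin(th))
        powers.append(np.real(a.conj() @ R @ a))  # output power a^H R a
    return int(angles_deg[int(np.argmax(powers))])
```

MUSIC replaces the power scan with a noise-subspace projection on the same covariance matrix, which is where the tuning mentioned above enters.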

Restricted access
Xiaobo Wu, Guijun Han, Wei Li, Qi Shao, and Lige Cao

Abstract

Variation of the Kuroshio path south of Japan has an important impact on weather, climate, and ecosystems due to its distinct features. Motivated by the growing use of deep learning methods built on neural network architectures in areas where accurate reference data from oceanographic observations and reanalyses are available, we build four deep learning models based on the long short-term memory (LSTM) neural network, combined with the empirical orthogonal function (EOF) decomposition and complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN): the LSTM, EOF–LSTM, CEEMDAN–LSTM, and EOF–CEEMDAN–LSTM. Using these models, we conduct long-range predictions (120 days) of the Kuroshio path south of Japan based on a 50-yr ocean reanalysis and nearly 15 years of satellite altimeter data. We show that the EOF–CEEMDAN–LSTM performs best among the four models, attaining an anomaly correlation coefficient of approximately 0.739 and a root-mean-square error of 0.399° for the 120-day prediction of the Kuroshio path south of Japan. The hindcasts of the EOF–CEEMDAN–LSTM successfully reproduce the observed formation and decay of the Kuroshio large meander during 2004/05, and the formation of the latest large meander in 2017. Finally, we present predictions of the Kuroshio path south of Japan at 120-day lead time, which suggest that the Kuroshio will remain in the large-meander state until November 2022.
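The EOF preprocessing step shared by the EOF–LSTM variants can be sketched via an SVD (an illustrative reduction only; the CEEMDAN stage and the LSTM coupling are not reproduced):

```python
import numpy as np

def eof_decompose(field, n_modes):
    """EOF decomposition sketch via SVD of the anomaly matrix.

    field: (time, space) array. Returns the leading principal-component
    time series, the corresponding spatial EOF patterns, and the
    variance fraction captured by n_modes. Illustrative only.
    """
    anom = field - field.mean(axis=0)          # remove the time mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    pcs = u[:, :n_modes] * s[:n_modes]         # PC time series per mode
    eofs = vt[:n_modes]                        # spatial patterns
    var_frac = float((s[:n_modes] ** 2).sum() / (s ** 2).sum())
    return pcs, eofs, var_frac
```

In an EOF–LSTM pipeline, the network forecasts the low-dimensional PC time series and the field is rebuilt as `pcs @ eofs` plus the mean, which is the dimensionality reduction that makes 120-day training tractable.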

Restricted access