Browse

You are looking at 1–10 of 3,999 items for:

  • Journal of Atmospheric and Oceanic Technology
  • Refine by Access: Content accessible to me
Cameron Bertossa, Tristan L’Ecuyer, Aronne Merrelli, Xianglei Huang, and Xiuhong Chen

Abstract

The Polar Radiant Energy in the Far Infrared Experiment (PREFIRE) will fill a gap in our understanding of polar processes and the polar climate by offering widespread, spectrally resolved measurements through the far-infrared (FIR) with two identical CubeSat spacecraft. While the polar regions are typically difficult for skillful cloud identification due to cold surface temperatures, reflection from bright surfaces, and frequent temperature inversions, the inclusion of the FIR may offer increased spectral sensitivity, allowing for the detection of even thin ice clouds. This study assesses the potential skill, as well as limitations, of a neural network (NN)-based cloud mask using simulated spectra mimicking what the PREFIRE mission will capture. Analysis focuses on the polar regions. Clouds are detected approximately 90% of the time using the derived neural network. The NN’s assigned confidence for whether a scene is “clear” or “cloudy” proves to be a skillful basis for attaching quality flags to predictions. Clouds with higher cloud-top heights are typically more easily detected. Low-altitude clouds over polar surfaces, which are the most difficult for the NN to detect, are still detected over 80% of the time. The FIR portion of the spectrum is found to increase the detection of clear scenes and of mid- to high-altitude clouds. Cloud detection skill improves through the use of the overlapping fields of view produced by the PREFIRE instrument’s sampling strategy. Overlapping fields of view increase accuracy relative to the baseline NN while simultaneously enabling predictions on a sub-FOV scale.
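
As a concrete illustration of the approach described above, the following sketch trains a small feed-forward classifier on placeholder spectra and uses its class probability as a confidence score for quality flags. The network size, placeholder data, and flag threshold are assumptions for illustration only, not the PREFIRE team's configuration.

```python
# Minimal sketch of a neural-network cloud mask with confidence-based
# quality flags. The spectra, labels, network size, and flag threshold
# below are illustrative placeholders only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
spectra = rng.normal(size=(5000, 63))                      # placeholder radiances
labels = (spectra[:, :10].mean(axis=1) > 0).astype(int)    # placeholder clear/cloudy labels

X_train, X_test, y_train, y_test = train_test_split(
    spectra, labels, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# The classifier's cloudy-class probability serves as a confidence score;
# low-confidence scenes receive a reduced-quality flag.
p_cloudy = clf.predict_proba(X_test)[:, 1]
confidence = np.maximum(p_cloudy, 1.0 - p_cloudy)
quality_flag = np.where(confidence > 0.9, "good", "uncertain")

accuracy = (clf.predict(X_test) == y_test).mean()
print(f"detection accuracy: {accuracy:.2f}")
```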

Significance Statement

Clouds play an important role in defining the Arctic and Antarctic climates. The purpose of this study is to explore the potential of never-before systematically measured radiative properties of the atmosphere to aid in the detection of polar clouds, which are traditionally difficult to detect. Satellite measurements of emitted radiation at wavelengths longer than 15 μm, combined with complex machine learning methods, may allow us to better understand the occurrence of various cloud types at both poles. The occurrence of these clouds can determine whether the surface warms or cools, influencing surface temperatures and the rate at which ice melts or refreezes. Understanding the frequencies of these various clouds is increasingly important within the context of our rapidly changing climate.

Open access
Kaiyun Lv, Weifeng Yang, Zhiping Chen, Pengfei Xia, Xiaoxing He, Zhigao Chen, and Tieding Lu

Abstract

Zenith hydrostatic delay (ZHD) is a crucial parameter in Global Navigation Satellite System (GNSS) navigation and positioning and in GNSS meteorology. Because the Saastamoinen ZHD model exhibits relatively large errors over China, improving it for that region is worthwhile. This work first evaluated the Saastamoinen model against reference ZHDs obtained by integrating radiosonde data collected at 73 stations in China from 2012 to 2016. The residuals between the reference values and the Saastamoinen-modeled ZHDs were then calculated, and the correlations between the residuals and meteorological parameters were explored. The continuous wavelet transform was used to identify annual and semiannual signals in the residuals. Because the residuals vary nonlinearly, nonlinear least squares estimation was used to establish an improved ZHD model adapted for China, the China Revised Zenith Hydrostatic Delay (CRZHD) model. The accuracy of the CRZHD model was assessed using radiosonde data and International GNSS Service (IGS) data from 2017; the radiosonde results show that the CRZHD model is superior to the Saastamoinen model, with a 69.6% improvement. At the three IGS stations with continuous meteorological data, the bias and RMSE decrease by 2.7 and 1.5 mm (URUM), 5.9 and 5.3 mm (BJFS), and 9.6 and 8.8 mm (TCMS), respectively. The performance of the CRZHD model for retrieving precipitable water vapor (PWV) was also examined using the 2017 radiosonde data. PWV retrieved with the CRZHD model (CRZHD-PWV) outperforms that retrieved with the Saastamoinen model (SAAS-PWV), with precision improved by 44.4%. For CRZHD-PWV, 89.0% of bias values fall between −1 and 1 mm and 95.9% of RMSE values fall between 0 and 2 mm, compared with 46.6% and 58.9%, respectively, for SAAS-PWV.
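
For context, the sketch below shows the standard Saastamoinen ZHD formula and a nonlinear least-squares fit of annual plus semiannual harmonics to a synthetic residual series, in the spirit of the CRZHD construction. The harmonic amplitudes, phases, and residuals are invented; the paper's fitted coefficients are not reproduced here.

```python
# Sketch: Saastamoinen ZHD plus an annual/semiannual harmonic fit to its
# residuals against a radiosonde-integrated reference. All numerical values
# below are illustrative, not the CRZHD coefficients.
import numpy as np
from scipy.optimize import curve_fit

def saastamoinen_zhd(pressure_hpa, lat_rad, height_km):
    """Standard Saastamoinen zenith hydrostatic delay in metres."""
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 0.00028 * height_km)

def harmonic_model(doy, a0, a1, p1, a2, p2):
    """Mean plus annual and semiannual harmonics of the ZHD residual (mm)."""
    return (a0
            + a1 * np.cos(2 * np.pi * (doy - p1) / 365.25)
            + a2 * np.cos(4 * np.pi * (doy - p2) / 365.25))

# Example ZHD for a mid-latitude site near sea level.
zhd_m = saastamoinen_zhd(1013.25, np.deg2rad(30.0), 0.1)
print(f"Saastamoinen ZHD: {zhd_m:.4f} m")

# Synthetic residual series: reference ZHD minus Saastamoinen ZHD.
doy = np.arange(1, 366, dtype=float)
rng = np.random.default_rng(1)
residual_mm = (3.0 + 6.0 * np.cos(2 * np.pi * (doy - 30) / 365.25)
               + 2.0 * np.cos(4 * np.pi * (doy - 90) / 365.25)
               + rng.normal(scale=1.0, size=doy.size))

popt, _ = curve_fit(harmonic_model, doy, residual_mm,
                    p0=[0.0, 5.0, 0.0, 1.0, 0.0])
corrected_residual = residual_mm - harmonic_model(doy, *popt)
print("fitted harmonic parameters:", np.round(popt, 2))
```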

Significance Statement

Zenith hydrostatic delay (ZHD) is one of the most important parameters in Global Navigation Satellite System (GNSS) navigation and positioning and in GNSS meteorology; because it is stable, it can be derived from a precise ZHD model. This research established an improved ZHD model for China to obtain accurate ZHD, which is a prerequisite for precise precipitable water vapor (PWV) retrieval. Accurate PWV, in turn, is useful for analyzing regional changes in precipitation, forecasting short-term rainfall, and monitoring the climate.

Restricted access
N. V. Zilberman, M. Scanderbeg, A. R. Gray, and P. R. Oke

Abstract

Global estimates of absolute velocities can be derived from Argo float trajectories during drift at parking depth. A new velocity dataset, developed and maintained at the Scripps Institution of Oceanography, is presented, based on all Core, Biogeochemical, and Deep Argo float trajectories collected between 2001 and 2020. Discrepancies between velocity estimates from the Scripps dataset and other existing products, including YoMaHa and ANDRO, are associated with quality control criteria, as well as with the selected parking depth and cycle time. In the Scripps product, over 1.3 million velocity estimates are used to reconstruct a time-mean velocity field for the 800–1200 dbar layer at 1° horizontal resolution. This dataset provides a benchmark for evaluating how well the BRAN2020 reanalysis represents the observed variability of absolute velocities and offers a compelling opportunity for improved characterization and representation of subsurface currents in forecast and reanalysis systems.
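
A minimal sketch of the underlying velocity estimate: the displacement between the surface fixes bracketing one drift cycle, converted to metres and divided by the elapsed time. The positions and cycle length are invented, and the real product additionally corrects for surface drift and applies extensive quality control before gridding at 1°.

```python
# Sketch: estimating a parking-depth velocity from the positions and times
# bracketing one Argo drift cycle (last fix before descent, first fix after
# ascent). The example fixes are invented for illustration.
import numpy as np

EARTH_RADIUS_M = 6371000.0

def parking_velocity(lon0, lat0, t0, lon1, lat1, t1):
    """Mean eastward/northward velocity (m/s) between two surface fixes.

    (lon0, lat0, t0): position (degrees) and time (s) before descent.
    (lon1, lat1, t1): position (degrees) and time (s) after ascent.
    """
    mean_lat = np.deg2rad(0.5 * (lat0 + lat1))
    dx = np.deg2rad(lon1 - lon0) * EARTH_RADIUS_M * np.cos(mean_lat)
    dy = np.deg2rad(lat1 - lat0) * EARTH_RADIUS_M
    dt = t1 - t0
    return dx / dt, dy / dt

# Example: a 10-day cycle with a small northeastward displacement.
u, v = parking_velocity(150.00, -40.00, 0.0,
                        150.10, -39.95, 10 * 86400.0)
print(f"u = {u * 100:.1f} cm/s, v = {v * 100:.1f} cm/s")
```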

Significance Statement

The aim of this study is to provide observation-based estimates of the large-scale, subsurface ocean circulation. We exploit the drift of autonomous profiling floats to carefully isolate the inferred circulation at the parking depth, and combine observations from over 11 000 floats, sampling between 2001 and 2020, to deliver a new dataset with unprecedented accuracy. The new estimates of subsurface currents are suitable for assessing global models, reanalyses, and forecasts, and for constraining ocean circulation in data-assimilating models.

Restricted access
Bernadette M. Sloyan, Christopher C. Chapman, Rebecca Cowley, and Anastase A. Charantonis

Abstract

In situ observations are vital to improving our understanding of the variability and dynamics of the ocean. A critical component of the ocean circulation is the strong, narrow, and highly variable western boundary currents. Ocean moorings that extend from the seafloor to the surface remain the most effective and efficient method to fully observe these currents. For various reasons, mooring instruments may not provide continuous records. Here we assess the application of the Iterative Completion Self-Organizing Maps (ITCOMPSOM) machine learning technique to fill observational data gaps in a 7.5 yr time series of the East Australian Current. The method was validated by withholding parts of fully known profiles and reconstructing them. For 20% random withholding of known velocity data, validation statistics of the u- and v-velocity components are R² coefficients of 0.70 and 0.88 and root-mean-square errors of 0.038 and 0.064 m s−1, respectively. Withholding 100 days of known velocity profiles over a depth range between 60 and 700 m yields mean profile residual differences between true and predicted u and v velocities of 0.009 and 0.02 m s−1, respectively. The ITCOMPSOM also reproduces the known velocity variability. For 20% withholding of salinity and temperature data, root-mean-square errors of 0.04 and 0.38°C, respectively, are obtained. The ITCOMPSOM validation statistics are significantly better than those obtained when standard data-filling methods are used. We suggest that machine learning techniques can be an appropriate method to fill missing data and enable production of observation-derived data products.
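
A simplified sketch of the iterative-completion idea, using k-means prototypes as a stand-in for a self-organizing map codebook: gaps are initialized, prototypes are fitted to the completed profiles, the gaps are refilled from each profile's best-matching prototype, and the process repeats. This illustrates the concept only; it is not the ITCOMPSOM implementation, and the synthetic profiles and withholding fraction are invented.

```python
# Sketch of iterative completion of gappy profiles using clustering
# prototypes (a stand-in for a SOM codebook). Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def iterative_completion(profiles, n_prototypes=20, n_iter=10, seed=0):
    """Fill NaN gaps in a (n_profiles x n_levels) array of profiles."""
    filled = profiles.copy()
    missing = np.isnan(filled)
    # Start from column (depth-level) means.
    col_means = np.nanmean(profiles, axis=0)
    filled[missing] = np.take(col_means, np.where(missing)[1])

    for _ in range(n_iter):
        km = KMeans(n_clusters=n_prototypes, n_init=5, random_state=seed)
        labels = km.fit_predict(filled)
        # Replace only the originally missing entries with prototype values.
        prototypes = km.cluster_centers_[labels]
        filled[missing] = prototypes[missing]
    return filled

# Example with synthetic profiles and 20% of values withheld.
rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, np.pi, 50))[None, :] * rng.uniform(0.5, 1.5, (200, 1))
data = truth.copy()
mask = rng.random(data.shape) < 0.2
data[mask] = np.nan

recon = iterative_completion(data)
rmse = np.sqrt(np.mean((recon[mask] - truth[mask]) ** 2))
print(f"reconstruction RMSE on withheld values: {rmse:.3f}")
```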

Significance Statement

Moored observational time series of ocean boundary currents monitor the full-depth variability and change of these dynamic currents and are used to understand their influence on large-scale ocean climate, regional shelf–coastal processes, extreme weather, and seasonal climate. In this study we apply a machine learning technique, Iterative Completion Self-Organizing Maps (ITCOMPSOM), to fill data gaps in a boundary current moored observational data record. The ITCOMPSOM provides an improved method to fill data gaps in the mooring record and if applied to other observational data records may improve the reconstruction of missing data. The derived gridded data product should improve the accessibility and potentially increase the use of these data.

Open access
Andre Amador, Sophia T. Merrifield, and Eric J. Terrill

Abstract

The present work details the measurement capabilities of Wave Glider autonomous surface vehicles (ASVs) for research-grade meteorology, wave, and current data. Methodologies for motion compensation are described and tested, including a correction technique to account for Doppler shifting of the wave signal. Wave Glider measurements are evaluated against observations obtained from World Meteorological Organization (WMO)-compliant moored buoy assets located off the coast of Southern California. The validation spans a range of field conditions and includes multiple deployments to assess the quality of vehicle-based observations. Results indicate that Wave Gliders can accurately measure wave spectral information, bulk wave parameters, water velocities, bulk winds, and other atmospheric variables with the application of appropriate motion compensation techniques. Measurement errors were found to be comparable to those from reference moored buoys and within WMO operational requirements. The findings of this study represent a step toward enabling the use of ASV-based data for the calibration and validation of remote observations and assimilation into forecast models.
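
One piece of the motion compensation, sketched under deep-water dispersion: a wave frequency observed from a moving platform is Doppler shifted, and the intrinsic frequency can be recovered by root-finding once the platform speed along the wave propagation direction is known. The sign convention, speed, and frequency values below are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch: mapping an observed (encounter) wave frequency from a moving
# platform back to the intrinsic frequency, assuming deep-water dispersion
# sigma = sqrt(g k) and encounter relation omega_obs = sigma - k * u_along,
# where u_along is the platform speed component along wave propagation
# (positive when moving with the waves). Values are illustrative only.
import numpy as np
from scipy.optimize import brentq

G = 9.81  # gravitational acceleration, m/s^2

def intrinsic_frequency(f_obs_hz, u_along_wave):
    """Intrinsic frequency (Hz) from encounter frequency and along-wave speed."""
    omega_obs = 2.0 * np.pi * f_obs_hz

    def residual(k):
        return np.sqrt(G * k) - k * u_along_wave - omega_obs

    k = brentq(residual, 1e-6, 10.0)        # wavenumber, rad/m
    return np.sqrt(G * k) / (2.0 * np.pi)

# Example: 0.12 Hz encounter frequency, platform heading into the waves
# at 0.5 m/s (so the along-wave speed component is negative).
print(f"intrinsic frequency: {intrinsic_frequency(0.12, -0.5):.3f} Hz")
```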

Restricted access
Jared W. Marquis, Erica K. Dolinar, Anne Garnier, James R. Campbell, Benjamin C. Ruston, Ping Yang, and Jianglong Zhang

Abstract

The assimilation of hyperspectral infrared sounder (HIS) observations from Earth-observing satellites has become vital to numerical weather prediction, yet this assimilation is predicated on the assumption of clear-sky observations. Using collocated assimilated observations from the Atmospheric Infrared Sounder (AIRS) and the Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP), it is found that nearly 7.7% of HIS observations assimilated by the Naval Research Laboratory Variational Data Assimilation System–Accelerated Representer (NAVDAS-AR) are contaminated by cirrus clouds. These contaminating clouds primarily exhibit visible cloud optical depths at 532 nm (COD532nm) below 0.10 and cloud-top temperatures between 240 and 185 K, as expected for cirrus clouds. These contamination statistics are consistent with simulations from the Radiative Transfer for TOVS (RTTOV) model, which show that a cirrus cloud with a COD532nm of 0.10 imparts brightness temperature differences below the typical innovation thresholds used by NAVDAS-AR. Using a one-dimensional variational (1DVar) assimilation system coupled with RTTOV for forward and gradient radiative transfer, the analysis temperature and moisture impact of assimilating cirrus-contaminated HIS observations is estimated. Large differences of 2.5 K in temperature and 11 K in dewpoint are possible for a cloud with a COD532nm of 0.10 and a cloud-top temperature of 210 K. When normalized by the contamination statistics, global differences of nearly 0.11 K in temperature and 0.34 K in dewpoint are possible, with temperature and dewpoint tropospheric root-mean-square differences (RMSDs) as large as 0.06 and 0.11 K, respectively. While in isolation these global estimates are not particularly concerning, differences are likely much larger in regions with high cirrus frequency.
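
For reference, a minimal linear 1DVar analysis update of the form x_a = x_b + BH^T(HBH^T + R)^(-1)(y − Hx_b), with tiny illustrative matrices; the study's system couples 1DVar with RTTOV forward and gradient radiative transfer, none of which is reproduced here, and the matrices below are placeholders.

```python
# Minimal sketch of a linearized one-dimensional variational (1DVar) update.
# All matrices and values are illustrative placeholders.
import numpy as np

def one_dvar_update(x_b, y, H, B, R):
    """Linear 1DVar analysis update: x_a = x_b + K (y - H x_b)."""
    innovation = y - H @ x_b
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return x_b + K @ innovation

# Toy example: two-level temperature background, one radiance-like observation.
x_b = np.array([250.0, 270.0])        # background state (K)
H = np.array([[0.6, 0.4]])            # linearized observation operator
B = np.diag([1.5 ** 2, 1.0 ** 2])     # background error covariance
R = np.array([[0.5 ** 2]])            # observation error covariance
y = H @ x_b + 1.2                     # observation with a +1.2 K innovation

print("analysis state:", one_dvar_update(x_b, y, H, B, R))
```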

Open access
Duncan C. Wheeler and Sarah N. Giddings

Abstract

This manuscript presents several improvements to methods for despiking and measuring turbulent dissipation with acoustic Doppler velocimeters (ADVs). These include an improved inertial subrange fitting algorithm relevant to all experimental conditions, as well as other modifications designed to address failures of existing methods in the presence of large infragravity (IG) frequency bores and other intermittent, nonlinear processes. We provide a modified despiking algorithm, wavenumber spectrum calculation algorithm, and inertial subrange fitting algorithm that together produce reliable dissipation measurements, representative of turbulence over a 30 min interval, in the presence of IG frequency bores. We use a semi-idealized model to show that our spectrum calculation approach works substantially better than existing wave correction equations that rely on Gaussian-based velocity distributions. We also find that our inertial subrange fitting algorithm provides more robust results than existing approaches that rely on identifying a single best fit, and that this improvement is independent of environmental conditions. Finally, we perform a detailed error analysis to assist in future use of these algorithms and identify areas that need careful consideration. This error analysis uses error distribution widths to find, with 95% confidence, an average systematic uncertainty of ±15.2% and statistical uncertainty of ±7.8% for our final dissipation measurements. In addition, we find that small changes to ADV despiking approaches can lead to large uncertainties in turbulent dissipation and that further work is needed to ensure more reliable despiking algorithms.
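
A sketch of the core dissipation estimate: fit the Kolmogorov −5/3 inertial subrange of a wavenumber spectrum and invert for ε. The Kolmogorov constant, wavenumber band, and synthetic spectrum below are illustrative assumptions, and the paper's algorithm additionally handles despiking, wave contamination, and selection among candidate fits.

```python
# Sketch: estimating turbulent dissipation by fitting the inertial subrange,
# E(k) = C * eps^(2/3) * k^(-5/3). The constant C (roughly 0.5 for a 1D
# longitudinal spectrum, component-dependent), the band, and the synthetic
# spectrum are illustrative only.
import numpy as np

C_1D = 0.5  # approximate 1D Kolmogorov constant (assumption)

def dissipation_from_spectrum(k, E, k_min, k_max, C=C_1D):
    """Estimate epsilon (m^2/s^3) over the wavenumber band [k_min, k_max]."""
    band = (k >= k_min) & (k <= k_max)
    # Compensated spectrum E * k^(5/3) should be flat and equal C * eps^(2/3).
    level = np.mean(E[band] * k[band] ** (5.0 / 3.0))
    return (level / C) ** 1.5

# Synthetic spectrum with eps = 1e-6 m^2/s^3 plus multiplicative noise.
rng = np.random.default_rng(3)
k = np.logspace(-1, 2, 200)                        # wavenumber, rad/m
eps_true = 1e-6
E = C_1D * eps_true ** (2.0 / 3.0) * k ** (-5.0 / 3.0)
E *= rng.lognormal(mean=0.0, sigma=0.2, size=k.size)

print(f"estimated epsilon: {dissipation_from_spectrum(k, E, 1.0, 50.0):.2e}")
```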

Significance Statement

Turbulent mixing is a process where the random movement of water can lead to water with different properties irreversibly mixing. This process is important to understand in estuaries because the extent of mixing of freshwater and saltwater inside an estuary alters its overall circulation and thus affects ecosystem health and the distribution of pollution or larvae in an estuary, among other things. Existing approaches to measuring turbulent dissipation, an important parameter for evaluating turbulent mixing, make assumptions that fail in the presence of certain processes, such as long-period, breaking waves in shallow estuaries. We evaluate and improve data analysis techniques to account for such processes and accurately measure turbulent dissipation in shallow estuaries. Some of our improvements are also relevant to a broad array of coastal and oceanic conditions.

Restricted access
Steven M. Martinaitis, Scott Lincoln, David Schlotzhauer, Stephen B. Cocks, and Jian Zhang

Abstract

There are multiple reasons why a precipitation gauge might report erroneous observations. Systematic errors relating to the measuring apparatus, or resulting from observational limitations due to environmental factors (e.g., wind-induced undercatch or wetting losses), can be quantified and potentially corrected within a gauge dataset. Other challenges arise from instrumentation malfunctions, such as clogging, poor siting, and software issues. Instrumentation malfunctions are challenging to quantify because most gauge quality control (QC) schemes focus on the current observation and not on whether the gauge has an inherent issue that would likely require maintenance. This study focuses on the development of a temporal QC scheme to identify the likelihood of an instrumentation malfunction through the examination of hourly gauge observations and associated QC designations. Analyzed gauge performance was classified into one of three temporal QC categories: GOOD, SUSP, or BAD. The temporal QC scheme also provides an additional designation when a significant percentage of gauge observations and their hourly QC results were influenced by meteorological factors (e.g., the inability to properly measure winter precipitation). Findings showed a consistent percentage of gauges classified as BAD through the running 7-day (2.9%) and 30-day (4.4%) analyses. Verification of select gauges demonstrated how the temporal QC algorithm captured different forms of instrument-based systematic errors that influenced gauge observations. Results from this study can benefit the identification of degraded performance at gauge sites prior to scheduled routine maintenance.
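
A minimal sketch of a running-window temporal QC classification of the kind described above, in which the fraction of rejected hourly observations in a trailing window maps to GOOD, SUSP, or BAD. The window length and thresholds are hypothetical, not the paper's operational values.

```python
# Sketch: classify a gauge from the fraction of hourly QC rejections in a
# trailing window. Window length and thresholds are hypothetical.
import numpy as np

def classify_gauge(hourly_qc_bad, window_hours=7 * 24,
                   susp_frac=0.10, bad_frac=0.25):
    """Return "GOOD", "SUSP", or "BAD" for the most recent window.

    hourly_qc_bad: boolean array, True where hourly QC rejected the gauge.
    """
    recent = hourly_qc_bad[-window_hours:]
    frac_bad = np.mean(recent)
    if frac_bad >= bad_frac:
        return "BAD"
    if frac_bad >= susp_frac:
        return "SUSP"
    return "GOOD"

# Example: a gauge whose observations were rejected about 30% of the last 7 days.
rng = np.random.default_rng(4)
qc_bad = rng.random(7 * 24) < 0.30
print(classify_gauge(qc_bad))
```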

Significance Statement

This study proposes a scheme that quality controls rain gauges based on their performance over a running history of hourly observational data and quality control flags, in order to identify gauges that likely have an instrumentation malfunction. Findings show the potential of identifying gauges that are impacted by issues such as clogging, software errors, and poor gauge siting. This study also highlights the challenges of distinguishing between erroneous gauge observations caused by an instrumentation malfunction and erroneous observations resulting from an environmental factor that influences the gauge observation or its quality control classification, such as winter precipitation or virga.

Restricted access
Konstantin G. Rubinshtein and Inna M. Gubenko

Abstract

The article compares four lightning detection networks, provides a brief overview of the assimilation of lightning observations in numerical weather forecasting, and describes and illustrates the procedure used to assimilate lightning location and time in numerical weather forecasts. Absolute errors in 2-m air temperature, 2-m humidity, near-surface air pressure, 10-m wind speed, and precipitation are evaluated for 10 forecasts made in 2020 for days on which intense thunderstorms were observed in the Krasnodar region of Russia. Average errors over the forecast area at 24, 48, and 72 h decreased for all parameters when assimilation of observed lightning data was used. The predicted precipitation field configuration and intensity also became closer to the reference, both in areas where thunderstorms were observed and in areas where no thunderstorms occurred.

Restricted access
Katrina S. Virts and William J. Koshak

Abstract

Performance assessments of the Geostationary Lightning Mapper (GLM) are conducted via comparisons with independent observations from both satellite-based sensors and ground-based lightning detection (reference) networks. A key limitation of this evaluation is that the performance of the reference networks is both imperfect and imperfectly known, such that the true performance of GLM can only be estimated. Key GLM performance metrics such as detection efficiency (DE) and false alarm rate (FAR) retrieved through comparison with reference networks are affected by those networks’ own DE, FAR, and spatiotemporal accuracy, as well as the flash matching criteria applied in the analysis. This study presents a Monte Carlo simulation–based inversion technique that is used to quantify how accurately the reference networks can assess GLM performance, as well as suggest the optimal matching criteria for estimating GLM performance. This is accomplished by running simulations that clarify the specific effect of reference network quality (i.e., DE, FAR, spatiotemporal accuracy, and the geographical patterns of these attributes) on the retrieved GLM performance metrics. Baseline reference network statistics are derived from the Earth Networks Global Lightning Network (ENGLN) and the Global Lightning Dataset (GLD360). Geographic simulations indicate that the retrieved GLM DE is underestimated, with absolute errors ranging from 11% to 32%, while the retrieved GLM FAR is overestimated, with absolute errors of approximately 16% to 44%. GLM performance is most severely underestimated in the South Pacific. These results help quantify and bound the actual performance of GLM and the attendant uncertainties when comparing GLM to imperfect reference networks.
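
A toy Monte Carlo illustration of the retrieval bias being quantified: simulate true flashes, imperfect GLM and reference-network detections, then compute the DE and FAR that would be retrieved from the comparison. The DE/FAR values and flash counts are invented, and the study's simulations additionally model spatiotemporal matching criteria and geographic variability.

```python
# Sketch: how an imperfect reference network biases the retrieved GLM
# detection efficiency (DE) and false alarm rate (FAR). All values invented.
import numpy as np

rng = np.random.default_rng(5)

N_TRUE = 100000                      # true flashes in the simulation
GLM_DE, GLM_FAR = 0.80, 0.05         # assumed "true" GLM performance
REF_DE, REF_FAR = 0.70, 0.10         # assumed reference network performance

# Detections of the true flashes by each system (independent by assumption).
glm_hits = rng.random(N_TRUE) < GLM_DE
ref_hits = rng.random(N_TRUE) < REF_DE

# False detections, sized so that FAR = false / total detections.
glm_false = int(glm_hits.sum() * GLM_FAR / (1 - GLM_FAR))
ref_false = int(ref_hits.sum() * REF_FAR / (1 - REF_FAR))

# Retrieved DE: fraction of reference detections that GLM also detected
# (false reference detections are never matched by true GLM detections).
retrieved_de = glm_hits[ref_hits].sum() / (ref_hits.sum() + ref_false)

# Retrieved FAR: fraction of GLM detections not matched by the reference.
matched_glm = (glm_hits & ref_hits).sum()
total_glm = glm_hits.sum() + glm_false
retrieved_far = 1 - matched_glm / total_glm

print(f"retrieved DE {retrieved_de:.2f} vs true {GLM_DE:.2f}")
print(f"retrieved FAR {retrieved_far:.2f} vs true {GLM_FAR:.2f}")
```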

Open access