Journal of Atmospheric and Oceanic Technology
John A. Kluge, Alexander V. Soloviev, Cayla W. Dean, Geoffrey K. Morrison, and Brian K. Haus

Abstract

A magnetic signature is created by secondary magnetic field fluctuations that arise when seawater moves through Earth’s magnetic field. A laboratory experiment was conducted at the Surge Structure Atmosphere Interaction (SUSTAIN) facility to measure the magnetic signature of surface waves using a differential method: a pair of magnetometers, separated horizontally by one-half wavelength, was placed at several locations on the outer tank walls. This technique significantly reduced the extraneous magnetic distortions detected simultaneously by both sensors and doubled the magnetic signal of the surface waves. Accelerometer measurements and local gradients were used to identify magnetic noise produced by tank vibrations. The experiment used 4-m-long waves with a 0.56-Hz frequency and a 0.1-m amplitude. Freshwater and saltwater experiments were completed to determine the magnetic difference generated by the difference in conductivity, and tests with an empty tank were conducted to identify the noise of the facility. Spectral analysis of the magnetic signal showed a primary peak at the wave frequency (0.56 Hz) and less pronounced higher-frequency harmonics, caused by the nonlinearity of shallow-water surface waves. The magnetic noise induced by the wavemaker and related vibrations peaked around 0.3 Hz and was removed by filtering. These results indicate that the magnetic signature produced by surface waves was an order of magnitude larger than traditional model predictions. The discrepancy may be due to the magnetic permeability difference between water and air, which is not considered in the traditional model.
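The differential technique described above can be sketched with synthetic signals (this is an illustration, not the authors' processing code): two sensors half a wavelength apart see the wave-induced field in antiphase, so differencing them doubles the wave signal while common-mode noise, such as the 0.3-Hz wavemaker contribution, cancels.

```python
import numpy as np

fs = 50.0                      # sample rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)   # 60-s synthetic record
f_wave = 0.56                  # wave frequency from the experiment (Hz)

wave = 0.2 * np.sin(2 * np.pi * f_wave * t)    # wave-induced field at sensor 1
noise = 0.5 * np.sin(2 * np.pi * 0.3 * t)      # common-mode noise (e.g., wavemaker)

b1 = wave + noise              # sensor 1
b2 = -wave + noise             # sensor 2: half a wavelength away, so antiphase
diff = b1 - b2                 # equals 2 * wave; common-mode noise cancels

# The spectrum of the differenced signal peaks at the wave frequency.
spec = np.abs(np.fft.rfft(diff))
freqs = np.fft.rfftfreq(len(diff), 1 / fs)
peak_freq = freqs[np.argmax(spec)]
```

The same cancellation removes any distortion detected simultaneously by both sensors, which is why the method suppresses facility noise while reinforcing the wave signal.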

Open access
Elżbieta Lasota, Martin Slavchev, Guergana Guerova, Witold Rohm, and Jan Kapłon

Abstract

Monitoring atmospheric conditions that lead to severe weather events is critical to their timely and accurate prediction and can help prevent large economic losses. Bulgaria, located in southeastern Europe, has the highest mean number of thunderstorms and hailstorms. These events generally occur between April and September, with a peak in July. In this study, both radio occultation (RO) and ground-based observations from the Global Navigation Satellite Systems (GNSS) were used to study two severe hailstorms that occurred in 2014 and 2019. In both storms, a cold upper-air pool was detected in addition to a large specific humidity anomaly between 2 and 6 km. In the hailstorm that occurred in July 2014, there was an RO temperature anomaly between 10 and 14 km as well as a positive specific humidity anomaly between 4 and 6 km. The integrated vapor transport (IVT) reanalysis from ERA5 indicated that the high specific humidity over the Mediterranean could be tracked to an atmospheric river over the North Atlantic, which was connected to a tropical cyclone. In the hailstorm that occurred in May 2019, elevated IVT values were observed before the storm. During this storm, a negative temperature anomaly peak was observed in the RO profile at 11.3 km as well as a positive specific humidity anomaly between 2 and 4.5 km. The WRF Model and the ERA5 dataset could reproduce the temperature profiles for both storms relatively well; however, they tended to underestimate specific humidity. The RO profiles were complemented by ground-based GNSS tropospheric delays with high temporal resolution. The evaluation of the WRF Model against ground-based GNSS tropospheric products revealed a time delay between the modeled and observed developments of both hailstorms.

Restricted access
Laur Ferris, Donglai Gong, Sophia Merrifield, and Louis St. Laurent

Abstract

Finescale strain parameterization (FSP) of the turbulent kinetic energy dissipation rate has become a widely used method for observing ocean mixing, solving a coverage problem where direct turbulence measurements are absent but CTD profiles are available. This method can offer significant value, but there are limitations to its broad application to the global ocean. FSP often fails to produce reliable results in frontal zones where temperature–salinity (T/S) intrusive features contaminate the CTD strain spectrum, as well as where the aspect ratio of the internal wave spectrum is known to vary greatly with depth, as frequently occurs in the Southern Ocean. In this study we use direct turbulence measurements from the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES) and glider microstructure measurements from the Autonomous Sampling of Southern Ocean Mixing (AUSSOM) project to show that FSP can have large biases (compared to direct turbulence measurement) below the mixed layer when physics associated with T/S fronts is meaningfully present. We propose that the FSP methodology be modified to 1) include a density ratio (Rρ)-based data exclusion rule to avoid contamination by double-diffusive instabilities in frontal zones such as the Antarctic Circumpolar Current, the Gulf Stream, and the Kuroshio, and 2) conduct (or leverage available) microstructure measurements of the depth-varying shear-to-strain ratio Rω(z) prior to performing FSP in each dynamically unique region of the global ocean.
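The proposed density-ratio exclusion rule can be illustrated with a short sketch. The expansion and contraction coefficients and the exclusion thresholds below are representative textbook values chosen for illustration, not the authors' settings:

```python
import numpy as np

alpha = 2.0e-4   # thermal expansion coefficient (1/K), representative value
beta = 7.5e-4    # haline contraction coefficient (1/psu), representative value

def density_ratio(dTdz, dSdz):
    """Density ratio R_rho = (alpha * dT/dz) / (beta * dS/dz)."""
    return (alpha * dTdz) / (beta * dSdz)

def exclude_segment(dTdz, dSdz, lo=1.0, hi=2.0):
    """Flag a CTD segment for exclusion from FSP when R_rho falls in an
    (illustrative) salt-fingering-favorable range 1 < R_rho < 2, where
    double-diffusive instability can contaminate the strain spectrum."""
    r = density_ratio(dTdz, dSdz)
    return lo < r < hi

# Warm, salty water over cold, fresh water with moderate R_rho is
# salt-fingering favorable and would be excluded.
print(exclude_segment(dTdz=0.02, dSdz=0.004))  # True
```

A screening step of this kind would run over each half-overlapping depth segment of a profile before the strain spectrum is computed.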

Significance Statement

Internal waves travel through the ocean and collide, turbulently mixing the interior ocean and homogenizing its waters. In the absence of direct turbulence measurements, oceanographers count the ripples associated with these internal waves and use them to estimate the amount of turbulence that their collisions will produce. In this paper we show that the ripples in temperature and salinity that naturally occur at sharp fronts masquerade as internal waves and trick oceanographers into thinking there is up to 100 000 000 times more turbulence than there actually is in these frontal regions.

Open access
Victor Alari, Jan-Victor Björkqvist, Valdur Kaldvee, Kristjan Mölder, Sander Rikka, Anne Kask-Korb, Kaimo Vahter, Siim Pärt, Nikon Vidjajev, and Hannes Tõnisson

Abstract

Wave buoys are a popular choice for measuring sea surface waves, and there is increasing interest in wave information from ice-covered water bodies. Such measurements require cost-effective, easily deployable, and robust devices. We have developed LainePoiss (LP), an ice-resistant and lightweight wave buoy. It calculates the surface elevation by double integrating data from its microelectromechanical system (MEMS) inertial sensors and transmits wave parameters and spectra in real time over cellular or satellite networks. LP was validated through 1) sensor tests, 2) wave tank experiments, 3) a field validation against a Directional Waverider, 4) an intercomparison of several buoys in the field, and 5) field measurements in the Baltic Sea marginal ice zone. These extensive field and laboratory tests confirmed that LP performed well (e.g., the bias of Hm0 in the field was 0.01 m, with a correlation of 0.99 and a scatter index of 8%; the mean absolute deviation of the mean wave direction was 7°). LP was also deployed with an unmanned aerial vehicle, and we present our experience of such operations. One issue that requires further development is the presence of low-frequency artifacts caused by the dynamic noise of the gyroscope; for now, a correction method is presented to deal with this noise.
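The double-integration step can be sketched in the frequency domain with synthetic data. Dividing the acceleration spectrum by -(2πf)² integrates twice, and a high-pass cutoff suppresses the low-frequency noise that double integration otherwise amplifies; the cutoff value here is illustrative, not LP's actual filter setting:

```python
import numpy as np

def acceleration_to_elevation(acc, fs, f_cut=0.05):
    """Convert vertical acceleration to surface elevation by dividing the
    spectrum by -(2*pi*f)^2, zeroing noisy low-frequency bins."""
    n = len(acc)
    spec = np.fft.rfft(acc)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    with np.errstate(divide="ignore", invalid="ignore"):
        elev_spec = spec / -((2 * np.pi * freqs) ** 2)
    elev_spec[freqs < f_cut] = 0.0   # high-pass: drop DC and noisy low bins
    return np.fft.irfft(elev_spec, n)

# Synthetic check: a 0.56-Hz sinusoidal elevation of 0.1-m amplitude.
fs, f0, a0 = 10.0, 0.56, 0.1
t = np.arange(0, 100, 1 / fs)                 # 56 full cycles in 100 s
eta_true = a0 * np.sin(2 * np.pi * f0 * t)
acc = -a0 * (2 * np.pi * f0) ** 2 * np.sin(2 * np.pi * f0 * t)  # d2(eta)/dt2
eta = acceleration_to_elevation(acc, fs)
```

In a real buoy the accelerometer record must first be rotated into the vertical using the gyroscope attitude, which is where the low-frequency gyroscope noise mentioned above enters.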

Significance Statement

Operational wave buoys are large and therefore expensive and inconvenient to deploy. Many commercially available devices cannot measure short waves and have not been tested in ice. Our purpose was to develop an affordable wave buoy that is lightweight, ice resistant, and capable of measuring short waves, and that has a longer operating life than existing research buoys. The buoy is easily deployed from a small boat or even an industrial drone, reducing operating costs. It is accurate and captures waves that are too short for operational wave buoys, which is relevant for coastal planning in, e.g., archipelagos and narrow fjords. We measured waves in ice in the Baltic Sea and plan to extend these measurements to Antarctica.

Open access
Alexander Ryzhkov and John Krause

Abstract

A novel polarimetric radar algorithm for melting-layer (ML) detection and determination of its height has been developed and tested on a large number of cold-season weather events. The algorithm uses radial profiles of the cross-correlation coefficient (ρhv, or CC) at the lowest elevation angles (<5°–6°). The effects of beam broadening on the spatial distribution of CC are taken into account via theoretical simulations of the radial profiles of CC, assuming intrinsic vertical profiles of the polarimetric radar variables within MLs of varying height and depth. The model radial profiles of CC and their key parameters are stored in lookup tables and compared with the measured CC profiles. Matching the model and measured CC radial profiles allows the algorithm to determine the “true” heights of the top and bottom of the ML, Ht and Hb, at distances up to 150 km from the radar. Integrating the CC information from all available antenna elevations makes it possible to produce accurate maps of Ht and Hb over large areas of radar coverage, in contrast to previous ML detection methods, including the existing algorithm implemented on the U.S. WSR-88D network. The initial version of the algorithm has been implemented in C++ and tested on a multitude of cold-season weather events characterized by a low ML with different degrees of spatial nonuniformity, including cases with sharp frontal boundaries and rain–snow transitions. The new ML detection algorithm (MLDA) exhibits robust performance, with good spatial and temporal continuity, and its ML designations are generally consistent with those obtained from the regional model and from the quasi-vertical profile (QVP) methodology.
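The lookup-table matching step can be illustrated schematically. The toy model below represents each candidate ML geometry by a CC radial profile whose depression shifts in range, and selects the candidate minimizing the RMS misfit to the measured profile; the profile shapes, heights, and noise level are invented for illustration and are not the MLDA's actual tables:

```python
import numpy as np

def best_match(measured, lookup):
    """Return the (Ht, Hb) key whose model CC profile has the smallest
    RMS difference from the measured CC radial profile."""
    def rms(key):
        return np.sqrt(np.mean((measured - lookup[key]) ** 2))
    return min(lookup, key=rms)

r = np.linspace(0, 150, 151)   # range from the radar (km)

def model_profile(center_km):
    """Toy CC radial profile: a depression whose range position depends on
    the assumed ML height (lower ML -> depression farther from the radar)."""
    return 1.0 - 0.15 * np.exp(-((r - center_km) / 10.0) ** 2)

lookup = {(3.0, 2.5): model_profile(40),
          (2.5, 2.0): model_profile(60),
          (2.0, 1.5): model_profile(80)}

measured = model_profile(60) + np.random.default_rng(0).normal(0, 0.005, r.size)
best = best_match(measured, lookup)
```

In the real algorithm the tables are built from simulated beam-broadened profiles for many (Ht, Hb) pairs, and matches from all elevation angles are combined into the Ht and Hb maps.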

Restricted access
Jinbo Wang, Lee-Lueng Fu, Bruce Haines, Matthias Lankhorst, Andrew J. Lucas, J. Thomas Farrar, Uwe Send, Christian Meinig, Oscar Schofield, Richard Ray, Matthew Archer, David Aragon, Sebastien Bigorre, Yi Chao, John Kerfoot, Robert Pinkel, David Sandwell, and Scott Stalin

Abstract

The future Surface Water and Ocean Topography (SWOT) mission aims to map sea surface height (SSH) in wide swaths with unprecedented spatial resolution and subcentimeter accuracy. The instrument performance needs to be verified using independent measurements in a process known as calibration and validation (Cal/Val). SWOT Cal/Val requires in situ measurements that can provide synoptic observations of the SSH field over an O(100) km distance with an accuracy matching the SWOT requirements, which are specified in terms of the along-track wavenumber spectrum of SSH error. No existing in situ observing system has been demonstrated to meet this challenge. A field campaign was conducted during September 2019–January 2020 to assess the potential of various instruments and platforms to meet the SWOT Cal/Val requirement. The instruments included two GPS buoys, two bottom pressure recorders (BPR), three moorings with fixed conductivity–temperature–depth (CTD) sensors and CTD profilers, and a glider. The observations demonstrated that 1) the SSH (hydrostatic) equation can be closed with a 1–3 cm RMS residual using the BPR, CTD mooring, and GPS SSH data, and 2) the upper-ocean steric height derived from the CTD moorings enables subcentimeter accuracy in the California Current region during the 2019/20 winter. Given that the three moorings were separated by 10, 20, and 30 km, the observations provide valuable information about the small-scale SSH variability associated with the ocean circulation at frequencies ranging from hourly to monthly in the region. The combined analysis sheds light on the design of the SWOT mission’s postlaunch Cal/Val field campaign.
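The steric-height contribution to the SSH budget mentioned above can be sketched as a simple depth integral of density anomalies over the layer a CTD mooring samples. The profile values below are invented for illustration:

```python
import numpy as np

rho0 = 1025.0                          # reference density (kg m^-3)
z = np.linspace(-500.0, 0.0, 101)      # depth grid (m), upper 500 m
rho_anom = -0.05 * np.exp(z / 100.0)   # density anomaly: a warm surface layer

# Steric height: eta = -(1/rho0) * integral of rho' over the layer
# (trapezoidal rule written out explicitly).
eta_steric = -np.sum(0.5 * (rho_anom[1:] + rho_anom[:-1]) * np.diff(z)) / rho0
print(f"steric height ≈ {eta_steric * 100:.2f} cm")
```

Closing the hydrostatic SSH budget then amounts to checking that bottom pressure plus steric height reproduces the GPS-measured SSH to within the quoted 1–3 cm residual.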

Restricted access
David S. Trossman and Robert H. Tyler

Abstract

To overcome the challenges of observing ocean heat content (OHC) over the entire ocean, we propose a novel approach that exploits the abundance of satellite data, including data from modern satellite geomagnetic surveys such as Swarm. The method considers a novel combination of conventional in situ (temperature and pressure) and satellite (altimetry and gravimetry) data with estimates of ocean electrical conductance (depth-integrated conductivity), which can potentially be obtained from magnetic observations (by satellite, land, seafloor, ocean, and airborne magnetometers). To demonstrate the potential benefit of the proposed method, we sample model output of an ocean state estimate to reflect existing observations and train a machine learning algorithm [a generalized additive model (GAM)] on these samples. We then calculate OHC everywhere using information potentially derivable from global satellite coverage, including magnetic observations, to gauge the GAM’s goodness of fit on a global scale. Including in situ observations of OHC in the upper 2000 m from Argo-like floats and including conductance data each reduce the root-mean-square error (RMSE) by an order of magnitude. Retraining the GAM with recent ship-based hydrographic data attains a smaller RMSE in polar oceans than training the GAM only once on all available historical ship-based hydrographic data; the opposite is true elsewhere. The GAM calculates OHC anomalies more accurately throughout the water column than below 2000 m and can detect global OHC anomalies over multiyear time scales, even when hypothetical measurement errors are considered. Our method could complement existing methods, and its accuracy could be improved through careful planning of ship-based campaigns.
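The GAM idea (modeling the target as a sum of smooth one-dimensional functions of the predictors) can be sketched with a toy backfitting loop. The data, the binned-mean smoother, and the two predictors below are synthetic stand-ins; the study's actual predictors, smoothers, and training procedure differ:

```python
import numpy as np

def smooth(x, resid, bins=20):
    """Binned-mean smoother: a piecewise-constant estimate of E[resid | x]."""
    edges = np.linspace(x.min(), x.max(), bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    means = np.array([resid[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(bins)])
    return means[idx]

def fit_gam(X, y, n_iter=20):
    """Backfitting: cycle through predictors, each time smoothing the partial
    residual left over after subtracting the other components."""
    n, p = X.shape
    f = np.zeros((n, p))
    intercept = y.mean()
    for _ in range(n_iter):
        for j in range(p):
            partial = y - intercept - f.sum(axis=1) + f[:, j]
            f[:, j] = smooth(X[:, j], partial)
            f[:, j] -= f[:, j].mean()   # identifiability: center each component
    return intercept + f.sum(axis=1)

# Synthetic additive target standing in for OHC vs. two predictors
# (e.g., steric height and conductance in the study's setting).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.05, 2000)
yhat = fit_gam(X, y)
```

The additive structure is what makes the fitted model interpretable: each predictor's estimated contribution can be inspected separately, which matters when judging how much conductance data add over altimetry and gravimetry alone.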

Significance Statement

The purpose of this manuscript is to demonstrate the potential for practical implementation of a remote monitoring method for ocean heat content (OHC) anomalies. To do this, we sample data from a reanalysis product primarily because of the dearth of observations below 2000 m depth that can be used for validation and the fact that full-depth-integrated electrical seawater conductivity data products derived from satellite magnetometry are not yet available. We evaluate multiple factors related to the accuracy of OHC anomaly estimation and find that, even with hypothetical measurement errors, our method can be used to monitor OHC anomalies on multiyear time scales.

Restricted access
Min Deng, Zhien Wang, Rainer Volkamer, Jefferson R. Snider, Larry Oolman, David M. Plummer, Natalie Kille, Kyle J. Zarzana, Christopher F. Lee, Teresa Campos, Nicholas Ryan Mahon, Brent Glover, Matthew D. Burkhart, and Austin Morgan

Abstract

During the summer of 2018, the upward-pointing Wyoming Cloud Lidar (WCL) was deployed on board the University of Wyoming King Air (UWKA) research aircraft for the Biomass Burning Flux Measurements of Trace Gases and Aerosols (BB-FLUX) field campaign. This paper describes the generation of calibrated attenuated backscatter coefficients and aerosol extinction coefficients from the WCL measurements. The retrieved aerosol extinction coefficients at flight level correlate strongly (correlation coefficient r > 0.8) with in situ aerosol concentration and carbon monoxide (CO) concentration, providing a first-order estimate for converting WCL extinction coefficients into vertically resolved CO and aerosol concentrations within wildfire smoke plumes. The integrated CO column concentrations from the WCL data in nonextinguished profiles also correlate (r = 0.7) with column measurements by the University of Colorado Airborne Solar Occultation Flux instrument, indicating the validity of the WCL-derived extinction coefficients. During BB-FLUX, the UWKA sampled smoke plumes from more than 20 wildfires during 35 flights over the western United States. Seventy percent of the flight time was spent below 3 km above ground level (AGL), although the UWKA ascended up to 6 km AGL to sample the tops of some deep smoke plumes. Because the flights targeted fresh fire smoke, the upward-pointing WCL observed nearly equal amounts of thin and dense smoke below 2 km and above 5 km. Between 2 and 5 km, where most of the wildfire smoke resided, the WCL observed slightly more thin smoke than dense smoke because of smoke spreading. Extinction coefficients in dense smoke were 2–10 times stronger, and dense smoke tended to have a larger depolarization ratio, associated with irregular aerosol particles.
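The first-order conversion described above amounts to a linear regression of in situ CO on flight-level lidar extinction, with the fitted relation then applied to a vertically resolved extinction profile. A hedged sketch with entirely synthetic numbers:

```python
import numpy as np

# Synthetic flight-level data: CO (ppb) roughly proportional to aerosol
# extinction (km^-1) within smoke, plus measurement scatter.
rng = np.random.default_rng(2)
ext_flight = rng.uniform(0.05, 1.0, 200)                  # extinction (km^-1)
co_flight = 400.0 * ext_flight + rng.normal(0, 10, 200)   # in situ CO (ppb)

# Fit the linear relation and check the correlation is strong.
slope, intercept = np.polyfit(ext_flight, co_flight, 1)
r = np.corrcoef(ext_flight, co_flight)[0, 1]

# Apply the fit to a retrieved extinction profile to estimate a CO profile.
ext_profile = np.array([0.1, 0.4, 0.8, 0.3])              # per height bin (km^-1)
co_profile = slope * ext_profile + intercept              # first-order CO (ppb)
```

This is only a first-order estimate, as the abstract notes: the extinction-to-CO ratio can vary between plumes with smoke age and composition.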

Restricted access
Min Deng, Rainer M. Volkamer, Zhien Wang, Jefferson R. Snider, Natalie Kille, and Leidy J. Romero-Alvarez

Abstract

The western U.S. wildfire smoke plumes observed by the upward-pointing Wyoming Cloud Lidar (WCL) during the Biomass Burning Fluxes of Trace Gases and Aerosols (BB-FLUX) project are investigated in a two-part paper. Part II here presents the reconstructed vertical structures of seven plumes from airborne WCL measurements. The vertical structures evident in the fire plume cross sections, supported by in situ measurements, showed that the fire plumes had distinct macrophysical and microphysical properties, which are closely related to plume transport, fire emission intensity, and the thermodynamic structure of the boundary layer. All plumes had an injection layer between 2.8 and 4.0 km above mean sea level, generally below the identified boundary layer top height. Plumes that were transported upward out of the boundary layer, such as those of the Rabbit Foot and Pole Creek fires, formed a higher plume at around 5.5 km. The largest and highest plume, from the Pole Creek fire, was transported farthest and was sampled by the University of Wyoming King Air aircraft at 170 km, or 2.3 h, downwind. It was associated with the warmest, driest, deepest boundary layer and the highest wind speed and turbulence. The Watson Creek fire plume intensified in the afternoon, with stronger CO emission and a greater smoke plume height than in the morning, indicating a fire diurnal cycle, but some fire plumes did not intensify in the afternoon. There were pockets of relatively large, irregular aerosol particles at the tops of plumes from active fires. In less-active fire plumes, the WCL depolarization ratio and the passive cavity aerosol spectrometer probe mass mean diameter peaked at a height low in the plume.

Restricted access
Steven R. Jayne, W. Brechner Owens, Pelle E. Robbins, Alexander K. Ekholm, Neil M. Bogue, and Elizabeth R. Sanabia

Abstract

The Air-Launched Autonomous Micro Observer (ALAMO) is a versatile profiling float that can be launched from an aircraft to make temperature and salinity observations of the upper ocean for over a year with high temporal sampling. Similar in dimensions and weight to an airborne expendable bathythermograph (AXBT), but with the same capability as Argo profiling floats, ALAMOs can be deployed from an A-sized (sonobuoy) launch tube, the stern ramp of a cargo plane, or the door of a small aircraft. Unlike an AXBT, however, the ALAMO float directly measures pressure, can incorporate additional sensors, and is capable of performing hundreds of ocean profiles compared to the single temperature profile provided by an AXBT. Upon deployment, the float parachutes to the ocean, releases the air-deployment package, and immediately begins profiling. Ocean profile data along with position and engineering information are transmitted via the Iridium satellite network, automatically processed, and then distributed by the Global Telecommunications System for use by the operational forecasting community. The ALAMO profiling mission can be modified using the two-way Iridium communications to change the profiling frequency and depth. Example observations are included to demonstrate the ALAMO’s utility.

Open access