Abstract
The present work details the measurement capabilities of Wave Glider autonomous surface vehicles (ASVs) for research-grade meteorology, wave, and current data. Methodologies for motion compensation are described and tested, including a correction technique to account for Doppler shifting of the wave signal. Wave Glider measurements are evaluated against observations obtained from World Meteorological Organization (WMO)-compliant moored buoy assets located off the coast of Southern California. The validation spans a range of field conditions and includes multiple deployments to assess the quality of vehicle-based observations. Results indicate that Wave Gliders can accurately measure wave spectral information, bulk wave parameters, water velocities, bulk winds, and other atmospheric variables with the application of appropriate motion compensation techniques. Measurement errors were found to be comparable to those from reference moored buoys and within WMO operational requirements. The findings of this study represent a step toward enabling the use of ASV-based data for the calibration and validation of remote observations and assimilation into forecast models.
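A moving platform samples waves at a Doppler-shifted encounter frequency, so recovering the intrinsic wave frequency requires inverting the dispersion relation. The abstract does not give the paper's formulation; the sketch below assumes deep-water dispersion and platform motion along the wave direction, which reduces the inversion to a quadratic.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def intrinsic_frequency(omega_obs, u_rel):
    """Map observed (encounter) frequency to intrinsic frequency for a
    platform moving at speed u_rel along the wave direction, assuming
    deep-water dispersion sigma**2 = G*k.

    Encounter frequency: omega_obs = sigma + k*u_rel = sigma + u_rel*sigma**2/G,
    which is quadratic in sigma.
    """
    a = u_rel / G
    if abs(a) < 1e-9:               # platform at rest: no Doppler shift
        return omega_obs
    disc = 1.0 + 4.0 * a * omega_obs
    if disc < 0:                    # no real solution (strong opposing motion)
        return np.nan
    return (-1.0 + np.sqrt(disc)) / (2.0 * a)
```

For a platform advancing with the waves, the intrinsic frequency recovered this way is lower than the encounter frequency, consistent with the waves being overtaken more slowly.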
Abstract
The assimilation of hyperspectral infrared sounder (HIS) observations aboard Earth-observing satellites has become vital to numerical weather prediction, yet this assimilation is predicated on the assumption of clear-sky observations. Using collocated assimilated observations from the Atmospheric Infrared Sounder (AIRS) and the Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP), it is found that nearly 7.7% of HIS observations assimilated by the Naval Research Laboratory Variational Data Assimilation System–Accelerated Representer (NAVDAS-AR) are contaminated by cirrus clouds. These contaminating clouds primarily exhibit visible cloud optical depths at 532 nm (COD532nm) below 0.10 and cloud-top temperatures between 185 and 240 K, as expected for cirrus clouds. These contamination statistics are consistent with simulations from the Radiative Transfer for TOVS (RTTOV) model showing that a cirrus cloud with a COD532nm of 0.10 imparts brightness temperature differences below the typical innovation thresholds used by NAVDAS-AR. Using a one-dimensional variational (1DVar) assimilation system coupled with RTTOV for forward and gradient radiative transfer, the analysis temperature and moisture impact of assimilating cirrus-contaminated HIS observations is estimated. Large differences of 2.5 K in temperature and 11 K in dewpoint are possible for a cloud with a COD532nm of 0.10 and a cloud-top temperature of 210 K. When normalized by the contamination statistics, global differences of nearly 0.11 K in temperature and 0.34 K in dewpoint are possible, with temperature and dewpoint tropospheric root-mean-square differences (RMSDs) as large as 0.06 and 0.11 K, respectively. While in isolation these global estimates are not particularly concerning, differences are likely much larger in regions with high cirrus frequency.
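The brightness-temperature impact of a thin cirrus layer can be illustrated with a single-layer graybody calculation. This is not the RTTOV setup used in the study; the 11-µm channel, the surface and cloud temperatures, and the visible-to-infrared optical-depth scaling below are all illustrative assumptions.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann

def planck(wl, t):
    """Spectral radiance (W m^-2 sr^-1 m^-1) at wavelength wl (m), temperature t (K)."""
    return 2 * H * C**2 / wl**5 / np.expm1(H * C / (wl * KB * t))

def inv_planck(wl, rad):
    """Brightness temperature (K) from spectral radiance."""
    return H * C / (wl * KB) / np.log1p(2 * H * C**2 / (wl**5 * rad))

def cirrus_bt_difference(t_clear=288.0, t_cloud=210.0, cod_532=0.10,
                         wl=11e-6, vis_to_ir=0.5):
    """Toy single-layer estimate of the 11-um brightness-temperature
    depression imparted by a thin cirrus layer.  The visible-to-IR
    optical-depth scaling (vis_to_ir) and the graybody emissivity model
    are illustrative assumptions, not the RTTOV treatment in the paper."""
    emis = 1.0 - np.exp(-vis_to_ir * cod_532)          # graybody cloud emissivity
    rad = (1.0 - emis) * planck(wl, t_clear) + emis * planck(wl, t_cloud)
    return t_clear - inv_planck(wl, rad)
```

With these default values (COD532nm of 0.10 over a 288 K clear scene, 210 K cloud top), the depression comes out on the order of a few kelvins, small enough to slip under typical innovation thresholds.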
Abstract
This manuscript presents several improvements to methods for despiking and measuring turbulent dissipation values with acoustic Doppler velocimeters (ADVs). This includes an improved inertial subrange fitting algorithm relevant for all experimental conditions as well as other modifications designed to address failures of existing methods in the presence of large infragravity (IG) frequency bores and other intermittent, nonlinear processes. We provide a modified despiking algorithm, wavenumber spectrum calculation algorithm, and inertial subrange fitting algorithm that together produce reliable dissipation measurements in the presence of IG frequency bores, representing turbulence over a 30 min interval. We use a semi-idealized model to show that our spectrum calculation approach works substantially better than existing wave correction equations that rely on Gaussian-based velocity distributions. We also find that our inertial subrange fitting algorithm provides more robust results than existing approaches that rely on identifying a single best fit and that this improvement is independent of environmental conditions. Finally, we perform a detailed error analysis to assist in future use of these algorithms and identify areas that need careful consideration. This error analysis uses error distribution widths to find, with 95% confidence, an average systematic uncertainty of ±15.2% and statistical uncertainty of ±7.8% for our final dissipation measurements. In addition, we find that small changes to ADV despiking approaches can lead to large uncertainties in turbulent dissipation and that further work is needed to ensure more reliable despiking algorithms.
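The inertial-subrange method behind such dissipation estimates fits the Kolmogorov form S(k) = α ε^(2/3) k^(−5/3) to a measured wavenumber spectrum. The sketch below is a simplified robust variant: instead of identifying a single best fit, it takes the median of per-wavenumber estimates, a stand-in for the multi-fit approach described above; the constant α and the subrange bounds are assumptions.

```python
import numpy as np

ALPHA = 0.5  # 1D Kolmogorov constant; the value depends on velocity component

def dissipation_from_spectrum(k, S, kmin=None, kmax=None):
    """Estimate turbulent dissipation epsilon (W/kg) from a wavenumber
    spectrum S(k), assuming an inertial subrange
    S = ALPHA * eps**(2/3) * k**(-5/3).

    Each wavenumber yields its own epsilon estimate; the median over the
    subrange is more robust to spectral bumps than one least-squares fit."""
    k, S = np.asarray(k, float), np.asarray(S, float)
    mask = np.ones_like(k, bool)
    if kmin is not None:
        mask &= k >= kmin
    if kmax is not None:
        mask &= k <= kmax
    eps_k = (S[mask] * k[mask] ** (5.0 / 3.0) / ALPHA) ** 1.5
    return float(np.median(eps_k))
```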
Significance Statement
Turbulent mixing is a process where the random movement of water can lead to water with different properties irreversibly mixing. This process is important to understand in estuaries because the extent of mixing of freshwater and saltwater inside an estuary alters its overall circulation and thus affects ecosystem health and the distribution of pollution or larvae in an estuary, among other things. Existing approaches to measuring turbulent dissipation, an important parameter for evaluating turbulent mixing, make assumptions that fail in the presence of certain processes, such as long-period, breaking waves in shallow estuaries. We evaluate and improve data analysis techniques to account for such processes and accurately measure turbulent dissipation in shallow estuaries. Some of our improvements are also relevant to a broad array of coastal and oceanic conditions.
Abstract
Although numerical models have been developed for many years, only some have been applied to ocean state sampling. Adaptive sampling deploys limited assets using prior information, concentrating observation assets in areas of greater sampling value, which suits an extensive and dynamic marine environment. The improved resolution of numerical models allows them to be used with mobile platforms. However, the existing adaptive sampling framework for mobile platforms lacks regular interaction with the numerical model, so the observation scheme easily deviates from the optimum. This study sets up a closed-loop adaptive sampling framework for mobile platforms that realizes the optimization of model → sampling → model. By linking a coupled model with the sampling points of the mobile platforms, the adaptive method configures key sampling locations to determine when and where the sampling schemes are adjusted. With the aid of the coupled model, we selected an optimization algorithm for the framework and simulated the process within a twin-experiment framework. This research provides theoretical and technical support for combining numerical models with mobile sampling platforms.
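One closed-loop iteration of model → sampling → model can be caricatured as: derive an uncertainty field from the model, move a platform greedily toward high uncertainty within its per-step reach, and collapse the local uncertainty once a sample is assimilated. The grid, reach, and greedy rule below are illustrative placeholders, not the paper's coupled-model optimization algorithm.

```python
import numpy as np

def greedy_sampling_plan(variance, start, steps, reach=1):
    """Toy sketch of one closed-loop iteration: from a model-derived
    uncertainty (variance) field, repeatedly move a platform to the
    highest-variance cell within its reach, then zero that cell to mimic
    the variance reduction of assimilating the new observation."""
    var = np.array(variance, float)
    pos = tuple(start)
    path = [pos]
    ny, nx = var.shape
    for _ in range(steps):
        i0, j0 = pos
        best, best_pos = -np.inf, pos
        for i in range(max(0, i0 - reach), min(ny, i0 + reach + 1)):
            for j in range(max(0, j0 - reach), min(nx, j0 + reach + 1)):
                if var[i, j] > best:
                    best, best_pos = var[i, j], (i, j)
        pos = best_pos
        var[pos] = 0.0          # "assimilated": local uncertainty collapses
        path.append(pos)
    return path
```

On a field whose uncertainty increases toward one corner, the planned track climbs that gradient and then disperses once the peak has been sampled, which is the qualitative behavior a closed loop is meant to produce.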
Abstract
There are multiple reasons why a precipitation gauge may report erroneous observations. Systematic errors relating to the measuring apparatus or resulting from observational limitations due to environmental factors (e.g., wind-induced undercatch or wetting losses) can be quantified and potentially corrected within a gauge dataset. Other challenges can arise from instrumentation malfunctions, such as clogging, poor siting, and software issues. Instrumentation malfunctions are challenging to quantify, as most gauge quality control (QC) schemes focus on the current observation and not on whether the gauge has an inherent issue that would likely require maintenance. This study focuses on the development of a temporal QC scheme to identify the likelihood of an instrumentation malfunction through the examination of hourly gauge observations and associated QC designations. The analyzed gauge performance resulted in a temporal QC classification using one of three categories: GOOD, SUSP, and BAD. The temporal QC scheme also accounts for and provides an additional designation when a significant percentage of gauge observations and associated hourly QC were influenced by meteorological factors (e.g., the inability to properly measure winter precipitation). Findings showed a consistent percentage of gauges that were classified as BAD through the running 7-day (2.9%) and 30-day (4.4%) analyses. Verification of select gauges demonstrated how the temporal QC algorithm captured different forms of instrument-based systematic errors that influenced gauge observations. Results from this study can benefit the identification of degraded performance at gauge sites prior to scheduled routine maintenance.
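A minimal version of such a temporal QC scheme keeps a rolling window of hourly QC flags and classifies the gauge by the fraction that failed, with a separate designation when meteorological factors dominate. The window length, thresholds, and flag names below are illustrative assumptions, not the study's values.

```python
from collections import Counter

def temporal_qc(hourly_flags, window=168, bad_thresh=0.25, susp_thresh=0.10,
                met_thresh=0.50):
    """Toy rolling classification of a gauge from its hourly QC flags over
    the last `window` hours (168 h ~ 7 days).  Flags: 'pass', 'fail', or
    'met' (observation influenced by meteorological factors, e.g. winter
    precipitation).  Thresholds here are placeholders for illustration."""
    recent = hourly_flags[-window:]
    counts = Counter(recent)
    n = len(recent)
    met_frac = counts.get("met", 0) / n
    fail_frac = counts.get("fail", 0) / n
    label = "BAD" if fail_frac >= bad_thresh else (
        "SUSP" if fail_frac >= susp_thresh else "GOOD")
    # extra designation when meteorology, not the instrument, drives the flags
    return (label, "MET") if met_frac >= met_thresh else (label, None)
```

A gauge whose failures are concentrated in flagged meteorological hours keeps its GOOD label but picks up the MET designation, separating suspected instrument faults from environmentally driven misses.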
Significance Statement
This study proposes a scheme that quality controls rain gauges based on their performance over a running history of hourly observational data and quality control flags to identify gauges that likely have an instrumentation malfunction. Findings from this study show the potential of identifying gauges that are impacted by issues such as clogging, software errors, and poor gauge siting. This study also highlights the challenges of distinguishing between erroneous gauge observations caused by an instrumentation malfunction and erroneous observations that were the result of an environmental factor that influences the gauge observation or its quality control classification, such as winter precipitation or virga.
Abstract
High-frequency wind measurements from Saildrone autonomous surface vehicles are used to calculate wind stress in the tropical East Pacific. Comparison between direct covariance (DC) and bulk wind stress estimates demonstrates very good agreement. Building on previous work that showed the bulk input data were reliable, our results lend credibility to the DC estimates. Wind flow distortion by Saildrones is comparable to or smaller than that of other platforms. Motion correction results in realistic wind spectra, albeit with signatures of swell-coherent wind fluctuations that may be unrealistically strong. Fractional differences between DC and bulk wind stress magnitude are largest at wind speeds below 4 m s−1. The size of this effect, however, depends on the choice of stress direction assumptions. Past work has shown the importance of using current-relative (instead of Earth-relative) winds to achieve accurate wind stress magnitude. We show that it is also important for wind stress direction.
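The role of current-relative winds in both stress magnitude and direction is easy to see in a bulk formulation: subtracting the surface current from the wind vector rotates the stress as well as rescaling it. The constant drag coefficient below stands in for a full COARE-style bulk algorithm, so the numbers are illustrative only.

```python
import numpy as np

RHO_AIR = 1.2   # air density, kg/m^3 (illustrative constant)
CD = 1.2e-3     # neutral drag coefficient (illustrative constant)

def bulk_wind_stress(u_wind, v_wind, u_cur=0.0, v_cur=0.0):
    """Bulk wind-stress magnitude (N/m^2) and direction (deg) from a mean
    wind vector and a surface current vector.  Using the current-relative
    wind changes both the magnitude and the direction of the stress."""
    ur, vr = u_wind - u_cur, v_wind - v_cur        # current-relative wind
    speed = np.hypot(ur, vr)
    taux = RHO_AIR * CD * speed * ur
    tauy = RHO_AIR * CD * speed * vr
    mag = np.hypot(taux, tauy)
    direction = np.degrees(np.arctan2(tauy, taux))  # direction stress points
    return mag, direction
```

For example, a 5 m s−1 zonal wind over a 1 m s−1 meridional current yields a stress rotated several degrees off the wind direction, even though the wind vector itself is unchanged.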
Abstract
The ocean, with its low albedo and vast thermal inertia, plays key roles in the climate system, including absorbing massive amounts of heat as atmospheric greenhouse gas concentrations rise. While the Argo array of profiling floats has vastly improved sampling of ocean temperature in the upper half of the global ocean volume since the mid-2000s, the floats are not sufficient in number to resolve eddy scales in the ocean. However, satellite sea-surface temperature (SST) and sea-surface height (SSH) measurements do resolve these scales. Here we use Random Forest regressions to map ocean heat content anomalies (OHCA) using in situ training data from Argo and other sources on a 7-day × ¼° grid with latitude, longitude, time, SSH, and SST as predictors. The maps display substantial patterns on eddy scales, resolving variations of ocean currents and fronts. During the well-sampled Argo period, global integrals of these maps reduce noise relative to estimates based on objective mapping of in situ data alone by roughly a factor of three when compared to time series of CERES (satellite data) top-of-the-atmosphere energy flux measurements, and improve correlations of anomalies with CERES on annual time scales. Prior to and early in the Argo period, when in situ data were sparser, global integrals of these maps retain low variance and do not relax back to a climatological mean, avoiding potential deficiencies of various methods for infilling data-sparse regions with objective maps by exploiting temporal and spatial patterns of OHCA and its correlations with SST and SSH.
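A Random Forest mapping of this kind regresses OHCA on (latitude, longitude, time, SSH, SST). The sketch below uses synthetic training data with a made-up OHCA relation purely to show the predictor layout; the real training set comes from Argo and other in situ sources collocated with satellite SSH and SST.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the in situ training set: each row is one profile
# with (lat, lon, time, SSH, SST) predictors and an OHCA target.  The
# relation below is invented for illustration only.
rng = np.random.default_rng(0)
n = 2000
lat = rng.uniform(-60, 60, n)
lon = rng.uniform(0, 360, n)
t = rng.uniform(0, 365, n)                            # day of year
ssh = 0.1 * np.sin(np.radians(lat)) + 0.01 * rng.standard_normal(n)
sst = 25 * np.cos(np.radians(lat)) + rng.standard_normal(n)
ohca = 2.0 * ssh + 0.05 * sst + 0.1 * rng.standard_normal(n)

X = np.column_stack([lat, lon, t, ssh, sst])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, ohca)

# Once trained, OHCA can be predicted anywhere SSH and SST are observed,
# including regions and times with few or no floats.
pred = model.predict(X)
```

Because the satellite predictors are available everywhere at eddy-resolving scales, the trained forest effectively interpolates the sparse in situ observations along the SSH and SST patterns rather than relaxing toward a climatology.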
Abstract
The article compares four lightning detection networks, provides a brief overview of the assimilation of lightning observations in numerical weather forecasting, and describes and illustrates the procedure used to assimilate lightning location and time. Evaluations of absolute errors in air temperature at 2 m, humidity at 2 m, near-surface air pressure, wind speed at 10 m, and precipitation are provided for 10 forecasts made in 2020 for days on which intense thunderstorms were observed in the Krasnodar region of Russia. Average errors over the forecast area at 24, 48, and 72 h of the forecast decreased for all parameters when assimilation of observed lightning data was used. The predicted precipitation field configuration and intensity also became closer to the references, both in areas where thunderstorms were observed and in areas where none occurred.
Abstract
Performance assessments of the Geostationary Lightning Mapper (GLM) are conducted via comparisons with independent observations from both satellite-based sensors and ground-based lightning detection (reference) networks. A key limitation of this evaluation is that the performance of the reference networks is both imperfect and imperfectly known, such that the true performance of GLM can only be estimated. Key GLM performance metrics such as detection efficiency (DE) and false alarm rate (FAR) retrieved through comparison with reference networks are affected by those networks’ own DE, FAR, and spatiotemporal accuracy, as well as the flash matching criteria applied in the analysis. This study presents a Monte Carlo simulation–based inversion technique that is used to quantify how accurately the reference networks can assess GLM performance, as well as suggest the optimal matching criteria for estimating GLM performance. This is accomplished by running simulations that clarify the specific effect of reference network quality (i.e., DE, FAR, spatiotemporal accuracy, and the geographical patterns of these attributes) on the retrieved GLM performance metrics. Baseline reference network statistics are derived from the Earth Networks Global Lightning Network (ENGLN) and the Global Lightning Dataset (GLD360). Geographic simulations indicate that the retrieved GLM DE is underestimated, with absolute errors ranging from 11% to 32%, while the retrieved GLM FAR is overestimated, with absolute errors of approximately 16% to 44%. GLM performance is most severely underestimated in the South Pacific. These results help quantify and bound the actual performance of GLM and the attendant uncertainties when comparing GLM to imperfect reference networks.
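The core of such a Monte Carlo inversion can be sketched in a few lines: simulate true flashes, let GLM and the reference network each detect them imperfectly, add reference false alarms, and compute the detection efficiency that the comparison would retrieve. The parameter values below are illustrative, not the ENGLN/GLD360 statistics used in the study, and spatiotemporal matching errors are ignored.

```python
import random

def simulate_retrieved_de(true_de_glm=0.8, de_ref=0.7, far_ref=0.1,
                          n_flashes=100000, seed=0):
    """Monte Carlo sketch of why an imperfect reference network biases the
    retrieved GLM detection efficiency (DE).  Each true flash is detected
    (or not) independently by GLM and by the reference network; retrieved
    DE is the fraction of reference flashes that GLM matched.  Reference
    false alarms, which GLM can never match, drag the retrieved DE down."""
    rnd = random.Random(seed)
    ref_total, matched = 0, 0
    for _ in range(n_flashes):
        if rnd.random() < de_ref:            # reference sees the true flash
            ref_total += 1
            if rnd.random() < true_de_glm:   # GLM also sees it -> a match
                matched += 1
    # add reference false alarms so that FAR = false / (true + false)
    false_ref = int(far_ref * ref_total / (1.0 - far_ref))
    ref_total += false_ref                   # false alarms never match GLM
    return matched / ref_total
```

Because false alarms inflate the reference denominator without ever producing matches, the retrieved DE lands below the true value even before timing and location errors are considered; inverting simulations like this one bounds the bias.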