Abstract
An apparent giant wave event with a maximum trough-to-crest height of 21 m and a maximum zero-upcrossing period of 27 s was recorded by a wave buoy at a nearshore location off the southwestern coast of Australia. It appears as a group of waves that are significantly larger, both in height and in period, than the waves preceding and following them. This paper reports a multifaceted analysis of the plausibility of the event. We first examine the statistics of the event in relation to the rest of the record, considering quantities such as maximum-to-significant wave height ratios, ordered crest–trough statistics, and average wave profiles. We then investigate the kinematics of the buoy, examining the relationship between its horizontal and vertical displacements, and also attempt to numerically reconstruct the giant event using Boussinesq and nonlinear shallow water equations. Additional analyses are performed on other sea states in which at least one of the buoy’s accelerometers reached its maximum limit. Our analysis reveals incompatibilities between the event and the known behavior of real waves, leading us to conclude that it was not a real wave event. Similar wave events have been reported elsewhere and have sometimes been accepted as real occurrences. Our methods of forensically analyzing the giant wave event should prove useful for identifying false rogue wave events in such cases.
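As a minimal illustration of the zero-upcrossing analysis behind quantities such as the maximum-to-significant wave height ratio, the sketch below splits a surface-elevation record into individual waves. This is the generic textbook procedure applied to a made-up synthetic record, not the buoy processing used in the study; the sampling rate, amplitudes, and frequencies are illustrative assumptions.

```python
import numpy as np

def zero_upcrossing_waves(eta, fs):
    """Split a de-meaned surface-elevation record into individual waves
    at upward zero crossings; return heights (m) and periods (s)."""
    eta = np.asarray(eta) - np.mean(eta)
    up = np.where((eta[:-1] < 0) & (eta[1:] >= 0))[0]  # upward zero crossings
    heights = np.array([eta[i0:i1 + 1].max() - eta[i0:i1 + 1].min()
                        for i0, i1 in zip(up[:-1], up[1:])])
    periods = np.diff(up) / fs
    return heights, periods

# Made-up 20-min record at 1.28 Hz: three sinusoidal components, random phases
fs = 1.28
t = np.arange(0, 1200, 1 / fs)
rng = np.random.default_rng(0)
eta = sum(a * np.sin(2 * np.pi * f * t + p)
          for a, f, p in zip([0.5, 0.8, 0.3], [0.08, 0.10, 0.14],
                             rng.uniform(0, 2 * np.pi, 3)))

H, T = zero_upcrossing_waves(eta, fs)
Hs = 4 * np.std(eta)  # standard spectral estimate of significant wave height
print(f"Hmax/Hs = {H.max() / Hs:.2f}")
```

In a rogue wave check, `H.max() / Hs` would be compared against the commonly used threshold of about 2, and the waves around the maximum inspected, as the abstract describes.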
Abstract
Horizontal kinematic properties, such as vorticity, divergence, and lateral strain rate, are estimated from drifter clusters using three approaches. At submesoscale horizontal length scales
Significance Statement
The purpose of this study is to provide insights and guidance for computing horizontal velocity gradients from clusters (i.e., three or more) of Lagrangian surface ocean drifters. The uncertainty in velocity gradient estimates depends strongly on the shape deformation of drifter clusters by the ocean currents. We propose criteria for drifter cluster length scales and aspect ratios to reduce uncertainties and develop ways of estimating the magnitude of the resulting errors. The findings are applied to a real ocean dataset from the Bay of Bengal.
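A standard way to compute horizontal velocity gradients from a drifter cluster is a least-squares plane fit of the drifter velocities against their positions; the sketch below is a generic version of that idea, not the paper's code, and the drifter positions and rotation rate are made up for the demonstration.

```python
import numpy as np

def cluster_velocity_gradients(x, y, u, v):
    """Least-squares plane fit of drifter velocities to estimate velocity
    gradients from a cluster of three or more non-collinear drifters.

    x, y : drifter positions relative to the cluster centroid (m)
    u, v : drifter velocities (m/s)
    Returns (vorticity, divergence, lateral strain rate), all in 1/s.
    """
    A = np.column_stack([np.ones_like(x), x, y])
    (u0, ux, uy), *_ = np.linalg.lstsq(A, u, rcond=None)
    (v0, vx, vy), *_ = np.linalg.lstsq(A, v, rcond=None)
    vorticity = vx - uy
    divergence = ux + vy
    strain = np.hypot(ux - vy, vx + uy)  # total lateral strain rate
    return vorticity, divergence, strain

# Synthetic solid-body rotation u = -omega*y, v = omega*x: vorticity = 2*omega
omega = 1e-4
x = np.array([0.0, 1000.0, -500.0, 200.0])
y = np.array([500.0, -300.0, 800.0, -900.0])
zeta, delta, sigma = cluster_velocity_gradients(x, y, -omega * y, omega * x)
print(zeta, delta, sigma)  # ≈ 2e-4, 0, 0
```

The fit degrades as the cluster collapses toward a line (large aspect ratio), which is exactly the shape-deformation sensitivity the study quantifies.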
Abstract
This paper describes a remotely monitored buoy that, when deployed in open water prior to freeze up, permits scientists to monitor not only temperature with depth, and hence freeze up and sea ice thickness, but also the progression of sea ice development—e.g., the extent of cover at a given depth as it grows (solid fraction), the brine volume of the ice, and the salinity of the water just below, which is driven by brine expulsion. Microstructure and In situ Salinity and Temperature (MIST) buoys use sensor “ladders” that, in our prototypes, extend to 88 cm below the surface. We collected hourly measurements of surface air temperature, along with water temperature and electrical impedance every 3 cm, to track the seasonal progression of sea ice growth in Elson Lagoon (Utqiaġvik, Alaska) over the 2017/18 ice growth season. The MIST buoy has the potential to collect detailed sea ice microstructural information over time and help scientists monitor all parts of the growth/melt cycle, including not only the freezing process but also the effects of meteorological changes, changing snow cover, the interaction of meltwater, and drainage.
Significance Statement
There is a need to better understand how an increasing influx of freshwater, one part of a changing Arctic climate, will affect the development of sea ice. Current instruments can provide information on the growth rate, extent, and thickness of sea ice, but not direct observations of the structure of the ice during freeze up, something that is tied to salinity and local air and water temperature. A first deployment in Elson Lagoon in Utqiaġvik, Alaska, showed promising results; we observed fluctuations in ice temperatures in response to brief warmings in air temperature that resulted in changes in the conductivity, liquid fraction, and brine volume fraction within the ice.
Abstract
Total column H2O is measured by two remote sensing techniques at the Altzomoni Atmospheric Observatory (19.12°N, 98.65°W, 4000 m above sea level), a high-altitude, tropical background site in central Mexico. A ground-based solar absorption FTIR spectrometer that is part of the Network for the Detection of Atmospheric Composition Change (NDACC) is used to retrieve water vapor in three spectral regions (6074–6471, 2925–2941, and 1110–1253 cm−1) and is compared to data obtained from a global positioning system (GPS) receiver that is part of the TLALOCNet GPS-meteorological network. Strong correlations are obtained between coincident hourly means from the three FTIR products and the GPS data, and small relative biases and correction factors could be determined for each FTIR product by comparison with the more consistent GPS data. Retrievals from the 2925–2941 cm−1 spectral region have the highest correlation with GPS [coefficient of determination (R²) = 0.998, standard deviation (STD) = 0.18 cm (78.39%), mean difference = 0.04 cm (8.33%)], although the other products are also highly correlated [R² ≥ 0.99, STD ≤ 0.20 cm (<90%), mean difference ≤ 0.1 cm (<24%)]. Clear-sky dry bias (CSDB) values are reduced to <10% (<0.20 cm) when coincident hourly means are used in the comparison. Using the GPS and FTIR water vapor products together leads to a more complete description of the diurnal and seasonal cycles of water vapor. We describe the water vapor climatology with both complementary datasets, while pointing out the importance of accounting for the clear-sky dry bias that arises from the large diurnal and seasonal variability of water vapor at this high-altitude tropical site.
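The comparison statistics quoted above (coefficient of determination, standard deviation, and mean difference of coincident hourly means) reduce to a few lines of plain summary statistics; the sketch below applies them to made-up synthetic columns, not the Altzomoni data, and the 5% scale offset and noise level are arbitrary.

```python
import numpy as np

def comparison_stats(ftir, gps):
    """R^2, standard deviation, and mean difference of two coincident
    hourly-mean total-column water vapor series (same units, e.g. cm)."""
    d = ftir - gps
    r = np.corrcoef(ftir, gps)[0, 1]
    return r ** 2, d.std(), d.mean()

# Made-up columns (cm): "FTIR" with a 5% scale offset plus noise vs "GPS"
rng = np.random.default_rng(5)
gps = rng.uniform(0.2, 2.0, 1000)
ftir = 1.05 * gps + rng.normal(0, 0.05, gps.size)
r2, std, bias = comparison_stats(ftir, gps)
print(f"R2={r2:.3f}  STD={std:.3f} cm  mean diff={bias:.3f} cm")
```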
Abstract
Simulated weather time series are often used in engineering and research practice to assess radar system behavior and/or to evaluate the performance of novel techniques. There are two main approaches to simulating weather time series. One is based on summing the individual returns from a large number of distributed weather particles to create a cumulative return. The other aims to create simulated random signals from predetermined values of radar observables and is the one of interest herein. Several methods to simulate weather time series using the latter approach have been proposed, all based on applying the inverse discrete Fourier transform to a spectral model with added random fluctuations. To meet a desired simulation accuracy, this approach typically requires generating a number of samples larger than the base sample number because of the properties of the discrete Fourier transform. In that regard, a novel method to determine the simulation length is proposed, based on a detailed theoretical development that demonstrates the exact source of the errors incurred by this approach. Furthermore, a simple method for time series simulation based on the autocorrelation matrix exists. This method neither involves manipulations in the spectral domain nor requires generating more samples than the base sample number. Herein, this method is suggested for weather time series simulation, and its accuracy and efficiency are analyzed and compared to those of the spectral-based approach.
Significance Statement
All research articles published so far on simulating weather time series from desired Doppler moment values propose the use of the inverse discrete Fourier transform (IDFT). Herein, a detailed theoretical development that demonstrates the exact source of the errors incurred by this approach is presented. In addition, a novel method to determine the simulation length, based on the theoretical error computation, is proposed. As an alternative, a computationally efficient general method (not using the IDFT), previously developed for simulating sequences with desired properties, is suggested for weather time series simulation. It is demonstrated that the latter method produces accurate results within overall shorter computational times. Moreover, it is shown that using a graphics processing unit (GPU), ubiquitous in modern computers, significantly reduces computational times compared with using only the central processing unit (CPU) for all simulation-related calculations.
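A generic random-phase IDFT simulator of the kind this work analyzes can be sketched as follows. The Gaussian Doppler spectrum, exponential bin fluctuations, and all parameter values are illustrative textbook assumptions, not the specific methods or error analysis of the paper.

```python
import numpy as np

def simulate_weather_series(M, Ts, wavelength, power, v_mean, sigma_v, rng):
    """Random-phase IDFT simulation of a weather-like radar time series
    with a Gaussian Doppler spectrum (generic sketch of the spectral approach).

    M: number of samples; Ts: pulse repetition time (s);
    wavelength (m); power: desired mean signal power;
    v_mean, sigma_v: mean Doppler velocity and spectrum width (m/s).
    """
    va = wavelength / (4 * Ts)                 # unambiguous velocity (m/s)
    v = np.fft.fftfreq(M, d=1.0 / (2 * va))    # velocity of each DFT bin
    S = np.exp(-((v - v_mean) ** 2) / (2 * sigma_v ** 2))
    S *= power / S.sum()                       # scale to the desired power
    P = -S * np.log(rng.uniform(size=M))       # exponential bin fluctuations
    phi = rng.uniform(0, 2 * np.pi, M)         # independent random phases
    return np.fft.ifft(M * np.sqrt(P) * np.exp(1j * phi))

rng = np.random.default_rng(1)
z = simulate_weather_series(M=4096, Ts=1e-3, wavelength=0.1,
                            power=1.0, v_mean=5.0, sigma_v=2.0, rng=rng)
print(f"mean power = {np.mean(np.abs(z) ** 2):.3f}")
```

The moment errors the paper traces arise because the Gaussian spectrum is sampled at a finite number of discrete bins and wraps at the unambiguous velocity; lengthening `M` beyond the base sample number is the usual workaround.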
Abstract
We compare two seemingly different methods of estimating random error statistics (uncertainties) of observations, the three-cornered hat (3CH) method and the Desroziers method, and show several examples of estimated uncertainties of COSMIC-2 (C2) radio occultation (RO) observations. The two methods yield similar results, attesting to the validity of both. The small differences provide insight into the sensitivity of the methods to the assumptions and computational details. These estimates of RO error statistics differ considerably from several RO error models used by operational weather forecast centers, suggesting that the impact of RO observations on forecasts can be improved by adjusting the RO error models to agree more closely with the RO error statistics. Both methods show RO uncertainty estimates that vary with latitude. In the troposphere, uncertainties are higher in the tropics than in the subtropics and middle latitudes. In the upper stratosphere–lower mesosphere, we find the reverse, with tropical uncertainties slightly less than in the subtropics and higher latitudes. The uncertainty estimates from the two techniques also show similar variations between a 31-day period during Northern Hemisphere tropical cyclone season (16 August–15 September 2020) and a month near the vernal equinox (April 2021). Finally, we find a relationship between the vertical variation of the C2 estimated uncertainties and atmospheric variability, as measured by the standard deviation of the C2 sample. The convergence of the error estimates and the standard deviations above 40 km indicates a lessening impact of assimilating RO above this level.
Significance Statement
Uncertainties of observations are of general interest, and knowledge of them is important for assimilation in numerical weather prediction models. This paper compares two methods of estimating these uncertainties and shows that they give nearly identical results under certain conditions. The estimated COSMIC-2 bending angle uncertainties, and how they compare with the bending angle error models assumed at several operational weather centers, suggest an opportunity for improved impact of RO observations in numerical model forecasts. Finally, the relationship between the COSMIC-2 bending angle errors and atmospheric variability provides insight into the sources of RO observational uncertainties.
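The basic 3CH identity recovers each dataset's random-error variance from the variances of the three pairwise differences, assuming the errors are mutually uncorrelated. The sketch below demonstrates that identity on synthetic data with known noise levels; the paper's actual computation on RO profiles is of course more involved.

```python
import numpy as np

def three_cornered_hat(a, b, c):
    """Random-error variance of each of three collocated datasets from
    pairwise difference variances, assuming mutually uncorrelated errors."""
    vab, vac, vbc = np.var(a - b), np.var(a - c), np.var(b - c)
    var_a = 0.5 * (vab + vac - vbc)
    var_b = 0.5 * (vab + vbc - vac)
    var_c = 0.5 * (vac + vbc - vab)
    return var_a, var_b, var_c

# Synthetic check: common signal plus independent noise of known variance
rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 20, 20000))
a = truth + rng.normal(0, 0.10, truth.size)   # error variance 0.01
b = truth + rng.normal(0, 0.20, truth.size)   # error variance 0.04
c = truth + rng.normal(0, 0.30, truth.size)   # error variance 0.09
print(three_cornered_hat(a, b, c))
```

When the error-correlation assumption fails (e.g., two datasets share a forecast background), the estimates pick up the shared error covariance, which is one of the sensitivities the comparison with the Desroziers method probes.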
Abstract
Better predictions of global warming can be enabled by tuning legacy and current computer simulations to Earth radiation budget (ERB) measurements. Such orbital measurements have existed since the 1970s, and next-generation instruments such as “Libera” are in production. Climate communities have requested that new ERB observing system missions achieve significantly improved calibration accuracy, SI traceability, and stability, to prevent untracked instrument calibration drifts that could lead to false conclusions on climate change. Based on experience from previous ERB missions, the alternative concept presented here uses direct-view solar calibration to achieve cloud-scale Earth measurement resolution at <1% accuracy. However, it forgoes complex calibration technology already in use, such as solar diffusers and onboard lamps, allowing new, lower-cost, lower-risk spectral characterization concepts to be introduced with today’s technology. Also, in contrast to near-future ERB concepts already in production, it enables in-flight wavelength-dependent calibration of Earth-observing telescopes using direct solar views through narrowband filters continuously characterized on orbit.
Abstract
The Ka-band Radar Interferometer (KaRIn) on the Surface Water and Ocean Topography (SWOT) satellite will revolutionize satellite altimetry by measuring sea surface height (SSH) with unprecedented accuracy and resolution across two 50-km swaths separated by a 20-km gap. The original plan to provide an SSH product with a footprint diameter of 1 km has changed to providing two SSH data products with footprint diameters of 0.5 and 2 km. The swath-averaged standard deviations and wavenumber spectra of the uncorrelated measurement errors for these footprints are derived from the SWOT science requirements that are expressed in terms of the wavenumber spectrum of SSH after smoothing with a filter cutoff wavelength of 15 km. The availability of two-dimensional fields of SSH within the measurement swaths will provide the first spaceborne estimates of instantaneous surface velocity and vorticity through the geostrophic equations. The swath-averaged standard deviations of the noise in estimates of velocity and vorticity derived by propagation of the uncorrelated SSH measurement noise through the finite difference approximations of the derivatives are shown to be too large for the SWOT data products to be used directly in most applications, even for the coarsest footprint diameter of 2 km. It is shown from wavenumber spectra and maps constructed from simulated SWOT data that additional smoothing will be required for most applications of SWOT estimates of velocity and vorticity. Equations are presented for the swath-averaged standard deviations and wavenumber spectra of residual noise in SSH and geostrophically computed velocity and vorticity after isotropic two-dimensional smoothing for any user-defined smoother and filter cutoff wavelength of the smoothing.
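The scale of the velocity-noise problem described above can be illustrated by propagating uncorrelated SSH noise through a centered-difference geostrophic velocity estimate. The noise standard deviation and Coriolis parameter below are illustrative assumptions, not the SWOT requirement values; only the 2-km posting echoes the abstract.

```python
import numpy as np

g = 9.81          # gravitational acceleration (m/s^2)
f = 1e-4          # midlatitude Coriolis parameter (1/s); assumed value
dx = 2e3          # SSH posting (m), the coarser 2-km footprint
sigma_ssh = 0.02  # uncorrelated SSH noise std (m); illustrative value

# Geostrophy: v = (g/f) d(eta)/dx. Centered difference
# (eta[i+1] - eta[i-1]) / (2 dx) has noise std sigma * sqrt(2) / (2 dx).
sigma_u_theory = (g / f) * sigma_ssh * np.sqrt(2) / (2 * dx)
print(f"theoretical velocity noise: {sigma_u_theory:.2f} m/s")

# Monte Carlo confirmation on a pure-noise SSH field
rng = np.random.default_rng(3)
eta = rng.normal(0, sigma_ssh, (512, 512))
v_noise = (g / f) * (eta[:, 2:] - eta[:, :-2]) / (2 * dx)
print(f"empirical velocity noise:   {v_noise.std():.2f} m/s")
```

With these assumed numbers the noise is of order 0.7 m/s, comparable to or larger than typical geostrophic currents, which is why the abstract concludes that additional smoothing is required before velocity and vorticity can be used.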
Abstract
Ocean wave measurements are of major importance for a number of applications, including climate studies, ship routing, marine engineering, safety at sea, and coastal risk management. Depending on the scales and regions of interest, a variety of data sources may be considered (e.g., in situ data, Voluntary Observing Ship observations, altimeter records, numerical wave models), each with its own characteristics in terms of sampling frequency, spatial coverage, accuracy, and cost. To combine multiple sources of wave information (e.g., in data assimilation schemes for numerical weather prediction models), the error characteristics of each measurement system need to be defined. In this study, we use the triple collocation technique to estimate the random error variance of significant wave heights from a comprehensive collection of collocated in situ, altimeter, and model data. The in situ dataset is a selection of 122 platforms provided by the Copernicus Marine Service In Situ Thematic Center. The altimeter dataset is the ESA Sea State CCI version 1 L2P product. The model dataset is the WW3-LOPS hindcast forced with bias-corrected ERA5 winds and an adjusted T475 parameterization of wave generation and dissipation. Compared to previous similar analyses, the extensive (∼250 000 entries) triple collocation dataset generated for this study provides new insights into the error variability associated with differences in in situ platforms, satellite missions, sea state conditions, and seasonal variability.
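In its covariance form, triple collocation recovers each system's random-error variance from the three pairwise covariances, assuming mutually uncorrelated errors and systems already calibrated to one another. The sketch below applies that identity to made-up synthetic significant-wave-height triplets, not the buoy/altimeter/model dataset of the study.

```python
import numpy as np

def triple_collocation(x, y, z):
    """Covariance-based triple collocation: random-error variances of three
    collocated estimates, assuming mutually uncorrelated, unbiased errors."""
    C = np.cov(np.vstack([x, y, z]))
    ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return ex, ey, ez

# Synthetic SWH triplets ("buoy" / "altimeter" / "model"), noise levels made up
rng = np.random.default_rng(4)
hs = rng.gamma(4.0, 0.5, 50000)               # "true" SWH, mean ~2 m
buoy = hs + rng.normal(0, 0.10, hs.size)
alti = hs + rng.normal(0, 0.15, hs.size)
model = hs + rng.normal(0, 0.25, hs.size)
print(triple_collocation(buoy, alti, model))  # ≈ (0.01, 0.0225, 0.0625)
```

Stratifying such triplets by platform, mission, sea state, or season, as the study does, simply means applying the same identity to each subset of the collocation dataset.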