Browse

You are looking at 71–80 of 3,999 items for:

  • Journal of Atmospheric and Oceanic Technology
  • Refine by Access: Content accessible to me
Rolf G. Lueck

Abstract

This manuscript provides (i) the statistical uncertainty of a shear spectrum and (ii) a new universal shear spectrum, and (iii) shows how these are combined to quantify the quality of a shear spectrum. The data from four collocated shear probes, described in Part I, are used to estimate the spectra of shear, Ψ(k), for wavenumbers k ≥ 2 cpm, from data lengths of 1.0 to 50.5 m, using Fourier transform (FT) segments of 0.5 m length. The differences between the logarithms of pairs of simultaneous shear spectra are stationary, normally distributed, independent of the rate of dissipation, and only weakly dependent on wavenumber. The variance of the logarithm of an individual spectrum, σ²_lnΨ, equals one-half of the variance of these differences and is σ²_lnΨ = 1.25 N_f^(−7/9), where N_f is the number of FT segments used to estimate the spectrum. The term σ_lnΨ provides the statistical basis for constructing the confidence interval of the logarithm of a spectrum and, thus, of the spectrum itself. A universal spectrum of turbulence shear is derived from the nondimensionalization of 14,600 spectra estimated from 5 m segments of data. This spectrum differs from the Nasmyth spectrum and from the spectrum of Panchev and Kesich by 8% near its peak, and it is approximated to within 1% by a new analytic equation. The difference between the logarithms of a measured and the universal spectrum, together with the confidence interval of the spectrum, provides the statistical basis for quantifying the quality of a measured shear (and velocity) spectrum, and of a dissipation estimate derived from that spectrum.
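
As an illustration of how the reported uncertainty could be used in practice, the sketch below (a minimal Python example written for this listing, not code from the paper; the function name and the test spectrum are hypothetical) applies the quoted relation σ²_lnΨ = 1.25 N_f^(−7/9) to construct a 95% confidence band around a measured spectrum.

```python
import numpy as np
from scipy.stats import norm

def log_spectrum_ci(Psi, N_f, confidence=0.95):
    """Confidence band for a shear spectrum Psi estimated from N_f
    Fourier-transform segments, assuming ln(Psi) is normally distributed
    with variance 1.25 * N_f**(-7/9) (the relation quoted in the abstract)."""
    sigma_ln_psi = np.sqrt(1.25 * N_f ** (-7.0 / 9.0))
    z = norm.ppf(0.5 + confidence / 2.0)        # ~1.96 for a 95% interval
    return Psi * np.exp(-z * sigma_ln_psi), Psi * np.exp(z * sigma_ln_psi)

# Hypothetical usage: a spectrum estimated from 9 half-metre FT segments.
k = np.arange(2.0, 100.0)                       # wavenumber (cpm)
Psi = 1e-6 * (k / 10.0) ** (1.0 / 3.0)          # made-up spectral values
lower, upper = log_spectrum_ci(Psi, N_f=9)
```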

Significance Statement

The results reported here can be used to estimate the statistical uncertainty of a spectrum of turbulent shear or velocity that is derived from a finite number of discrete Fourier transform segments, and they can be used to quantify the quality of a spectrum.

Open access
Tingting Qian, Junhong Wei, Yongqiang Sun, Yinghui Lu, and James H. Ruppert Jr.

Abstract

This paper investigates the limitations of calculating the vertical wavelength of downward-phase-propagating gravity waves from the vertical fluctuations of idealized radiosonde balloons in a homogeneous background environment. The wave signals are artificially observed by an idealized weather balloon with a constant ascent rate. The apparent vertical wavelengths obtained from the moving radiosonde balloon are compared with the true vertical wavelength obtained from the dispersion relation, both in the no-wind case and in the constant-zonal-flow case. The node method and the FFT method are employed to calculate the apparent vertical wavelength from the sounding profile. The difference between the node-method apparent vertical wavelength and the true vertical wavelength arises because the ascent rate of the balloon, combined with the downward phase speed, induces a strong Doppler-shifting bias on the apparent vertical wavelength obtained from the observation records. The difference between the FFT apparent vertical wavelength and the true vertical wavelength includes both the Doppler-shifting bias and a mathematical bias. The extent to which the apparent vertical wavelength is reliable is discussed. The Coriolis parameter has a negligible effect on the comparison between the true vertical wavelength and the apparent one.
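
To make the Doppler-shifting argument concrete, the sketch below (an idealized monochromatic illustration under the assumptions noted in the comments, not the authors' code) compares the true vertical wavelength with the apparent wavelength that a steadily ascending balloon records when the wave phase propagates downward, and estimates the apparent wavelength from zero crossings as a crude stand-in for the node method.

```python
import numpy as np

# Idealized illustration: a monochromatic wave with downward phase
# propagation, sampled by a balloon rising at a constant rate w_b.
lam_true = 2000.0                 # true vertical wavelength (m), assumed
c_z = 0.5                         # downward vertical phase speed (m/s), assumed
w_b = 5.0                         # balloon ascent rate (m/s), typical value

m = 2.0 * np.pi / lam_true        # true vertical wavenumber (rad/m)
omega = m * c_z                   # frequency implied by the phase speed (rad/s)

t = np.arange(0.0, 3600.0, 1.0)   # one hour of 1-s samples
z = w_b * t                       # balloon altitude record
signal = np.cos(m * z + omega * t)  # downward phase propagation for m, omega > 0

# Along the ascent the argument is (m + omega/w_b) * z, so the balloon records
# a compressed (Doppler-shifted) wavelength:
lam_apparent = lam_true / (1.0 + c_z / w_b)

# Crude stand-in for the node method: wavelength from zero-crossing spacing.
crossings = z[np.where(np.diff(np.sign(signal)) != 0)[0]]
lam_node = 2.0 * np.mean(np.diff(crossings))

print(lam_true, lam_apparent, lam_node)   # 2000 m vs ~1818 m
```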

Significance Statement

The purpose of this study is to discuss the Doppler-shifting bias induced by the ascent rate of radiosonde balloon when measuring the apparent vertical wavelengths of downward phase propagating gravity waves from the vertical fluctuation of idealized radiosonde balloons. This is an easily omitted problem. However, it can dramatically affect the gravity wave diagnosis when the ascent rate profile is treated as a quasi-instantaneous data. Further, such uncertainty could lead to remarkable errors in other derived wave propagating properties (e.g., phase velocity, which is the key input parameter in gravity wave parameterization).

Open access
Lazaros Oreopoulos, Nayeong Cho, Dongmin Lee, Matthew Lebsock, and Zhibo Zhang

Abstract

We evaluate two stochastic subcolumn generators used in GCMs to emulate subgrid cloud variability, enabling comparisons with satellite observations and simulations of certain physical processes. Our evaluation necessitated the creation of a reference observational dataset that resolves horizontal and vertical cloud variability. The dataset combines two CloudSat cloud products that, when blended at ∼200 m vertical and ∼2 km horizontal scales, resolve the two-dimensional cloud optical depth variability of liquid, ice, and mixed-phase clouds. Upon segmenting the dataset into individual “scenes,” mean profiles of the cloud fields are passed as input to the generators, which produce scene-level cloud subgrid variability. The assessment of generator performance at the scale of individual scenes and in a mean sense is largely based on inferred joint histograms that partition cloud fraction within predetermined combinations of cloud-top pressure and cloud optical thickness ranges. Our main finding is that both generators tend to underestimate optically thin clouds, while one of them also tends to overestimate some cloud types of moderate and high optical thickness. Associated radiative flux errors, calculated by applying a simple transformation to the cloud fraction histogram errors, reach values as high as ∼3 W m⁻² for the cloud radiative effect in the shortwave part of the spectrum.
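
For readers unfamiliar with the joint-histogram diagnostic, the sketch below shows a generic ISCCP-style construction in Python (the bin boundaries, array names, and synthetic subcolumn values are assumptions made for illustration, not the authors' dataset or code).

```python
import numpy as np

def ctp_tau_histogram(ctp_hpa, tau, ctp_edges=None, tau_edges=None):
    """Fraction of subcolumns falling in each cloud-top-pressure /
    optical-thickness bin (an ISCCP-style joint histogram). Clear subcolumns
    (tau <= 0 or NaN) are excluded from the numerator but kept in the
    denominator, so the histogram sums to the total cloud fraction."""
    if ctp_edges is None:   # commonly used ISCCP-style CTP boundaries (hPa)
        ctp_edges = np.array([50.0, 180, 310, 440, 560, 680, 800, 1100])
    if tau_edges is None:   # commonly used ISCCP-style tau boundaries
        tau_edges = np.array([0.3, 1.3, 3.6, 9.4, 23.0, 60.0, 380.0])
    cloudy = np.isfinite(tau) & (tau > 0)
    hist, _, _ = np.histogram2d(ctp_hpa[cloudy], tau[cloudy],
                                bins=[ctp_edges, tau_edges])
    return hist / ctp_hpa.size      # cloud fraction per (CTP, tau) bin

# Hypothetical usage with made-up subcolumn values:
rng = np.random.default_rng(0)
ctp = rng.uniform(100.0, 1000.0, 500)           # hPa
tau = rng.lognormal(1.0, 1.0, 500)              # dimensionless
cf_hist = ctp_tau_histogram(ctp, tau)           # shape (7, 6)
```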

Significance Statement

The purpose of the paper is to assess the realism of relatively simple ways of producing fine-scale cloud variability in global models from coarsely resolved cloud properties. The assessment is achieved via comparisons to observed cloud fields where the fine-scale variability is known in both the horizontal and vertical directions. Our results show that while the generators have considerable skill, they still suffer from consistent deficiencies that need to be addressed with further development guided by appropriate observations.

Restricted access
Meng-Yuan Chen, Ching-Lun Su, Yuan-Han Chang, and Yen-Hsyang Chu

Abstract

In this study, a data processing method based on the empirical mode decomposition (EMD) of the Hilbert–Huang transform (HHT) is developed for the Chung-Li VHF radar to identify and remove aircraft clutter, thereby improving atmospheric wind measurements. The EMD decomposes the echo signals into so-called intrinsic mode functions (IMFs) in the time domain; the aircraft clutter, represented by a number of specific IMFs, can then be identified in the radar returns and separated from the clear-air echoes observed concurrently by the VHF radar. The identified clutter is validated against aircraft information collected by an Automatic Dependent Surveillance–Broadcast (ADS-B) receiver. The validation shows that the proposed algorithm can detect aircraft echoes that are mixed with clear-air echoes. After applying the algorithm to the experimental data, the atmospheric horizontal wind velocities are estimated with the aircraft clutter removed. To evaluate the degree of improvement in the horizontal wind measurement, the horizontal wind velocities from the Chung-Li VHF radar are compared with those from a collocated UHF wind profiler radar. The results show that the use of EMD and the proposed data processing method can effectively reduce the uncertainty and substantially improve the precision and reliability of the horizontal wind measurement.
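
A minimal sketch of the reconstruction step is given below, assuming the intrinsic mode functions have already been obtained from an EMD implementation (e.g., the open-source PyEMD package); the burstiness criterion used to flag clutter IMFs is a placeholder for illustration only, not the paper's ADS-B-validated identification.

```python
import numpy as np

def remove_clutter_imfs(imfs, clutter_index):
    """Reconstruct the clear-air signal by summing all intrinsic mode
    functions (IMFs) except those identified as aircraft clutter.
    `imfs` has shape (n_imfs, n_samples)."""
    keep = np.ones(imfs.shape[0], dtype=bool)
    keep[clutter_index] = False
    return imfs[keep].sum(axis=0)

def burstiness(imf):
    """Placeholder clutter criterion (purely illustrative): transient aircraft
    echoes concentrate their energy in a short burst, so a large peak-to-mean
    power ratio is used here to flag candidate clutter IMFs."""
    power = imf ** 2
    return power.max() / (power.mean() + 1e-12)

# Hypothetical usage, with `imfs` obtained from any EMD implementation
# (e.g., PyEMD's EMD().emd(signal)); the threshold of 50 is arbitrary:
# clutter = [i for i, imf in enumerate(imfs) if burstiness(imf) > 50.0]
# clean_signal = remove_clutter_imfs(imfs, np.array(clutter, dtype=int))
```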

Restricted access
Noureddine Semane, Richard Anthes, Jeremiah Sjoberg, Sean Healy, and Benjamin Ruston

Abstract

We compare two seemingly different methods of estimating the random error statistics (uncertainties) of observations, the three-cornered hat (3CH) method and the Desroziers method, and show several examples of estimated uncertainties of COSMIC-2 (C2) radio occultation (RO) observations. The two methods yield similar results, attesting to the validity of both. The small differences provide insight into the sensitivity of the methods to their assumptions and computational details. These estimates of RO error statistics differ considerably from several RO error models used by operational weather forecast centers, suggesting that the impact of RO observations on forecasts can be improved by adjusting the RO error models to agree more closely with the RO error statistics. Both methods show RO uncertainty estimates that vary with latitude. In the troposphere, uncertainties are higher in the tropics than in the subtropics and middle latitudes. In the upper stratosphere and lower mesosphere we find the reverse, with tropical uncertainties slightly smaller than those in the subtropics and higher latitudes. The uncertainty estimates from the two techniques also show similar variations between a 31-day period during the Northern Hemisphere tropical cyclone season (16 August–15 September 2020) and a month near the vernal equinox (April 2021). Finally, we find a relationship between the vertical variation of the C2 estimated uncertainties and atmospheric variability, as measured by the standard deviation of the C2 sample. The convergence of the error estimates and the standard deviations above 40 km indicates a lessening impact of assimilating RO above this level.
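
For reference, the sketch below implements the textbook form of the three-cornered hat estimate for three collocated datasets with mutually uncorrelated errors; it is a generic illustration of the 3CH idea, not the authors' processing code.

```python
import numpy as np

def three_cornered_hat(x, y, z):
    """Error variance of each of three collocated datasets, assuming their
    errors are mutually uncorrelated (the standard 3CH assumption):
        var_x = 0.5 * ( <(x-y)^2> + <(x-z)^2> - <(y-z)^2> )
    and cyclic permutations. The mean difference (bias) of each pair is
    removed before the averages are taken."""
    def d(a, b):
        diff = a - b
        return np.mean((diff - diff.mean()) ** 2)
    var_x = 0.5 * (d(x, y) + d(x, z) - d(y, z))
    var_y = 0.5 * (d(y, x) + d(y, z) - d(x, z))
    var_z = 0.5 * (d(z, x) + d(z, y) - d(x, y))
    return var_x, var_y, var_z

# Hypothetical check with synthetic data (true error variances 0.01, 0.04, 0.09):
rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0.0, 10.0, 5000))
x = truth + 0.10 * rng.standard_normal(5000)
y = truth + 0.20 * rng.standard_normal(5000)
z = truth + 0.30 * rng.standard_normal(5000)
print(three_cornered_hat(x, y, z))
```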

Significance Statement

Uncertainties of observations are of general interest, and knowledge of them is important for data assimilation in numerical weather prediction models. This paper compares two methods of estimating these uncertainties and shows that they give nearly identical results under certain conditions. The estimates of the COSMIC-2 bending angle uncertainties, and how they compare with the bending angle error models assumed at several operational weather centers, suggest an opportunity to improve the impact of RO observations in numerical model forecasts. Finally, the relationship between the COSMIC-2 bending angle errors and atmospheric variability provides insight into the sources of RO observational uncertainties.

Open access
G. Matthews

Abstract

Better predictions of global warming can be enabled by tuning legacy and current computer simulations to Earth radiation budget (ERB) measurements. Such orbital measurements have existed since the 1970s, and next-generation instruments, such as the one called “Libera,” are in production. Climate communities have requested that new ERB observing system missions like these achieve significantly improved SI traceability and stability of their calibration, in order to prevent untracked instrument calibration drifts that could lead to false conclusions about climate change. Based on experience from previous ERB missions, the alternative concept presented here uses direct solar views for calibration, providing cloud-scale Earth measurement resolution at <1% accuracy. It dispenses with complex calibration technology already in use, such as solar diffusers and onboard lamps, allowing new, lower-cost and lower-risk spectral characterization concepts to be introduced with today’s technology. Also, in contrast to near-future ERB concepts already in production, it enables in-flight, wavelength-dependent calibration of Earth-observing telescopes using direct solar views through narrowband filters that are continuously characterized on orbit.

Open access
Dudley B. Chelton, Roger M. Samelson, and J. Thomas Farrar

Abstract

The Ka-band Radar Interferometer (KaRIn) on the Surface Water and Ocean Topography (SWOT) satellite will revolutionize satellite altimetry by measuring sea surface height (SSH) with unprecedented accuracy and resolution across two 50-km swaths separated by a 20-km gap. The original plan to provide an SSH product with a footprint diameter of 1 km has changed to providing two SSH data products with footprint diameters of 0.5 and 2 km. The swath-averaged standard deviations and wavenumber spectra of the uncorrelated measurement errors for these footprints are derived from the SWOT science requirements that are expressed in terms of the wavenumber spectrum of SSH after smoothing with a filter cutoff wavelength of 15 km. The availability of two-dimensional fields of SSH within the measurement swaths will provide the first spaceborne estimates of instantaneous surface velocity and vorticity through the geostrophic equations. The swath-averaged standard deviations of the noise in estimates of velocity and vorticity derived by propagation of the uncorrelated SSH measurement noise through the finite difference approximations of the derivatives are shown to be too large for the SWOT data products to be used directly in most applications, even for the coarsest footprint diameter of 2 km. It is shown from wavenumber spectra and maps constructed from simulated SWOT data that additional smoothing will be required for most applications of SWOT estimates of velocity and vorticity. Equations are presented for the swath-averaged standard deviations and wavenumber spectra of residual noise in SSH and geostrophically computed velocity and vorticity after isotropic two-dimensional smoothing for any user-defined smoother and filter cutoff wavelength of the smoothing.
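
The following sketch illustrates the kind of propagation-of-error calculation described above for uncorrelated SSH noise passed through centered finite differences; the noise level, footprint spacing, and latitude are assumed values chosen for illustration, not the SWOT requirements or the paper's exact formulas.

```python
import numpy as np

# Generic propagation of uncorrelated SSH noise through centered differences
# (illustrative only; the noise level, spacing, and latitude are assumed).
g = 9.81                                             # gravity (m s^-2)
lat = 35.0                                           # latitude (deg), assumed
f = 2.0 * 7.2921e-5 * np.sin(np.radians(lat))        # Coriolis parameter (s^-1)
sigma_h = 0.02                                       # SSH noise std (m), assumed
dx = 2000.0                                          # footprint spacing (m), assumed

# Geostrophic velocity u = -(g/f) dη/dy with a centered difference
# (η_{j+1} - η_{j-1}) / (2 dx): two independent noise contributions.
sigma_u = (g / f) * sigma_h * np.sqrt(2.0) / (2.0 * dx)

# Relative vorticity ζ = (g/f) ∇²η with the 5-point Laplacian
# (η_E + η_W + η_N + η_S - 4 η_0) / dx²: noise variance scales as 20 σ_h².
sigma_zeta = (g / f) * sigma_h * np.sqrt(20.0) / dx**2

print(f"velocity noise ~ {sigma_u:.2f} m/s, vorticity noise ~ {sigma_zeta:.1e} s^-1")
```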

Open access
Luke Kachelein, Bruce D. Cornuelle, Sarah T. Gille, and Matthew R. Mazloff

Abstract

A novel tidal analysis package (red_tide) has been developed to characterize low-amplitude non-phase-locked tidal energy and dominant tidal peaks in noisy, irregularly sampled, or gap-prone time series. We recover tidal information by expanding conventional harmonic analysis to include prior information and assumptions about the statistics of a process, such as the assumption of a spectrally colored background, treated as nontidal noise. This is implemented using Bayesian maximum posterior estimation and assuming Gaussian prior distributions. We utilize a hierarchy of test cases, including synthetic data and observations, to evaluate this method and its relevance to analysis of data with a tidal component and an energetic nontidal background. Analysis of synthetic test cases shows that the methodology provides robust tidal estimates. When the background energy spectrum is nearly spectrally white, red_tide results replicate results from ordinary least squares (OLS) commonly used in other tidal packages. When background spectra are red (a spectral slope of −2 or steeper), red_tide’s estimates represent a measurable improvement over OLS. The approach highlights the presence of tidal variability and low-amplitude constituents in observations by allowing arbitrarily configurable fitted frequencies and prior statistics that constrain solutions. These techniques have been implemented in MATLAB in order to analyze tidal data with non-phase-locked components and an energetic background that pose challenges to the commonly used OLS approach.
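
A compact way to see how the prior enters is the Gaussian maximum a posteriori estimate for a harmonic regression, sketched below in Python for illustration (red_tide itself is a MATLAB package; the function, its arguments, and the synthetic data are generic assumptions, not the package's interface).

```python
import numpy as np

def map_harmonic_fit(t, y, freqs, sigma_noise, sigma_prior):
    """Gaussian maximum a posteriori estimate of cosine/sine amplitudes at the
    given frequencies (cycles per unit time) for possibly irregular sample
    times t. With observation-error covariance R = sigma_noise**2 * I and a
    diagonal Gaussian prior P = diag(sigma_prior**2) on the amplitudes,
        x_hat = (H^T R^-1 H + P^-1)^-1 H^T R^-1 y,
    which reduces to ordinary least squares as the prior variances grow."""
    H = np.hstack([np.column_stack((np.cos(2 * np.pi * f * t),
                                    np.sin(2 * np.pi * f * t))) for f in freqs])
    Pinv = np.diag(1.0 / np.repeat(np.asarray(sigma_prior, float) ** 2, 2))
    A = H.T @ H / sigma_noise**2 + Pinv
    b = H.T @ y / sigma_noise**2
    return np.linalg.solve(A, b)

# Hypothetical usage: M2 and K1 amplitudes from gappy hourly sea level data.
rng = np.random.default_rng(2)
t = np.sort(rng.choice(np.arange(0.0, 24.0 * 90.0), size=1500, replace=False))
y = (0.5 * np.cos(2 * np.pi * t / 12.42) + 0.2 * np.sin(2 * np.pi * t / 23.93)
     + 0.1 * rng.standard_normal(t.size))
amps = map_harmonic_fit(t, y, freqs=[1 / 12.42, 1 / 23.93],
                        sigma_noise=0.1, sigma_prior=[1.0, 1.0])
```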

Open access
Emy Alerskans
,
Cristian Lussana
,
Thomas N. Nipen
, and
Ivar A. Seierstad

Abstract

Crowdsourced meteorological observations are becoming more prevalent, and in some countries their spatial resolution already far exceeds that of traditional networks. However, because of the larger uncertainty associated with these observations, quality control (QC) is an essential step. Spatial QC methods are especially well suited to such dense networks since they utilize information from nearby stations. The performance of such methods usually depends on the choice of their parameters; there is, however, currently no established procedure for choosing the optimal settings of a spatial QC method. In this study we present a framework for tuning a spatial QC method for a dense network of meteorological observations. The framework perturbs the observations with artificial errors to simulate the effect of erroneous measurements. A cost function based on the hit rate and the false alarm rate is introduced, and the parameters of the spatial QC method are tuned so that this cost function is optimized. The application of the framework to the tuning of a spatial QC method for a dense network of crowdsourced observations in Denmark is presented. Our findings show that the optimal settings vary with the error magnitude, time of day, and station density. Furthermore, we show that when the station network is sparse, better performance of the spatial QC method can be obtained by including crowdsourced observations from another, denser network.
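
The sketch below illustrates the general idea of tuning a QC check against artificially perturbed observations; the perturbation scheme, the simple buddy-check stand-in, and the particular combination of hit rate and false alarm rate in the cost function are assumptions made for illustration, not the paper's definitions.

```python
import numpy as np

rng = np.random.default_rng(4)

def seed_errors(obs, fraction=0.1, magnitude=5.0):
    """Perturb a random subset of observations with gross artificial errors."""
    seeded = rng.random(obs.size) < fraction
    perturbed = obs + seeded * magnitude * rng.choice([-1.0, 1.0], obs.size)
    return perturbed, seeded

def buddy_check(obs, neighbours_mean, threshold):
    """Flag observations that deviate from their neighbourhood mean by more
    than `threshold` (a stand-in for the spatial QC test being tuned)."""
    return np.abs(obs - neighbours_mean) > threshold

def cost(flags, seeded):
    """Assumed cost: miss rate plus false-alarm rate (both to be minimised)."""
    hit_rate = np.sum(flags & seeded) / max(seeded.sum(), 1)
    far = np.sum(flags & ~seeded) / max((~seeded).sum(), 1)
    return (1.0 - hit_rate) + far

# Toy tuning loop over the QC threshold (the parameter being optimised):
truth = 15.0 + rng.standard_normal(5000)           # "good" temperature obs (°C)
obs, seeded = seed_errors(truth)
neighbours_mean = np.full(obs.size, truth.mean())  # crude neighbourhood estimate
best = min(np.linspace(0.5, 8.0, 16),
           key=lambda thr: cost(buddy_check(obs, neighbours_mean, thr), seeded))
```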

Open access
Zhijin Qiu, Tong Hu, Bo Wang, Jing Zou, and Zhiqian Li

Abstract

The evaporation duct is an anomalous refractive phenomenon that occurs widely and frequently at the boundary between the atmosphere and the ocean and directly affects electromagnetic wave propagation. In recent years, the use of meteorological and hydrological data to predict the evaporation duct height has become an emerging and promising approach. Several evaporation duct models have been proposed based on Monin–Obukhov similarity theory. However, each model adopts different stability functions and roughness length parameterizations, so their prediction accuracies differ under different environmental conditions. To improve the prediction accuracy of the evaporation duct under different environmental conditions, a model selection optimization method (MSOM) for the evaporation duct model is proposed based on sensitivity analysis. According to the sensitivity of each model to its input parameters, analyzed using the sensor observation accuracy, curve graphs, and Sobol sensitivity indices, the input parameters are divided into several intervals, and the optimal model is then selected within each interval. The method was established using numerical simulation data from local areas of the South China Sea, and its accuracy was verified against observational data from an offshore observation platform in the South China Sea. The results show that the MSOM can effectively improve the prediction accuracy of the evaporation duct height: under unstable conditions the maximum relative error is reduced by 7.1%, and under stable conditions the relative error is reduced by 10.7%.
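
A schematic of the selection step is sketched below; the interval boundaries, the stability proxy, and the candidate model names are placeholders, not those derived from the paper's sensitivity analysis.

```python
# Illustrative model-selection step: choose an evaporation-duct model according
# to the interval in which an input parameter falls (here only the air-sea
# temperature difference is used; the real method partitions several inputs).
def select_duct_model(air_sea_temp_diff_K):
    """Return the name of the duct model assumed to perform best in this
    interval (placeholder names and boundaries, for illustration only)."""
    if air_sea_temp_diff_K < -1.0:      # strongly unstable
        return "model_A"
    elif air_sea_temp_diff_K < 0.5:     # near-neutral
        return "model_B"
    else:                               # stable
        return "model_C"

duct_model = select_duct_model(air_sea_temp_diff_K=-2.3)   # -> "model_A"
```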

Significance Statement

The evaporation duct height has a significant effect on marine radar and wireless communication applications. Several evaporation duct models have been proposed for estimating the evaporation duct height; however, different models are applicable to different meteorological and hydrological environments, and a single model cannot achieve accurate evaporation duct height predictions in all environments. We propose a model selection optimization method for the evaporation duct model based on sensitivity analysis. This method dynamically selects the optimal model according to the meteorological and hydrological environment and improves the prediction accuracy of the evaporation duct height. Under unstable conditions, the maximum relative error is reduced by 7.1%, and under stable conditions, the relative error is reduced by 10.7%.

Open access