Browse

You are looking at 91–100 of 5,278 items for:

  • Journal of Atmospheric and Oceanic Technology
  • Refine by Access: All Content
Alain Zuber, Wolfgang Stremme, Michel Grutter, David K. Adams, Thomas Blumenstock, Frank Hase, Claudia Rivera, Noemie Taquet, Alejandro Bezanilla, and Eugenia González de Castillo

Abstract

Total column H2O is measured by two remote sensing techniques at the Altzomoni Atmospheric Observatory (19.12°N, 98.65°W, 4000 m above sea level), a high-altitude, tropical background site in central Mexico. A ground-based solar absorption FTIR spectrometer that is part of the Network for the Detection of Atmospheric Composition Change (NDACC) is used to retrieve water vapor in three spectral regions (6074–6471, 2925–2941, and 1110–1253 cm−1), and the results are compared to data obtained from a global positioning system (GPS) receiver that is part of the TLALOCNet GPS-meteorological network. Strong correlations are obtained between the coincident hourly means of the three FTIR products and the GPS data, and a small relative bias and a correction factor could be determined for each product against the more consistent GPS record. Retrievals from the 2925–2941 cm−1 spectral region have the highest correlation with GPS [coefficient of determination (R²) = 0.998, standard deviation (STD) = 0.18 cm (78.39%), mean difference = 0.04 cm (8.33%)], although the other products are also highly correlated [R² ≥ 0.99, STD ≤ 0.20 cm (<90%), mean difference ≤ 0.1 cm (<24%)]. Clear-sky dry bias (CSDB) values are reduced to <10% (<0.20 cm) when coincident hourly means are used in the comparison. Using the GPS and FTIR water vapor products together leads to a more complete description of the diurnal and seasonal cycles of water vapor. We describe the water vapor climatology with both complementary datasets, while pointing out the importance of considering the clear-sky dry bias that arises from the large diurnal and seasonal variability of water vapor at this high-altitude tropical site.
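
As a rough illustration of the comparison described above, the following is a minimal Python sketch of computing coincident hourly means and the associated statistics (R², standard deviation and mean of the differences, and a scale-type correction factor) for two total-column water vapor records. The series names and the through-the-origin scale factor are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' code): coincident hourly means and
# comparison statistics for two precipitable water vapor records.
# `ftir` and `gps` are hypothetical pandas Series of total column
# water vapor (cm) indexed by observation time.
import numpy as np
import pandas as pd

def compare_pwv(ftir: pd.Series, gps: pd.Series) -> dict:
    # Hourly means, retained only where both instruments report in the same hour
    hourly = pd.concat(
        {"ftir": ftir.resample("1H").mean(), "gps": gps.resample("1H").mean()},
        axis=1,
    ).dropna()
    diff = hourly["ftir"] - hourly["gps"]
    return {
        "R2": np.corrcoef(hourly["ftir"], hourly["gps"])[0, 1] ** 2,
        "mean_diff_cm": diff.mean(),  # relative bias
        "std_diff_cm": diff.std(),
        # Least-squares scale factor (through the origin) correcting FTIR toward GPS
        "scale_factor": float(hourly["gps"] @ hourly["ftir"])
                        / float(hourly["ftir"] @ hourly["ftir"]),
    }
```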

Restricted access
Igor R. Ivić

Abstract

Simulated weather time series are often used in engineering and research practice to assess radar system behavior and/or to evaluate the performance of novel techniques. There are two main approaches to simulating weather time series. One is based on summing the individual returns from a large number of distributed weather particles to create a cumulative return. The other, of interest herein, aims to create simulated random signals from predetermined values of the radar observables. So far, several methods to simulate weather time series using the latter approach have been proposed. All of these methods are based on applying the inverse discrete Fourier transform to a spectral model with added random fluctuations. To meet a desired simulation accuracy, such an approach typically requires generating a number of samples larger than the base sample number, due to the properties of the discrete Fourier transform. In that regard, a novel method to determine the simulation length is proposed. It is based on a detailed theoretical development that demonstrates the exact source of the errors incurred by this approach. Furthermore, a simple method for time series simulation that is based on the autocorrelation matrix exists. This method neither involves manipulations in the spectral domain nor requires generating more samples than the base sample number. Herein, this method is suggested for weather time series simulation, and its accuracy and efficiency are analyzed and compared to those of the spectral-based approach.
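
For context, the spectral (IDFT) approach the abstract refers to can be sketched as follows: draw exponentially distributed power fluctuations about a Gaussian Doppler spectrum model, assign independent uniform phases, and invert to the time domain. This is a minimal Python sketch under common textbook assumptions; parameter names and scaling conventions are illustrative, not taken from the paper.

```python
# Minimal sketch of the IDFT (spectral) simulation approach, under common
# textbook assumptions; parameter names are illustrative, not the paper's.
import numpy as np

def simulate_iq(m, snr_db, vel, width, va, rng=None):
    """Simulate m complex weather I/Q samples with mean Doppler velocity
    `vel` (m/s), spectrum width `width` (m/s), and Nyquist velocity `va` (m/s)."""
    rng = np.random.default_rng() if rng is None else rng
    v = np.fft.fftfreq(m) * 2.0 * va  # velocity axis of the DFT bins
    # Gaussian spectral model (unit signal power) plus a white noise floor
    s = np.exp(-((v - vel) ** 2) / (2.0 * width ** 2))
    s = s / s.sum() + 10.0 ** (-snr_db / 10.0) / m
    # Exponentially distributed power fluctuations about the model spectrum,
    # with independent uniform phases, then inverse DFT to the time domain
    p = -s * np.log(rng.uniform(size=m))
    x = np.sqrt(p) * np.exp(2j * np.pi * rng.uniform(size=m))
    return np.fft.ifft(x) * m  # scaled so mean signal power is ~unity
```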

Significance Statement

All research articles published so far on the topic of weather time series simulation propose the use of the inverse discrete Fourier transform (IDFT) when the simulation is based on desired Doppler moment values. Herein, a detailed theoretical development that demonstrates the exact source of the errors incurred by this approach is presented. Also, a novel method to determine the simulation length, based on the theoretical error computation, is proposed. As an alternative, a computationally efficient general method (not using the IDFT), previously developed for the simulation of sequences with desired properties, is suggested for weather time series simulation. It is demonstrated that the latter method produces accurate results within overall shorter computational times. Moreover, it is shown that the use of a graphics processing unit (GPU), ubiquitous in modern computers, significantly reduces computational times compared to the sole use of a central processing unit (CPU) for all simulation-related calculations.

Restricted access
Noureddine Semane, Richard Anthes, Jeremiah Sjoberg, Sean Healy, and Benjamin Ruston

Abstract

We compare two seemingly different methods of estimating the random error statistics (uncertainties) of observations, the three-cornered hat (3CH) method and the Desroziers method, and show several examples of estimated uncertainties of COSMIC-2 (C2) radio occultation (RO) observations. The two methods yield similar results, attesting to the validity of both. The small differences provide insight into the sensitivity of the methods to their assumptions and computational details. These estimates of RO error statistics differ considerably from several RO error models used by operational weather forecast centers, suggesting that the impact of RO observations on forecasts could be improved by adjusting the RO error models to agree more closely with the estimated RO error statistics. Both methods show RO uncertainty estimates that vary with latitude. In the troposphere, uncertainties are higher in the tropics than in the subtropics and middle latitudes. In the upper stratosphere–lower mesosphere, we find the reverse, with tropical uncertainties slightly less than those in the subtropics and higher latitudes. The uncertainty estimates from the two techniques also show similar variations between a 31-day period during the Northern Hemisphere tropical cyclone season (16 August–15 September 2020) and a month near the vernal equinox (April 2021). Finally, we find a relationship between the vertical variation of the C2 estimated uncertainties and atmospheric variability, as measured by the standard deviation of the C2 sample. The convergence of the error estimates and the standard deviations above 40 km indicates a lessening impact of assimilating RO observations above this level.
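
For reference, the core of the 3CH method is a simple identity: with three collocated datasets whose errors are assumed mutually uncorrelated, each error variance follows from the variances of the pairwise differences. A minimal Python sketch, with hypothetical collocated arrays:

```python
# Minimal sketch of the 3CH identity: error variances of three collocated
# datasets with mutually uncorrelated errors, from pairwise difference
# variances. Inputs are hypothetical 1D arrays of collocated values.
import numpy as np

def three_cornered_hat(x, y, z):
    vxy = np.var(x - y)
    vxz = np.var(x - z)
    vyz = np.var(y - z)
    return {
        "err_var_x": 0.5 * (vxy + vxz - vyz),
        "err_var_y": 0.5 * (vxy + vyz - vxz),
        "err_var_z": 0.5 * (vxz + vyz - vxy),
    }
```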

Significance Statement

Uncertainties of observations are of general interest, and knowledge of them is important for data assimilation in numerical weather prediction models. This paper compares two methods of estimating these uncertainties and shows that they give nearly identical results under certain conditions. The estimated COSMIC-2 bending angle uncertainties, and how they compare to the bending angle error models assumed at several operational weather centers, suggest an opportunity to improve the impact of RO observations in numerical model forecasts. Finally, the relationship between the COSMIC-2 bending angle errors and atmospheric variability provides insight into the sources of RO observational uncertainties.

Open access
G. Matthews

Abstract

Better predictions of global warming can be enabled by tuning legacy and current computer simulations to Earth radiation budget (ERB) measurements. Such orbital measurements have existed since the 1970s, and next-generation instruments such as one called “Libera” are in production. Climate communities have requested that new ERB observing missions like these achieve significantly improved calibration accuracy, SI traceability, and stability. This is to prevent untracked instrument calibration drifts that could lead to false conclusions about climate change. Based on experience from previous ERB missions, the alternative concept presented here uses direct solar views for calibration, providing cloud-size Earth measurement resolution at <1% accuracy. However, it omits complex calibration technology already in use, such as solar diffusers and onboard lamps, allowing new, lower-cost/risk spectral characterization concepts to be introduced with today’s technology. Also, in contrast to near-future ERB concepts already in production, this enables in-flight wavelength-dependent calibration of Earth-observing telescopes using direct solar views through narrowband filters that are continuously characterized on orbit.

Open access
Dudley B. Chelton, Roger M. Samelson, and J. Thomas Farrar

Abstract

The Ka-band Radar Interferometer (KaRIn) on the Surface Water and Ocean Topography (SWOT) satellite will revolutionize satellite altimetry by measuring sea surface height (SSH) with unprecedented accuracy and resolution across two 50-km swaths separated by a 20-km gap. The original plan to provide an SSH product with a footprint diameter of 1 km has changed to providing two SSH data products with footprint diameters of 0.5 and 2 km. The swath-averaged standard deviations and wavenumber spectra of the uncorrelated measurement errors for these footprints are derived from the SWOT science requirements that are expressed in terms of the wavenumber spectrum of SSH after smoothing with a filter cutoff wavelength of 15 km. The availability of two-dimensional fields of SSH within the measurement swaths will provide the first spaceborne estimates of instantaneous surface velocity and vorticity through the geostrophic equations. The swath-averaged standard deviations of the noise in estimates of velocity and vorticity derived by propagation of the uncorrelated SSH measurement noise through the finite difference approximations of the derivatives are shown to be too large for the SWOT data products to be used directly in most applications, even for the coarsest footprint diameter of 2 km. It is shown from wavenumber spectra and maps constructed from simulated SWOT data that additional smoothing will be required for most applications of SWOT estimates of velocity and vorticity. Equations are presented for the swath-averaged standard deviations and wavenumber spectra of residual noise in SSH and geostrophically computed velocity and vorticity after isotropic two-dimensional smoothing for any user-defined smoother and filter cutoff wavelength of the smoothing.
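
To make the propagation step concrete, here is a minimal Python sketch of the geostrophic velocity and vorticity computation from a gridded SSH field using centered finite differences, together with the standard deviation of velocity noise implied by uncorrelated SSH noise. The grid layout, variable names, and centered-difference assumption are illustrative, not the paper's exact formulation.

```python
# Minimal sketch (illustrative, not the paper's formulation): geostrophic
# velocity and vorticity from a gridded SSH field via centered differences,
# and the propagated noise level from uncorrelated SSH errors.
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def geostrophy(ssh, dx, dy, f):
    """ssh: 2D array of sea surface height (m), rows = y, cols = x;
    dx, dy: grid spacing (m); f: Coriolis parameter (1/s)."""
    deta_dy, deta_dx = np.gradient(ssh, dy, dx)
    u = -(G / f) * deta_dy  # zonal geostrophic velocity
    v = (G / f) * deta_dx   # meridional geostrophic velocity
    zeta = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)
    return u, v, zeta       # zeta: relative vorticity (1/s)

def velocity_noise_std(sigma_ssh, dx, f):
    # A centered difference (eta[i+1] - eta[i-1]) / (2 dx) of uncorrelated
    # noise with std sigma_ssh has std sigma_ssh / (sqrt(2) dx)
    return (G / f) * sigma_ssh / (np.sqrt(2.0) * dx)
```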

Open access
Guillaume Dodet, Saleh Abdalla, Matias Alday, Mickaël Accensi, Jean Bidlot, and Fabrice Ardhuin

Abstract

Ocean wave measurements are of major importance for a number of applications, including climate studies, ship routing, marine engineering, safety at sea, and coastal risk management. Depending on the scales and regions of interest, a variety of data sources may be considered (e.g., in situ data, Voluntary Observing Ship observations, altimeter records, numerical wave models), each one with its own characteristics in terms of sampling frequency, spatial coverage, accuracy, and cost. To combine multiple sources of wave information (e.g., in data assimilation schemes for numerical weather prediction models), the error characteristics of each measurement system need to be defined. In this study, we use the triple collocation technique to estimate the random error variance of significant wave heights from a comprehensive collection of collocated in situ, altimeter, and model data. The in situ dataset is a selection of 122 platforms provided by the Copernicus Marine Service In Situ Thematic Center. The altimeter dataset is the ESA Sea State CCI version 1 L2P product. The model dataset is the WW3-LOPS hindcast forced with bias-corrected ERA5 winds and an adjusted T475 parameterization of wave generation and dissipation. Compared to previous similar analyses, the extensive (∼250 000 entries) triple collocation dataset generated for this study provides new insights into the error variability associated with differences in in situ platforms, satellite missions, sea state conditions, and seasonal variability.
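
For reference, the covariance form of triple collocation can be sketched in a few lines: assuming three collocated systems with mutually uncorrelated errors (and after any cross-calibration), each random error variance follows from the sample covariances. A minimal Python sketch with hypothetical collocated significant wave height arrays:

```python
# Minimal sketch of the covariance form of triple collocation, assuming
# mutually uncorrelated errors and previously cross-calibrated systems.
# a, b, c are hypothetical 1D arrays of collocated significant wave heights.
import numpy as np

def triple_collocation(a, b, c):
    C = np.cov(np.vstack([a, b, c]))  # 3x3 sample covariance matrix
    return {
        "err_var_a": C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2],
        "err_var_b": C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2],
        "err_var_c": C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1],
    }
```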

Restricted access
Luke Kachelein, Bruce D. Cornuelle, Sarah T. Gille, and Matthew R. Mazloff

Abstract

A novel tidal analysis package (red_tide) has been developed to characterize low-amplitude non-phase-locked tidal energy and dominant tidal peaks in noisy, irregularly sampled, or gap-prone time series. We recover tidal information by expanding conventional harmonic analysis to include prior information and assumptions about the statistics of the process, such as the assumption of a spectrally colored background treated as nontidal noise. This is implemented using Bayesian maximum posterior estimation with Gaussian prior distributions. We use a hierarchy of test cases, including synthetic data and observations, to evaluate the method and its relevance to the analysis of data with a tidal component and an energetic nontidal background. Analysis of the synthetic test cases shows that the methodology provides robust tidal estimates. When the background energy spectrum is nearly white, red_tide replicates the results of the ordinary least squares (OLS) estimation commonly used in other tidal packages. When the background spectrum is red (a spectral slope of −2 or steeper), red_tide’s estimates represent a measurable improvement over OLS. The approach highlights the presence of tidal variability and low-amplitude constituents in observations by allowing arbitrarily configurable fitted frequencies and prior statistics that constrain the solutions. These techniques have been implemented in MATLAB to analyze tidal data with non-phase-locked components and an energetic background, which pose challenges for the commonly used OLS approach.
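
For intuition, Gaussian maximum posterior estimation with harmonic basis functions reduces to a regularized least squares problem: the OLS normal equations acquire a noise covariance R (the colored nontidal background) and a prior coefficient covariance P. The following minimal Python sketch illustrates that estimator; it is not the red_tide package itself, and all names are illustrative.

```python
# Minimal sketch of a Gaussian maximum posterior harmonic fit (not the
# red_tide package itself; all names are illustrative). R is the noise
# covariance of the colored nontidal background; P is the prior covariance
# of the harmonic coefficients.
import numpy as np

def map_harmonic_fit(t, y, freqs, R, P):
    """t: sample times; y: data (n,); freqs: fitted frequencies
    (cycles per unit of t); R: (n, n); P: (2k, 2k) for k frequencies."""
    H = np.hstack([np.column_stack((np.cos(2 * np.pi * f * t),
                                    np.sin(2 * np.pi * f * t)))
                   for f in freqs])
    Ri = np.linalg.inv(R)
    # Normal equations regularized by the prior: white R and a very broad P
    # recover the ordinary least squares harmonic fit.
    return np.linalg.solve(H.T @ Ri @ H + np.linalg.inv(P), H.T @ Ri @ y)
```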

Open access
Corey K. Potvin, Burkely T. Gallo, Anthony E. Reinhart, Brett Roberts, Patrick S. Skinner, Ryan A. Sobash, Katie A. Wilson, Kelsey C. Britt, Chris Broyles, Montgomery L. Flora, William J. S. Miller, and Clarice N. Satrio

Abstract

Thunderstorm mode strongly impacts the likelihood and predictability of tornadoes and other hazards, and thus is of great interest to severe weather forecasters and researchers. It is often impossible for a forecaster to manually classify all the storms within convection-allowing model (CAM) output during a severe weather outbreak, or for a scientist to manually classify all storms in a large CAM or radar dataset in a timely manner. Automated storm classification techniques facilitate these tasks and provide objective inputs to operational tools, including machine learning models for predicting thunderstorm hazards. Accurate storm classification, however, requires accurate storm segmentation. Many storm segmentation techniques fail to distinguish between clustered storms, thereby missing intense cells, or to identify cells embedded within quasi-linear convective systems that can produce tornadoes and damaging winds. Therefore, we have developed an iterative technique that identifies these constituent storms in addition to traditionally identified storms. Identified storms are classified according to a seven-mode scheme designed for severe weather operations and research. The classification model is a hand-developed decision tree that operates on storm properties computed from composite reflectivity and midlevel rotation fields. These properties include geometrical attributes, whether the storm contains smaller storms or resides within a larger-scale complex, and whether strong rotation exists near the storm centroid. We evaluate the classification algorithm using expert labels of 400 storms simulated by the NSSL Warn-on-Forecast System or analyzed by the NSSL Multi-Radar/Multi-Sensor product suite. The classification algorithm emulates expert opinion reasonably well (e.g., 76% accuracy for supercells), and therefore could facilitate a wide range of operational and research applications.
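
As a flavor of what such a hand-developed decision tree can look like, here is a minimal Python sketch operating on a few per-storm properties. The properties, thresholds, and mode labels are illustrative assumptions and do not reproduce the authors' seven-mode scheme.

```python
# Minimal sketch of a hand-built decision tree over per-storm properties
# (illustrative thresholds and labels; not the authors' seven-mode scheme).
def classify_storm(props: dict) -> str:
    """props: 'embedded_in_line', 'contains_subcells' (bool);
    'area_km2', 'aspect_ratio', 'midlevel_rotation' (float, rotation in 1/s)."""
    if props["embedded_in_line"]:
        if props["midlevel_rotation"] > 0.005:  # strong rotation near centroid
            return "rotating cell in QLCS"
        return "QLCS cell"
    if props["aspect_ratio"] > 3.0 and props["area_km2"] > 1000.0:
        return "quasi-linear convective system"
    if props["contains_subcells"]:
        return "storm cluster"
    if props["midlevel_rotation"] > 0.005:
        return "supercell"
    return "ordinary cell"
```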

Significance Statement

We have developed a new technique for automatically identifying intense thunderstorms in model and radar data and classifying storm mode, which informs forecasters about the risks of tornadoes and other high-impact weather. The technique identifies storms that are often missed by other methods, including cells embedded within storm clusters, and successfully classifies important storm modes that are generally not included in other schemes, such as rotating cells embedded within quasi-linear convective systems. We hope the technique will facilitate a variety of forecasting and research efforts.

Restricted access
Boyan Hu, Jinfeng Ding, Gang Liu, and Jianping Tang

Abstract

This study analyzes the spatial and temporal distribution characteristics of in situ aircraft observations in the middle and upper troposphere in 2019. These aircraft observations are mainly distributed over China and are recorded relatively evenly in time between 0000 and 1500 UTC and in height between 6 and 10 km. Based on the 3395 stronger clear-air turbulence (CAT) events and 4038 weaker CAT events selected from the observations in the study region (15°–55°N, 70°–140°E), the performance of 24 CAT diagnostics calculated from ERA5 data is evaluated. Results show that the diagnostics connected with vertical wind shear (i.e., version 1 of the North Carolina State University index, the negative Richardson number, and variants 3 and 1 of Ellrod’s turbulence index) perform best. However, performance varies greatly from season to season; overall, it is best in winter and worst in summer. The best annual and seasonal thresholds for these diagnostics are also provided.
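
For context, two of the shear-related quantities underlying the best-performing diagnostics can be sketched directly from wind and potential temperature profiles: the vertical wind shear and the (negative) Richardson number. A minimal Python sketch, with illustrative variable names and finite differencing:

```python
# Minimal sketch of two shear-related CAT quantities from profiles on
# height levels (illustrative variable names and differencing).
import numpy as np

G = 9.81  # m/s^2

def shear_diagnostics(u, v, theta, z):
    """u, v: horizontal winds (m/s); theta: potential temperature (K);
    z: heights (m), all 1D arrays on the same levels."""
    shear2 = np.gradient(u, z) ** 2 + np.gradient(v, z) ** 2  # squared VWS
    n2 = (G / theta) * np.gradient(theta, z)  # Brunt-Vaisala frequency squared
    ri = n2 / np.maximum(shear2, 1e-12)       # Richardson number
    return {"vws": np.sqrt(shear2), "negative_ri": -ri}
```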

Restricted access