Search Results

Showing 1–10 of 26 items for Author or Editor: Christopher Curtis
Christopher D. Curtis

Abstract

Time series simulation is an important tool for developing and testing new signal processing algorithms for weather radar. The methods for simulating time series data have not changed much over the last few decades, but recent advances in computing technology call for new methods. It would seem that faster computers would make better-performing simulators less necessary, but improved technology has made comprehensive, multiday simulations feasible. Even a relatively minor performance improvement can significantly shorten the time of one of these multiday simulations. Current simulators can also be inaccurate for some sets of parameters, especially narrow spectrum widths. In this paper, three new modifications to the conventional simulators are introduced to improve accuracy and performance. Two of the modifications use thresholds to optimize both the total number of samples and the number of random variates that need to be simulated. The third modification uses an alternative method for implementing the inverse Fourier transform. These new modifications lead to fast versions of the simulators that accurately match the desired autocorrelation and spectrum for a wide variety of signal parameters. Recommendations for using single-precision values and graphics processing units are also provided.
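
The conventional simulation approach the abstract refers to can be sketched in a few lines: shape unit-variance complex Gaussian variates with the square root of a Gaussian power spectrum, then inverse-FFT to obtain correlated I/Q samples. This is a minimal sketch, not the paper's modified simulators; all parameter names are illustrative, and spectral aliasing is ignored for simplicity.

```python
import numpy as np

def simulate_weather_signal(num_samples, power, velocity, width, va, seed=None):
    """Conventional frequency-domain simulator: shape complex Gaussian
    variates with the square root of a Gaussian power spectrum and
    inverse-FFT to obtain correlated I/Q time series."""
    rng = np.random.default_rng(seed)
    v = np.fft.fftfreq(num_samples) * 2.0 * va      # Doppler velocity of each bin
    S = np.exp(-((v - velocity) ** 2) / (2.0 * width ** 2))
    S *= power / S.sum()                            # total spectral power = 'power'
    X = (rng.standard_normal(num_samples)
         + 1j * rng.standard_normal(num_samples)) / np.sqrt(2.0)
    # Scaling keeps the expected per-sample power equal to 'power'
    return np.sqrt(num_samples) * np.fft.ifft(np.sqrt(num_samples * S) * X)

iq = simulate_weather_signal(num_samples=64, power=1.0, velocity=5.0,
                             width=2.0, va=25.0, seed=1)
```

For narrow spectrum widths, only a handful of spectral coefficients are significantly nonzero in this scheme, which hints at why narrow widths are a problem case for conventional simulators.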

Full access
Sebastián M. Torres and Christopher D. Curtis

Abstract

For weather radars, range-oversampling processing was proposed as an effective way either to reduce the variance of radar-variable estimates without increasing scan times or to reduce scan times without increasing the variance of estimates. Range oversampling entails acquiring the received signals at a rate L times as fast as the reciprocal of the pulse width (the conventional rate), where L is referred to as the range-oversampling factor. To accommodate the L-times-as-fast sampling, the original formulation of range-oversampling processing required a receiver filter with a bandwidth L times as wide as that of the matched filter (the conventional receiver filter). In this case, the noise at the output of the receiver filter can still be assumed to be white, resulting in a simplified formulation of the technique but also, and more important, in a more difficult practical implementation since the receiver filter in operational weather radars is typically matched to the transmitted pulse. In this work, we revisit the role of the receiver filter in the performance of range-oversampling processing and show that using a receiver matched filter not only facilitates the implementation of range-oversampling processing but also results in the lowest variance of radar-variable estimates.
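
The decorrelation (whitening) transformation at the heart of this formulation is built from the normalized range correlation matrix. A minimal numpy sketch, with an illustrative triangular correlation model and oversampling factor (neither taken from the paper):

```python
import numpy as np

def whitening_transform(C):
    """Inverse matrix square root of the Hermitian range correlation
    matrix C, so that W C W^H = I: W decorrelates the L range-oversampled
    samples within each resolution volume."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T

# Example: L = 4 with a triangular range correlation (an idealized model
# for a rectangular pulse and a wideband receiver filter)
L = 4
lag = np.abs(np.subtract.outer(np.arange(L), np.arange(L)))
C = 1.0 - lag / L
W = whitening_transform(C)
print(np.allclose(W @ C @ W.conj().T, np.eye(L)))  # True
```

Averaging the powers of the whitened samples then behaves like averaging L independent samples, which is the variance-reduction mechanism. With a matched receiver filter, the point of this paper, the noise at the filter output is no longer white, so the appropriate transformation changes.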

Free access
Christopher D. Curtis and Sebastián M. Torres

Abstract

As range-oversampling processing has become more practical for weather radars, implementation issues have become important to ensure the best possible performance. For example, all of the linear transformations that have been utilized for range-oversampling processing directly depend on the normalized range correlation matrix. Hence, accurately measuring the correlation in range time is essential to avoid reflectivity biases and to ensure the expected variance reduction. Although the range correlation should be relatively stable over time, hardware changes and drift due to changing environmental conditions can have measurable effects on the modified pulse. To reliably track changes in the range correlation, an automated real-time method is needed that does not interfere with normal data collection. A method is proposed that uses range-oversampled data from operational radar scans and that works with radar returns from both weather and ground clutter. In this paper, the method is described, tested using simulations, and validated with time series data.
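
The quantity being tracked can be sketched as follows: estimate the normalized range correlation from oversampled I/Q data by averaging lagged products along range. This is only a minimal illustration; the paper's method, which must work unobtrusively with operational scans and with both weather and clutter returns, is more elaborate.

```python
import numpy as np

def estimate_range_correlation(iq, L):
    """Estimate the normalized range correlation matrix from
    range-oversampled I/Q data of shape (num_pulses, num_range_samples)
    by averaging lagged products along range across all pulses."""
    num_range = iq.shape[1]
    c = np.empty(L, dtype=complex)
    for lag in range(L):
        c[lag] = np.mean(iq[:, : num_range - lag].conj() * iq[:, lag:])
    c /= c[0].real                     # normalize so the lag-0 value is 1
    # Assemble the Hermitian Toeplitz correlation matrix
    idx = np.subtract.outer(np.arange(L), np.arange(L))
    return np.where(idx <= 0, c[np.abs(idx)], c[np.abs(idx)].conj())

# Sanity check on uncorrelated (white) samples: near-identity estimate
rng = np.random.default_rng(0)
white = rng.standard_normal((200, 400)) + 1j * rng.standard_normal((200, 400))
C = estimate_range_correlation(white, L=4)
print(np.round(np.abs(C), 2))  # near-identity for uncorrelated range samples
```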

Full access
Sebastián M. Torres and Christopher D. Curtis

Abstract

The range-weighting function (RWF) determines how individual scatterer contributions are weighted as a function of range to produce the meteorological data associated with a single resolution volume. The RWF is commonly defined in terms of the transmitter pulse envelope and the receiver filter impulse response, and it determines the radar range resolution. However, the effective RWF also depends on the range-time processing involved in producing estimates of meteorological variables. This is a third contributor to the RWF that has become more significant in recent years as advanced range-time processing techniques have become feasible for real-time implementation on modern radar systems. In this work, a new formulation of the RWF for weather radars that incorporates the impact of signal processing is proposed. Following the derivation based on a general signal processing model, typical scenarios are used to illustrate the variety of RWFs that can result from different range-time signal processing techniques. Finally, the RWF is used to measure range resolution and the range correlation of meteorological data.
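
The conventional two-component RWF that this paper extends with a signal-processing term can be sketched directly: the squared magnitude of the transmitted pulse envelope convolved with the receiver filter impulse response. A rectangular pulse and matched filter are shown; the grid and shapes are illustrative.

```python
import numpy as np

oversample = 16                                  # grid points per pulse width
p = np.ones(oversample)                          # rectangular pulse envelope
h = np.ones(oversample) / oversample             # matched receiver filter

w = np.abs(np.convolve(p, h)) ** 2               # range-weighting function
w /= w.max()

# 6-dB width of the RWF, in units of the pulse width (a common
# definition of range resolution)
above = np.where(w >= 0.25)[0]
width_6db = (above[-1] - above[0]) / oversample
print(width_6db)  # → 1.0
```

Range-time processing adds a third factor on top of this product, which is what changes the effective RWF in the scenarios the paper examines.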

Full access
Sebastián M. Torres and Christopher D. Curtis

Abstract

WSR-88D superresolution data are produced with finer range and azimuth sampling and improved azimuthal resolution as a result of a narrower effective antenna beamwidth. These characteristics afford improved detectability of weaker and more distant tornadoes by providing an enhancement of the tornadic vortex signature, which is characterized by a large low-level azimuthal Doppler velocity difference. The effective-beamwidth reduction in superresolution data is achieved by applying a tapered data window to the samples in the dwell time; thus, it comes at the expense of increased variances for all radar-variable estimates. One way to overcome this detrimental effect is through the use of range oversampling processing, which has the potential to reduce the variance of superresolution data to match that of legacy-resolution data without increasing the acquisition time. However, range-oversampling processing typically broadens the radar range weighting function and thus degrades the range resolution. In this work, simulated Doppler velocities for vortexlike fields are used to quantify the effects of range-oversampling processing on the velocity signature of tornadoes when using WSR-88D superresolution data. The analysis shows that the benefits of range-oversampling processing in terms of improved data quality should outweigh the relatively small degradation to the range resolution and thus contribute to the tornado warning decision process by improving forecaster confidence in the radar data.
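
The beamwidth-narrowing mechanism can be sketched as follows: the effective azimuthal pattern over a dwell is (approximately) the antenna's two-way power pattern smeared by the data window as the antenna rotates, so a tapered window, which concentrates weight near the dwell center, yields a narrower effective beam than a uniform one. All parameters below are illustrative, not WSR-88D values.

```python
import numpy as np

theta = np.arange(-3.0, 3.0, 0.01)              # azimuth (deg)
bw = 0.89                                        # one-way 3-dB beamwidth (deg)
sigma = bw / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # Gaussian one-way std (deg)

def two_way(t):
    return np.exp(-(t ** 2) / sigma ** 2)        # square of the one-way pattern

M, span = 16, 0.5                                # samples in dwell, dwell extent (deg)
offsets = np.linspace(-span / 2, span / 2, M)    # antenna pointing per sample

def effective_3db_width(window):
    """3-dB width of the window-weighted sum of shifted two-way patterns."""
    pattern = sum(w * two_way(theta - o) for w, o in zip(window, offsets))
    pattern /= pattern.max()
    above = np.where(pattern >= 0.5)[0]
    return (above[-1] - above[0]) * 0.01

rect = np.ones(M) / M                            # uniform (legacy) window
hann = np.hanning(M) / np.hanning(M).sum()       # tapered (von Hann) window
print(effective_3db_width(rect), effective_3db_width(hann))
```

The tapered window produces the narrower effective beamwidth; the cost, as the abstract notes, is increased variance of the radar-variable estimates, which is what range-oversampling processing is brought in to offset.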

Full access
Feng Nai, Sebastián Torres, and Christopher Curtis

Abstract

Severe thunderstorms and their associated tornadoes pose significant threats to life and property, and using radar data to accurately measure the rotational velocity of circulations in thunderstorms is essential for appropriate, timely warnings. One key factor in accurately measuring circulation velocity is the azimuthal spacing between radar data points, which is referred to as the azimuthal sampling interval. Previous studies have shown that reducing the azimuthal sampling interval can aid in measuring circulation velocity; however, this comes at the price of increased computational complexity. Thus, choosing the best compromise requires knowledge of the relationship between the radar azimuthal sampling interval and the accuracy of the circulation strength as measured from the radar data. In this work, we use simulations to quantify the impact of azimuthal sampling on the strength of radar-observed circulations and show that the improvements get progressively smaller as the azimuthal sampling interval decreases. Thus, improved characterization of circulations can be achieved without using the finest possible sampling grid. We use real data to validate the results of the simulations, which can be used to inform the selection of an appropriate azimuthal sampling interval that balances the accuracy of the radar-observed circulations and computational complexity.
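
The core experiment can be sketched with a simple one-dimensional model: a Rankine-vortex Doppler velocity profile across azimuth, smoothed by a Gaussian effective beam and then sampled at coarser and coarser azimuthal intervals. All parameters are illustrative, not taken from the paper.

```python
import numpy as np

v_max, r_core, r0 = 50.0, 500.0, 50e3   # peak velocity (m/s), core radius (m), range (m)
step = 0.0625                            # fine azimuth grid spacing (deg)
az = np.arange(-6.0, 6.0 + step, step)   # azimuth (deg)
x = r0 * np.deg2rad(az)                  # cross-beam distance (m)
v = v_max * r_core * x / np.maximum(x ** 2, r_core ** 2)  # Rankine profile

# Gaussian effective beam with a 1-deg 3-dB beamwidth, applied by convolution
sigma = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
k = np.arange(-48, 49) * step
beam = np.exp(-0.5 * (k / sigma) ** 2)
beam /= beam.sum()
v_obs = np.convolve(v, beam, mode="same")

dv = {}                                  # measured velocity difference vs. interval
for daz in (1.0, 0.5, 0.25):             # azimuthal sampling intervals (deg)
    sampled = v_obs[:: int(round(daz / step))]
    dv[daz] = sampled.max() - sampled.min()
    print(daz, dv[daz])
```

Because the coarser grids here are subsets of the finer ones, the measured velocity difference can only grow as the interval shrinks; the paper's simulations quantify how quickly those gains taper off.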

Free access
Christopher D. Curtis and Sebastián M. Torres

Abstract

One way to reduce the variance of meteorological-variable estimates on weather radars without increasing dwell times is by using range oversampling techniques. Such techniques could significantly improve the estimation of polarimetric variables, which typically require longer dwell times to achieve the desired data quality compared to the single-polarization spectral moments. In this paper, an efficient implementation of adaptive pseudowhitening that was developed for single-polarization radars is extended for dual polarization. Adaptive pseudowhitening maintains the performance of pure whitening at high signal-to-noise ratios and equals or outperforms the digital matched filter at low signal-to-noise ratios. This approach results in improvements for polarimetric-variable estimates that are consistent with the improvements for spectral-moment estimates described in previous work. The performance of the proposed technique is quantified using simulations that show that the variance of polarimetric-variable estimates can be reduced without modifying the scanning strategies. The proposed technique is applied to real weather data to validate the expected improvements that can be realized operationally.
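
The SNR-dependent behavior described above can be illustrated with a Monte Carlo sketch of two end-member estimators: plain incoherent averaging of the correlated oversampled gates (standing in for matched-filter-like processing) versus averaging after a whitening transformation. This is not the authors' adaptive pseudowhitening transformation; the correlation model and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4
lag = np.abs(np.subtract.outer(np.arange(L), np.arange(L)))
C = 1.0 - lag / L                                   # triangular range correlation
eigvals, V = np.linalg.eigh(C)
C_half = V @ np.diag(np.sqrt(eigvals)) @ V.T        # C^(1/2), to synthesize samples
W = V @ np.diag(1.0 / np.sqrt(eigvals)) @ V.T       # whitening transformation

def estimator_variances(snr, trials=20000):
    """Variance of two unbiased power estimators: plain averaging of the
    correlated oversampled gates vs. averaging after whitening."""
    noise = 1.0 / snr                               # unit signal power
    shape = (trials, L)
    g = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
    n = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
    x = g @ C_half + np.sqrt(noise) * n             # correlated signal + white noise
    p_avg = np.mean(np.abs(x) ** 2, axis=1) - noise
    xw = x @ W                                      # whitened samples
    p_whit = np.mean(np.abs(xw) ** 2, axis=1) - noise * np.trace(W @ W.T) / L
    return p_avg.var(), p_whit.var()

for snr in (100.0, 0.01):
    print(snr, estimator_variances(snr))
```

At high SNR, whitening yields the lower variance (its L samples act independent); at low SNR its noise enhancement dominates and plain averaging wins. Adaptive pseudowhitening selects transformations between these extremes.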

Full access
Christopher D. Curtis and Sebastián M. Torres

Abstract

This paper describes a real-time implementation of adaptive range oversampling processing on the National Weather Radar Testbed phased-array radar. It is demonstrated that, compared to conventional matched-filter processing, range oversampling can be used to reduce scan update times by a factor of 2 while producing meteorological data with similar quality. Adaptive range oversampling uses moment-specific transformations to minimize the variance of meteorological variable estimates. An efficient algorithm is introduced that allows for seamless integration with other signal processing functions and reduces the computational burden. Through signal processing, a new dimension is added to the traditional trade-off triangle that includes the variance of estimates, spatial coverage, and update time. That is, by trading an increase in computational complexity, data with higher temporal resolution can be collected and the variance of estimates can be improved without affecting the spatial coverage.

Full access
Sebastián M. Torres and Christopher D. Curtis

Abstract

A fundamental assumption for the application of range-oversampling techniques is that the correlation of oversampled signals in range is accurately known. In this paper, a theoretical framework is derived to quantify the effects of inaccurate range correlation measurements on the performance of such techniques, which include digital matched filtering and those based on decorrelation (whitening) transformations. It is demonstrated that significant reflectivity biases and increased variance of estimates can occur if the range correlation is not accurately measured. Simulations and real data are used to validate the theoretical results and to illustrate the detrimental effects of mismeasurements. Results from this work underline the need for reliable calibration in the context of range-oversampling processing, and they can be used to establish appropriate accuracy requirements for the measurement of the range correlation on modern weather radars.
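
The reflectivity bias from a mismeasured range correlation can be illustrated analytically: if the whitener W is built from an assumed correlation matrix while the data actually follow C_true, the expected power of the whitened estimate is scaled by tr(W C_true W^H)/L. A sketch with an illustrative triangular correlation model and a simple scale perturbation (not the paper's error model):

```python
import numpy as np

def tri_corr(L, scale=1.0):
    """Triangular range correlation; scale != 1 mimics a mismeasured
    decorrelation rate (scale = 1 is the true correlation)."""
    lag = np.abs(np.subtract.outer(np.arange(L), np.arange(L)))
    return np.maximum(0.0, 1.0 - scale * lag / L)

def whitening(C):
    eigvals, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(eigvals)) @ V.T

L = 5
C_true = tri_corr(L)
biases = {}
for scale in (1.0, 1.1, 0.9):                  # 1.0 = correctly measured
    W = whitening(tri_corr(L, scale))          # whitener from the assumed correlation
    # Expected power of the whitened estimate relative to truth:
    # E[P_hat]/P = tr(W C_true W^T)/L, which is 1 (0 dB) only when the
    # assumed correlation matches the true one
    biases[scale] = np.trace(W @ C_true @ W.T) / L
    print(scale, 10.0 * np.log10(biases[scale]))
```

Even modest mismatches in the assumed correlation translate into nonzero reflectivity biases, consistent with the accuracy requirements the paper motivates.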

Full access
Christopher D. Curtis and Sebastián M. Torres

Abstract

Range-oversampling processing is a technique that can be used to lower the variance of radar-variable estimates, reduce radar update times, or a combination of both. There are two main assumptions for using range-oversampling processing: accurate knowledge of the range correlation and uniform reflectivity in the radar resolution volume. The first assumption has been addressed in previous research; this work focuses on the uniform-reflectivity assumption. Earlier research shows that significant reflectivity gradients can occur in storms; we used those results to develop realistic simulations of radar returns that include the effects of reflectivity gradients in range. An important consideration when using range-oversampling processing is the resulting change in the range weighting function. The range weighting function can change for different types of range-oversampling processing, and some techniques, such as adaptive pseudowhitening, can lead to different range weighting functions at each range gate. To quantify the possible effects of differing range weighting functions in the presence of reflectivity gradients, we developed simulations to examine different types of range-oversampling processing with two receiver filters: a matched receiver filter and a wider-bandwidth receiver filter (as recommended for use with range oversampling). Simulation results show that differences in range weighting functions are the only contributor to differences in radar reflectivity measurements. Results from real weather data demonstrate that the reflectivity gradients that occur in typical severe storms do not cause significant changes in reflectivity measurements and that the benefits from range-oversampling processing outweigh the possible isolated effects from large reflectivity gradients.
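
The interaction between a reflectivity gradient and the range weighting function can be sketched directly: weight a linear-in-dB gradient by two normalized RWFs of different widths and compare the measured reflectivity at the gate center (true value 0 dBZ). The Gaussian RWF shapes and all parameters are hypothetical stand-ins for a narrower matched-filter RWF and a broader processing-modified RWF.

```python
import numpy as np

r = np.linspace(-1.0, 1.0, 2001)              # range relative to gate center (km)
gradient = 20.0                                # reflectivity gradient (dB/km)
z = 10.0 ** (gradient * r / 10.0)              # linear-units reflectivity profile

def measured_dbz(rwf):
    """Reflectivity measured through a normalized range-weighting function."""
    rwf = rwf / rwf.sum()                      # unit-sum weights
    return 10.0 * np.log10(np.sum(rwf * z))

rwf_narrow = np.exp(-0.5 * (r / 0.10) ** 2)    # e.g., a matched-filter-like RWF
rwf_broad = np.exp(-0.5 * (r / 0.25) ** 2)     # e.g., a broader whitening-like RWF
print(measured_dbz(rwf_narrow), measured_dbz(rwf_broad))
```

Both RWFs report a positive bias on this convex (in linear units) profile, and the broader one reports the larger bias, which is the kind of RWF-dependent difference the simulations above quantify.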

Restricted access