Search Results
Abstract
Range-oversampling processing is a technique that can be used to lower the variance of radar-variable estimates, reduce radar update times, or a mixture of both. There are two main assumptions for using range-oversampling processing: accurate knowledge of the range correlation and uniform reflectivity in the radar resolution volume. The first assumption has been addressed in previous research; this work focuses on the uniform reflectivity assumption. Earlier research shows that significant reflectivity gradients can occur in storms; we utilized those results to develop realistic simulations of radar returns that include effects of reflectivity gradients in range. An important consideration when using range-oversampling processing is the resulting change in the range weighting function. The range weighting function can change for different types of range-oversampling processing, and some techniques, such as adaptive pseudowhitening, can lead to different range weighting functions at each range gate. To quantify the possible effects of differing range weighting functions in the presence of reflectivity gradients, we developed simulations to examine various types of range-oversampling processing with two receiver filters: a matched receiver filter and a wider-bandwidth receiver filter (as recommended for use with range oversampling). Simulation results show that differences in range weighting functions are the only contributor to differences in radar reflectivity measurements. Results from real weather data demonstrate that the reflectivity gradients that occur in typical severe storms do not cause significant changes in reflectivity measurements and that the benefits from range-oversampling processing outweigh the possible isolated effects from large reflectivity gradients.
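To make the dependence of the range weighting function on the processing concrete, here is a minimal numerical sketch under the standard linear model: each transformed range sample weights the reflectivity field with a linear combination of delayed copies of the composite pulse, so the effective range weighting function is the power sum of those combinations. The rectangular pulse, the oversampling factor L = 5, and the choice of a digital matched filter versus a whitening transformation are illustrative assumptions, not this paper's specific setup.

    import numpy as np

    L = 5                                   # range-oversampling factor (illustrative)
    p = np.ones(L)                          # composite pulse: rectangular Tx pulse, wide-bandwidth receiver (assumed)

    # Range correlation of the oversampled samples (normalized autocorrelation of the composite pulse)
    c = np.correlate(p, p, mode="full")[L - 1:2 * L - 1]
    C = np.array([[c[abs(i - j)] for j in range(L)] for i in range(L)]) / c[0]

    # Two candidate transformations: digital matched filter (uniform average) and whitening
    W_mf = np.ones((1, L)) / L
    W_wt = np.linalg.inv(np.linalg.cholesky(C))

    def effective_rwf(W, p):
        """Effective range weighting function: power sum of each transformed sample's
        composite weighting (valid for uncorrelated scatterers)."""
        n_grid = 4 * len(p)
        shifted = np.zeros((W.shape[1], n_grid))
        for l in range(W.shape[1]):
            shifted[l, l:l + len(p)] = p    # delayed copies of the composite pulse
        g = W @ shifted                     # weighting applied by each transformed sample
        return np.sum(np.abs(g) ** 2, axis=0)

    for name, W in (("matched filter", W_mf), ("whitening", W_wt)):
        rwf = effective_rwf(W, p)
        print("%s: 6-dB RWF width = %d samples" % (name, np.sum(rwf > rwf.max() / 4)))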
Abstract
For weather radars, range-oversampling processing was proposed as an effective way either to reduce the variance of radar-variable estimates without increasing scan times or to reduce scan times without increasing the variance of estimates. Range oversampling entails acquiring the received signals at a rate L times as fast as the reciprocal of the pulse width (the conventional rate), where L is referred to as the range-oversampling factor. To accommodate the L-times-as-fast sampling, the original formulation of range-oversampling processing required a receiver filter with a bandwidth L times as wide as that of the matched filter (the conventional receiver filter). In this case, the noise at the output of the receiver filter can still be assumed to be white, resulting not only in a simplified formulation of the technique but also, more importantly, in a more difficult practical implementation, since the receiver filter in operational weather radars is typically matched to the transmitted pulse. In this work, we revisit the role of the receiver filter in the performance of range-oversampling processing and show that using a receiver matched filter not only facilitates the implementation of range-oversampling processing but also results in the lowest variance of radar-variable estimates.
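As a side illustration of why the receiver filter matters, the sketch below computes the normalized range correlation of oversampled samples for the two receivers discussed: a wide-bandwidth (near-impulse) receiver, for which the composite pulse is essentially the transmitted pulse, and a matched receiver filter, for which the composite pulse is the pulse convolved with its own replica. The rectangular pulse and L = 5 are assumptions for illustration only.

    import numpy as np

    L = 5                                   # range-oversampling factor (illustrative)
    tx = np.ones(L)                         # rectangular transmitted pulse (assumed)

    # Receiver impulse responses: a wide-bandwidth receiver (~impulse) and a matched filter (pulse replica)
    rx_wide = np.array([1.0])
    rx_matched = tx[::-1].copy()

    def range_correlation(tx, rx, L):
        """Normalized correlation of oversampled range samples for a given receiver filter."""
        p = np.convolve(tx, rx)             # composite pulse: Tx pulse passed through the receiver filter
        c = np.correlate(p, p, mode="full")
        mid = len(p) - 1
        return c[mid:mid + L] / c[mid]

    print("wide-bandwidth receiver:", np.round(range_correlation(tx, rx_wide, L), 2))
    print("matched receiver filter:", np.round(range_correlation(tx, rx_matched, L), 2))

The stronger range correlation behind the matched filter is precisely what the decorrelating transformations in this line of work must account for.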
Abstract
We propose a simulation framework that can be used to design and evaluate the performance of adaptive scanning algorithms on different phased-array weather radar designs. The simulator is proposed as a tool to 1) compare the performance of different adaptive scanning algorithms on the same weather event, 2) evaluate the performance of a given adaptive scanning algorithm on several weather events, and 3) evaluate the performance of a given adaptive scanning algorithm on a given weather event using different radar designs. We illustrate the capabilities of the proposed framework to design and evaluate the performance of adaptive algorithms aimed at reducing the update time using adaptive scanning. The example concept of operations is based on a fast low-fidelity surveillance scan and a high-fidelity adaptive scan. The flexibility of the proposed simulation framework is tested using two phased-array-radar designs and three complementary adaptive scanning algorithms: focused observations, beam clustering, and dwell tailoring. Based on a significant weather event observed by an operational NEXRAD radar, our experimental results consist of radar data that were simulated as if the same event had been observed by arbitrary combinations of radar systems and adaptive scanning configurations. Results show that simulated fields of radar data capture the main data-quality impacts from the use of adaptive scanning and can be used to obtain quantitative metrics and for qualitative comparison and evaluation by forecasters. That is, the proposed simulator could provide an effective interface with meteorologists and could support the development of concepts of operations that are based on adaptive scanning to meet the evolving observational needs of the U.S. National Weather Service.
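The three adaptive scanning algorithms can be pictured with a deliberately oversimplified sketch; the surveillance field, the 35-dBZ threshold, and the dwell rule below are hypothetical stand-ins and do not reproduce the algorithms described in the paper.

    import numpy as np

    # Hypothetical surveillance scan: reflectivity (dBZ) at 1-deg azimuth spacing (made-up storm at 210 deg)
    az = np.arange(0.0, 360.0, 1.0)
    gen = np.random.default_rng(0)
    z_surv = 10 + 40 * np.exp(-0.5 * ((az - 210) / 15) ** 2) + gen.normal(0, 2, az.size)

    # "Focused observations": revisit only beams whose surveillance reflectivity exceeds a threshold (assumed 35 dBZ)
    focus = z_surv > 35.0

    # "Beam clustering": merge selected beams into contiguous azimuth sectors
    edges = np.flatnonzero(np.diff(focus.astype(int)))
    sectors = [(az[a + 1], az[b]) for a, b in zip(edges[::2], edges[1::2])]

    # "Dwell tailoring": assign more pulses where the surveillance signal is weaker (illustrative rule)
    dwell = np.where(z_surv[focus] > 45, 16, 32)

    print("adaptive sectors (deg):", sectors)
    print("beams revisited: %d of %d, mean dwell: %.1f pulses" % (focus.sum(), az.size, dwell.mean()))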
Abstract
WSR-88D superresolution data are produced with finer range and azimuth sampling and improved azimuthal resolution as a result of a narrower effective antenna beamwidth. These characteristics afford improved detectability of weaker and more distant tornadoes by providing an enhancement of the tornadic vortex signature, which is characterized by a large low-level azimuthal Doppler velocity difference. The effective-beamwidth reduction in superresolution data is achieved by applying a tapered data window to the samples in the dwell time; thus, it comes at the expense of increased variances for all radar-variable estimates. One way to overcome this detrimental effect is through the use of range-oversampling processing, which has the potential to reduce the variance of superresolution data to match that of legacy-resolution data without increasing the acquisition time. However, range-oversampling processing typically broadens the radar range weighting function and thus degrades the range resolution. In this work, simulated Doppler velocities for vortexlike fields are used to quantify the effects of range-oversampling processing on the velocity signature of tornadoes when using WSR-88D superresolution data. The analysis shows that the benefits of range-oversampling processing in terms of improved data quality should outweigh the relatively small degradation to the range resolution and thus contribute to the tornado warning decision process by improving forecaster confidence in the radar data.
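The variance cost of the tapered data window can be gauged with a standard back-of-the-envelope calculation: for a power estimate from M weighted samples, tapering inflates the variance by the factor M*sum(d^4)/(sum(d^2))^2, which is close to 2 for a von Hann window. The sketch below assumes independent samples, a simplification since dwell samples are correlated, so the number is only indicative.

    import numpy as np

    M = 64                                  # pulses in the dwell (illustrative)
    d = np.hanning(M)                       # tapered data window of the superresolution kind (illustrative choice)

    # Variance of a windowed power estimate relative to the rectangular window,
    # assuming independent samples (a simplification; dwell samples are actually correlated).
    inflation = M * np.sum(d ** 4) / np.sum(d ** 2) ** 2
    print("variance inflation from tapering: %.2fx" % inflation)

    # A range-oversampling factor L can reduce variance by up to ~L at high SNR,
    # which is how this tapering penalty can be offset without lengthening the dwell.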
Abstract
The staggered–pulse repetition time (SPRT) technique has been shown to be effective at mitigating range and velocity ambiguities; however, mitigation of ground clutter contamination for SPRT sequences has proven to be more challenging. Using the properties of the autocorrelation spectral density, the Clutter Environment Analysis using Adaptive Processing (CLEAN-AP) filter is extended to SPRT sequences for its implementation on the U.S. Next Generation Weather Radar (NEXRAD) network. The performance of the CLEAN-AP filter for SPRT sequences is characterized and illustrated with simulations and real data. The study shows that the proposed ground clutter filter meets NEXRAD operational performance requirements for ground clutter mitigation.
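As a brief reminder of why SPRT mitigates velocity ambiguities: a 2/3 stagger extends the unambiguous velocity to lambda/[4(T2 - T1)]. The wavelength and PRT values below are illustrative, not NEXRAD settings, and the CLEAN-AP extension itself is not sketched here.

    # Extended Nyquist velocity for a 2/3 staggered-PRT pair (illustrative values, not NEXRAD settings)
    wavelength = 0.10                        # m, S band (assumed)
    T1, T2 = 1.0e-3, 1.5e-3                  # s, staggered PRTs with T1/T2 = 2/3 (assumed)

    va_uniform = wavelength / (4 * T1)           # Nyquist velocity of the shorter PRT alone: 25 m/s
    va_staggered = wavelength / (4 * (T2 - T1))  # extended unambiguous velocity for the 2/3 stagger: 50 m/s

    print("uniform PRT:   +/- %.1f m/s" % va_uniform)
    print("staggered PRT: +/- %.1f m/s" % va_staggered)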
Abstract
Adaptive range oversampling processing can be used either to reduce the variance of radar-variable estimates without increasing scan times or to reduce scan times without increasing the variance of estimates. For example, an implementation of adaptive pseudowhitening on the National Weather Radar Testbed Phased-Array Radar (NWRT PAR) led to a twofold reduction in scan times. Conversely, a proposed implementation of adaptive pseudowhitening on the U.S. Next Generation Weather Radar (NEXRAD) network would reduce the variance of dual-polarization estimates while keeping current scan times. However, the original version of adaptive pseudowhitening is not compatible with radar-variable estimators for which an explicit variance expression is not readily available. One such nontraditional estimator is the hybrid spectrum-width estimator, which is currently used in the NEXRAD network. In this paper, an extension of adaptive pseudowhitening is proposed that utilizes lookup tables (rather than analytical solutions based on explicit variance expressions) to obtain range oversampling transformations. After describing this lookup table (LUT) adaptive pseudowhitening technique, its performance is compared to that of the original version of adaptive pseudowhitening using traditional radar-variable estimators. LUT adaptive pseudowhitening is then applied to the hybrid spectrum-width estimator, and simulation results are confirmed with a qualitative analysis of radar data.
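The lookup-table idea can be sketched schematically: tabulate estimator variance offline for a set of candidate transformations over a grid of signal-to-noise ratios and spectrum widths, then select the minimum-variance entry at run time. The variance function, grids, and sharpening parameter below are invented placeholders, not the paper's tables or estimators.

    import numpy as np

    # Offline step (hypothetical): tabulate estimator variance over (SNR, spectrum width) for a few
    # candidate pseudowhitening transformations indexed by a sharpening parameter.
    snr_grid = np.arange(0.0, 41.0, 5.0)     # dB
    width_grid = np.arange(1.0, 9.0, 1.0)    # m/s
    params = np.array([0.0, 0.5, 1.0])       # 0 = matched filter ... 1 = full whitening (illustrative)

    def simulated_variance(snr_db, width, p):
        """Stand-in for Monte Carlo variance estimates; NOT a real estimator model."""
        noise_term = (1 + p) ** 2 / 10 ** (snr_db / 10)
        return (1 + width / 10) / (1 + 4 * p) + noise_term

    lut = np.array([[[simulated_variance(s, w, p) for p in params]
                     for w in width_grid] for s in snr_grid])

    def best_transformation(snr_db, width):
        """Runtime step: look up the tabulated transformation with minimum variance."""
        i = np.abs(snr_grid - snr_db).argmin()
        j = np.abs(width_grid - width).argmin()
        return params[lut[i, j].argmin()]

    print("low SNR (5 dB)   -> sharpening parameter", best_transformation(5.0, 4.0))
    print("high SNR (35 dB) -> sharpening parameter", best_transformation(35.0, 4.0))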
Abstract
The autocorrelation spectral density (ASD) was introduced as a generalization of the classical periodogram-based power spectral density (PSD) and as an alternative tool for spectral analysis of uniformly sampled weather radar signals. In this paper, the ASD is applied to staggered pulse repetition time (PRT) sequences and is related to both the PSD and the ASD of the underlying uniform-PRT sequence. An unbiased autocorrelation estimator based on the ASD is introduced for use with staggered-PRT sequences when spectral processing is required. Finally, the strengths and limitations of the ASD for spectral analysis of staggered-PRT sequences are illustrated using simulated and real data.
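The spirit of the ASD can be conveyed with a simplified, uniform-PRT analogue: a periodogram-style lag-l spectral density whose sum over frequency returns the time-domain lag-l autocorrelation estimate. This is not the ASD definition used in the paper, and it yields the circular (biased) estimate, which is one motivation for the unbiased ASD-based estimator introduced here.

    import numpy as np

    gen = np.random.default_rng(1)
    M, lag = 64, 1
    v = gen.normal(size=M) + 1j * gen.normal(size=M)   # stand-in for a uniform-PRT I/Q sequence

    # Periodogram-style lag-l spectral density: |DFT|^2 with a lag-dependent phase ramp
    V = np.fft.fft(v)
    k = np.arange(M)
    lag_spectrum = (np.abs(V) ** 2) * np.exp(2j * np.pi * k * lag / M) / M ** 2

    # Summing over frequency recovers the (circular, biased) time-domain lag-l estimate
    r_spectral = lag_spectrum.sum()
    r_time = np.mean(np.conj(v) * np.roll(v, -lag))
    print(np.allclose(r_spectral, r_time))             # True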
Abstract
A fundamental assumption for the application of range-oversampling techniques is that the correlation of oversampled signals in range is accurately known. In this paper, a theoretical framework is derived to quantify the effects of inaccurate range correlation measurements on the performance of such techniques, which include digital matched filtering and those based on decorrelation (whitening) transformations. It is demonstrated that significant reflectivity biases and increased variance of estimates can occur if the range correlation is not accurately measured. Simulations and real data are used to validate the theoretical results and to illustrate the detrimental effects of mismeasurements. Results from this work underline the need for reliable calibration in the context of range-oversampling processing, and they can be used to establish appropriate accuracy requirements for the measurement of the range correlation on modern weather radars.
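A minimal Monte Carlo sketch of the mismeasurement effect: build a whitening transformation from an assumed range correlation, apply it to samples whose true correlation is different, and read off the resulting power (reflectivity) bias. The pulse shapes, L = 5, and the noise-free assumption are illustrative only.

    import numpy as np

    gen = np.random.default_rng(2)
    L, trials = 5, 100000

    def corr_matrix(pulse, L):
        """Normalized Toeplitz range-correlation matrix implied by a composite pulse."""
        c = np.correlate(pulse, pulse, mode="full")
        mid = len(pulse) - 1
        rho = c[mid:mid + L] / c[mid]
        return np.array([[rho[abs(i - j)] for j in range(L)] for i in range(L)])

    C_true = corr_matrix(np.ones(L), L)                          # actual range correlation
    C_assumed = corr_matrix(np.array([0.7, 1, 1, 1, 0.7]), L)    # mismeasured correlation (illustrative droop)

    W = np.linalg.inv(np.linalg.cholesky(C_assumed))             # whitening built from the *assumed* correlation

    # Unit-power signal samples with the *true* range correlation (noise neglected)
    H_true = np.linalg.cholesky(C_true)
    x = (gen.normal(size=(trials, L)) + 1j * gen.normal(size=(trials, L))) / np.sqrt(2)
    v = x @ H_true.T

    p_hat = np.mean(np.abs(v @ W.T) ** 2)                        # mean power of the "whitened" samples
    print("reflectivity bias from the mismatch: %.2f dB" % (10 * np.log10(p_hat)))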
Abstract
A method for estimation of spectral moments on pulsed weather radars is presented. This scheme operates on oversampled echoes in range; that is, samples of in-phase and quadrature-phase components are collected at a rate several times larger than the reciprocal of the transmitted pulse length. The spectral moments are estimated by suitably combining weighted averages of these oversampled signals in range with usual processing of samples (spaced at the pulse repetition time) at a fixed range location. The weights in range are derived from a whitening transformation; hence, the oversampled signals become uncorrelated and, consequently, the variance of the estimates decreases significantly. Because the estimate errors are inversely proportional to the volume scanning times, it follows that storms can be surveyed much faster than is possible with current processing methods, or equivalently, for the current volume scanning time, accuracy of the estimates can be greatly improved. This significant improvement is achievable at large signal-to-noise ratios.
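A minimal sketch of the variance-reduction mechanism for the simplest case, signal power at high signal-to-noise ratio: whitening the L oversampled (correlated) range samples yields L effectively independent power estimates whose average has roughly 1/L of the single-sample variance. The rectangular pulse, triangular range correlation, and L = 5 are illustrative assumptions; the paper's estimators cover all spectral moments, not just power.

    import numpy as np

    gen = np.random.default_rng(3)
    L, trials = 5, 100000

    # Triangular range correlation for a rectangular pulse and a wide-bandwidth receiver (assumed)
    C = np.array([[1 - abs(i - j) / L for j in range(L)] for i in range(L)])
    H = np.linalg.cholesky(C)
    W_white = np.linalg.inv(H)               # whitening transformation
    w_mf = np.ones(L)                        # digital matched filter (uniform weights)

    # Unit-power correlated range samples; noise is neglected (high-SNR case)
    x = (gen.normal(size=(trials, L)) + 1j * gen.normal(size=(trials, L))) / np.sqrt(2)
    v = x @ H.T

    p_mf = np.abs(v @ w_mf) ** 2 / (w_mf @ C @ w_mf)       # matched filter: one output sample per gate
    p_wh = np.mean(np.abs(v @ W_white.T) ** 2, axis=1)     # whitening: average of L decorrelated samples

    print("normalized power variance, matched filter: %.2f" % np.var(p_mf))   # ~1
    print("normalized power variance, whitening:      %.2f" % np.var(p_wh))   # ~1/L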
Abstract
A method to reduce errors in estimates of polarimetric variables beyond those achievable with standard estimators is suggested. It consists of oversampling echo signals in range, applying linear transformations to decorrelate these samples, processing the sequences at fixed range locations in time to obtain various second-order moments, averaging these moments in range, and, finally, combining them into polarimetric variables. The polarimetric variables considered are differential reflectivity, differential phase, and the copolar correlation coefficient between the horizontally and vertically polarized echoes. Simulations and analytical formulas confirm a reduction in variance proportional to the number of samples within the pulse compared to standard processing of signals behind a matched filter. This reduction is possible, however, if the signal-to-noise ratios (SNRs) are larger than a critical value. Plots of the critical SNRs for various estimates as functions of Doppler spectrum width and other parameters are provided.
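A minimal numerical sketch of the estimator combination for the polarimetric case, assuming a rectangular pulse, L = 5, and no noise or time correlation (so it illustrates only the bookkeeping, not the variance analysis or critical SNRs): decorrelate the H and V range samples, average the second-order moments over the L transformed sequences, and combine them into differential reflectivity and the copolar correlation coefficient; differential phase would follow from the argument of the same cross moment.

    import numpy as np

    gen = np.random.default_rng(4)
    L, M = 5, 64                             # oversampling factor and pulses per dwell (illustrative)
    zdr_true, rhohv_true = 1.0, 0.98         # dB, unitless (illustrative)

    # Whitening transformation for a triangular range correlation (rectangular pulse assumed)
    C = np.array([[1 - abs(i - j) / L for j in range(L)] for i in range(L)])
    Hc = np.linalg.cholesky(C)
    W = np.linalg.inv(Hc)

    # H/V samples with the desired copolar correlation (time correlation and noise ignored for brevity)
    a = (gen.normal(size=(M, L)) + 1j * gen.normal(size=(M, L))) / np.sqrt(2)
    b = (gen.normal(size=(M, L)) + 1j * gen.normal(size=(M, L))) / np.sqrt(2)
    h = a @ Hc.T
    v = (rhohv_true * a + np.sqrt(1 - rhohv_true ** 2) * b) @ Hc.T * 10 ** (-zdr_true / 20)

    # Decorrelate in range, then average the second-order moments over the L transformed sequences
    hw, vw = h @ W.T, v @ W.T
    ph, pv = np.mean(np.abs(hw) ** 2), np.mean(np.abs(vw) ** 2)
    rhohv_hat = np.abs(np.mean(np.conj(hw) * vw)) / np.sqrt(ph * pv)

    print("ZDR   = %.2f dB" % (10 * np.log10(ph / pv)))
    print("rhohv = %.3f" % rhohv_hat)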