## Abstract

There is a demand for noncontact, high-accuracy, high-spatial-resolution, wireless sensing technologies for various water level and coastal monitoring applications. This paper presents a low-cost, compact, easily configurable interferometry radar for noncontact water level monitoring, including its hardware design, signal processing algorithms, and wireless communication strategies. Interferometry radar measures distance by comparing the phase lag between the reflected and transmitted signals. Water level measurements using this approach have been demonstrated in a solitary wave laboratory experiment, a field deployment observing wave run-up near Ponte Vedra, FL, and a field deployment observing waves and tides in Sparkill Creek in Piermont, NY. The experimental results, which achieve millimeter-level accuracy relative to reference sensors, demonstrate the potential of continuous wave radar for water level observations.
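The phase-to-distance relation described above can be sketched in a few lines; the carrier wavelength and sample values below are illustrative assumptions, not the paper's hardware parameters.

```python
import numpy as np

# An interferometric radar infers range change from the two-way carrier
# phase: phi = 4*pi*d / lambda, so d = phi * lambda / (4*pi).
WAVELENGTH = 0.0125  # m; a 24 GHz carrier is assumed for illustration

def phase_to_displacement(phase_rad):
    """Convert a sampled (wrapped) two-way phase series, in radians,
    into range change in meters, after unwrapping 2*pi jumps."""
    phase = np.unwrap(np.asarray(phase_rad, dtype=float))
    return phase * WAVELENGTH / (4.0 * np.pi)

# Synthetic check: a 5 mm water-level rise sampled in 50 steps.
true_d = np.linspace(0.0, 0.005, 50)
wrapped = np.angle(np.exp(1j * 4.0 * np.pi * true_d / WAVELENGTH))
recovered = phase_to_displacement(wrapped)  # tracks true_d closely
```

Because a 1 mm change already produces roughly 1 rad of two-way phase at this wavelength, millimeter-level sensitivity follows directly from modest phase resolution.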

## Abstract

Better predictions of global warming can be enabled by tuning legacy and current computer simulations to Earth Radiation Budget (ERB) measurements. Such orbital measurements have existed since the 1970s, and next-generation instruments such as “Libera” are in production. Climate communities have requested that new ERB observing-system missions achieve significantly improved calibration accuracy, SI traceability, and stability, to prevent untracked instrument calibration drifts that could lead to false conclusions on climate change. Based on experience from previous ERB missions, the alternative concept presented here uses direct-view solar calibration to achieve cloud-scale Earth measurement resolution at <1% accuracy. It forgoes complex calibration technologies already in use, such as solar diffusers and on-board lights, allowing new, previously unconsidered, lower-cost and lower-risk spectral-characterization concepts to be introduced with today’s technology. In contrast to near-future ERB concepts already in production, this enables in-flight, wavelength-dependent calibration of Earth-observing telescopes using direct solar views through narrow-band filters that are continuously characterized on-orbit.

## Abstract

This paper investigates the limitation in calculating the vertical wavelength of downward phase propagating gravity waves from the vertical fluctuation of idealized radiosonde balloons in a homogeneous background environment. The wave signals are artificially observed by an idealized weather balloon with a constant ascent rate. The apparent vertical wavelengths obtained from the moving radiosonde balloon are compared to the true vertical wavelength obtained from the dispersion relation, both in the no-wind case and in the constant zonal flow case. The node method and FFT method are employed to calculate the apparent vertical wavelength from the sounding profile. The difference between the node apparent vertical wavelength and the true vertical wavelength is attributed to the fact that the ascent rate of the balloon and the downward phase speed induce a strong Doppler-shifting bias on the apparent vertical wavelength from the observation records. The difference between the FFT apparent vertical wavelength and the true vertical wavelength includes both the Doppler-shifting bias and the mathematical bias. The extent to which the apparent vertical wavelength is reliable is discussed. The Coriolis parameter has negligible effects on the comparison between the true vertical wavelength and the apparent one.
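The Doppler-shifting bias can be illustrated with a toy calculation: a balloon ascending through downward-propagating phase lines crosses them faster than a stationary observer would, so the wavelength recorded against altitude is shortened. The numbers below are illustrative, not taken from the paper.

```python
def apparent_vertical_wavelength(lambda_true, phase_speed_down, ascent_rate):
    """Vertical wavelength recorded against altitude by a balloon that
    ascends at `ascent_rate` (m/s) through a wave whose phase surfaces
    descend at `phase_speed_down` (m/s). The balloon crosses phase lines
    at the relative speed (w_b + c) while logging height at w_b, so the
    apparent wavelength is lambda_true * w_b / (w_b + c)."""
    w_b, c = ascent_rate, phase_speed_down
    return lambda_true * w_b / (w_b + c)

# Illustrative numbers: a 2000 m wave with a 0.5 m/s downward phase
# speed, sampled by a balloon rising at 5 m/s.
lam_apparent = apparent_vertical_wavelength(2000.0, 0.5, 5.0)
# lam_apparent is about 1818 m, a roughly 9% short bias.
```

The bias grows as the downward phase speed approaches the ascent rate, which is why slow balloons sampling fast waves give the least reliable apparent wavelengths.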

## Abstract

We evaluate two stochastic subcolumn generators used in GCMs to emulate subgrid cloud variability enabling comparisons with satellite observations and simulations of certain physical processes. Our evaluation necessitated the creation of a reference observational dataset that resolves horizontal and vertical cloud variability. The dataset combines two CloudSat cloud products that resolve two-dimensional cloud optical depth variability of liquid, ice, and mixed phase clouds when blended at ~200 m vertical and ~2 km horizontal scales. Upon segmenting the dataset into individual “scenes”, mean profiles of the cloud fields are passed as input to generators that produce scene-level cloud subgrid variability. The assessment of generator performance at the scale of individual scenes and in a mean sense is largely based on inferred joint histograms that partition cloud fraction within predetermined combinations of cloud top pressure – cloud optical thickness ranges. Our main finding is that both generators tend to underestimate optically thin clouds, while one of them also tends to overestimate some cloud types of moderate and high optical thickness. Associated radiative flux errors are also calculated by applying a simple transformation to the cloud fraction histogram errors, and are found to approach values almost as high as 3 W m^{−2} for the cloud radiative effect in the shortwave part of the spectrum.
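A joint histogram of the kind used for this assessment can be sketched with NumPy. The bin edges below follow the common ISCCP convention, which is an assumption for illustration rather than the paper's exact choice.

```python
import numpy as np

# ISCCP-style bin edges (assumed; the paper's exact ranges may differ).
TAU_EDGES = np.array([0.0, 0.3, 1.3, 3.6, 9.4, 23.0, 60.0, 380.0])
CTP_EDGES = np.array([0.0, 180.0, 310.0, 440.0, 560.0, 680.0, 800.0, 1100.0])  # hPa

def joint_histogram(ctp, tau, n_subcolumns):
    """Fraction of subcolumns in each cloud-top-pressure / optical-
    thickness bin. `ctp` and `tau` hold one value per cloudy subcolumn;
    clear subcolumns are absent from the inputs, so the histogram sums
    to the scene's total cloud fraction."""
    h, _, _ = np.histogram2d(ctp, tau, bins=[CTP_EDGES, TAU_EDGES])
    return h / float(n_subcolumns)

# Toy scene: 100 subcolumns, 40 of them cloudy (synthetic values).
rng = np.random.default_rng(0)
hist = joint_histogram(rng.uniform(100.0, 1000.0, 40),
                       rng.lognormal(1.0, 1.0, 40), 100)
# hist.sum() equals the scene cloud fraction, 0.4 here.
```

Comparing such histograms bin by bin between a generator's subcolumns and the observed scene gives exactly the kind of cloud-fraction error fields the abstract describes.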

## Abstract

An empirically derived statistic is used to estimate the confidence interval of a dissipation estimate that uses a finite amount of shear data. Four co-located shear probes, mounted on a bottom-anchored float, are used to measure the rate of dissipation of turbulence kinetic energy, *ϵ*, at a height of 15 m above the bottom in a 55 m deep tidal channel. One pair of probes measures *∂w/∂x* while the other measures *∂v/∂x*, where *w* and *v* are the vertical and lateral velocity. The shear-probe signals are converted into a regularly resampled space-series to permit the rate of dissipation to be estimated directly from the variance of the shear (*ϵ* = 7.5*ν*⟨(*∂v/∂x*)^{2}⟩ for the *v*-component, where *ν* is the kinematic viscosity), for averaging lengths *L* ranging from 1 to 10^{4} Kolmogorov lengths. While the rate of dissipation fluctuates by more than a factor of 100, the fluctuations of the differences of ln(*ϵ_{L}*) between pairs of probes are stationary, zero-mean, and distributed normally for averaging lengths of *L* ≈ 30 to 10^{4} Kolmogorov lengths. The variance of the differences decreases with increasing averaging length like *L*^{−7/9}, independent of stratification for buoyancy Reynolds numbers larger than ~600, and for dissipation rates from ~10^{−10} to ~10^{−5} W kg^{−1}. The variance decreases more slowly than *L*^{−1} because the averaging is done in linear space while the variance is evaluated in logarithmic space. This statistic provides the confidence interval of an *ϵ* estimate, such as the 95% interval, and it also applies to *ϵ* estimates that are made by way of spectral integration, after *L* is adjusted for the truncation of the shear spectrum.
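Because the statistic is normal in log space, the resulting confidence interval on *ϵ* is multiplicative. A minimal sketch, assuming a variance model of the form σ² = c·L^(−7/9) with a placeholder coefficient `c` (the paper's fitted value is not reproduced here):

```python
import math

def dissipation_ci(eps_hat, L_kolmogorov, c=1.0, z=1.96):
    """Confidence interval for a dissipation estimate eps_hat averaged
    over L_kolmogorov Kolmogorov lengths, assuming ln(eps) errors are
    normal with variance c * L**(-7/9). `c` is a placeholder, not the
    paper's fitted coefficient; z = 1.96 gives a 95% interval."""
    sigma = math.sqrt(c * L_kolmogorov ** (-7.0 / 9.0))
    factor = math.exp(z * sigma)
    return eps_hat / factor, eps_hat * factor

lo, hi = dissipation_ci(1e-7, 1000.0)
# Longer averaging narrows the interval, but sigma shrinks only like
# L**(-7/18), slower than the usual 1/sqrt(L) of linear-space averages.
```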

## Abstract

Recently introduced in oceanography to interpret the near surface circulation, *Transition Path Theory* (*TPT*) is a methodology that rigorously characterizes ensembles of trajectory pieces flowing out from a source last and into a target next, i.e., those that most productively contribute to transport. Here we use TPT to frame, in a statistically more robust fashion than earlier analyses, equatorward routes of North Atlantic Deep Water (NADW) in the subpolar North Atlantic. TPT is applied on all available RAFOS and Argo floats in the area by means of a discretization of the Lagrangian dynamics described by their trajectories. By considering floats at different depths, we investigate transition paths of NADW in its upper (UNADW) and lower (LNADW) layers. We find that the majority of UNADW transition paths sourced in the Labrador and southwestern Irminger Seas reach the western side of a target arranged zonally along the southern edge of the subpolar North Atlantic domain visited by the floats. This is accomplished in the form of a well-organized deep boundary current (DBC). LNADW transition paths sourced west of the Reykjanes Ridge reveal a similar pattern, while those sourced east of the ridge are found to hit the western side of the target via a DBC and also several other places along it in a less organized fashion, indicating southward flow along the eastern and western flanks of the Mid-Atlantic Ridge. Naked-eye inspection of trajectories suggests generally much more diffusive equatorward NADW routes. A source-independent dynamical decomposition of the flow domain into analogous backward-time basins of attraction, beyond the reach of direct inspection of trajectories, reveals a much wider influence of the western side of the target for UNADW than for LNADW. For UNADW, the average expected duration of the pathways from the Labrador and Irminger Seas was found to be 2 to 3 years. For LNADW, the duration was found to be influenced by the Reykjanes Ridge, being as long as 8 years from the western side of the ridge and about 3 years on average from its eastern side.
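At the core of TPT on discretized float trajectories is the committor of a Markov chain built from observed transitions. A minimal sketch with a hypothetical four-state chain (the actual analysis uses a fine spatial discretization of the float data):

```python
import numpy as np

def forward_committor(P, source, target):
    """Per-state probability of reaching `target` before returning to
    `source`, for a row-stochastic transition matrix P. States in
    `source` get 0, states in `target` get 1; the remaining states
    solve the linear system (I - P_CC) q_C = P_CB * 1."""
    n = P.shape[0]
    q = np.zeros(n)
    q[list(target)] = 1.0
    interior = [i for i in range(n) if i not in source and i not in target]
    A = np.eye(len(interior)) - P[np.ix_(interior, interior)]
    b = P[np.ix_(interior, list(target))].sum(axis=1)
    q[interior] = np.linalg.solve(A, b)
    return q

# Hypothetical chain: state 0 = source region, state 3 = target section.
P = np.array([[0.6, 0.4, 0.0, 0.0],
              [0.2, 0.5, 0.3, 0.0],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.1, 0.9]])
q = forward_committor(P, source={0}, target={3})
# q is 0 at the source, 1 at the target, and increases along the route.
```

Transition paths, their densities, and expected durations are then assembled from such committors together with the chain's stationary distribution.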

## Abstract

This manuscript provides (*i*) the statistical uncertainty of a shear spectrum and (*ii*) a new universal shear spectrum, and (*iii*) shows how these are combined to quantify the quality of a shear spectrum. The data from four co-located shear probes, described in Part 1 (Lueck 2022), are used to estimate the spectra of shear, Ψ(*k*), for wavenumbers *k* ≥ 2 cpm, from data lengths of 1.0 to 50.5 m, using Fourier transform (FT) segments of 0.5 m length. The differences of the logarithm of pairs of simultaneous shear spectra are stationary, distributed normally, independent of the rate of dissipation, and only weakly dependent on wavenumber. The variance of the logarithm of an individual spectrum, *σ*^{2}_{lnΨ}, equals one-half of the variance of these differences and is *σ*^{2}_{lnΨ} = 1.25 *N*_{ƒ}^{−7/9}, where *N*_{ƒ} is the number of FT segments used to estimate the spectrum. *σ*_{lnΨ} provides the statistical basis for constructing the confidence interval of the logarithm of spectrum, and thus, the spectrum itself.

A universal spectrum of turbulence shear is derived from the nondimensionalization of 14600 spectra estimated from 5 m segments of data. This spectrum differs from the Nasmyth spectrum (Oakey 1982) and from the spectrum of Panchev and Kesich (1969) by 8% near its peak, and is approximated to within 1% by a new analytic equation.

The difference between the logarithms of a measured and a universal spectrum, together with the confidence interval of a spectrum, provides the statistical basis for quantifying the quality of a measured shear (and velocity) spectrum, and the quality of a dissipation estimate that is derived from the spectrum.
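The quoted variance law translates directly into a multiplicative confidence interval for a measured spectrum. A sketch using the abstract's own relation, with the segment counts below chosen purely for illustration:

```python
import math

def shear_spectrum_ci_factor(n_ft_segments, z=1.96):
    """Multiplicative 95% confidence half-width for a shear spectrum
    estimated from n_ft_segments FT segments, using the variance law
    sigma^2_lnPsi = 1.25 * N_f**(-7/9) quoted in the abstract."""
    sigma = math.sqrt(1.25 * n_ft_segments ** (-7.0 / 9.0))
    return math.exp(z * sigma)

# A spectrum from 9 segments is known only to within a factor of ~2.5;
# quadrupling the number of segments tightens this to ~1.7.
f9 = shear_spectrum_ci_factor(9)
f36 = shear_spectrum_ci_factor(36)
```

A measured spectrum lying outside this band around the universal spectrum then flags either contaminated data or a poor dissipation estimate.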

## Abstract

Simulated weather time series are often used in engineering and research practice to assess radar system behavior and/or to evaluate the performance of novel techniques. There are two main approaches to simulating weather time series. One is based on summing individual returns from a large number of distributed weather particles to create a cumulative return. The other is aimed at creating simulated random signals based on the predetermined values of radar observables and is of interest herein. So far, several methods to simulate weather time series, using the latter approach, have been proposed. All of these methods are based on applying the inverse discrete Fourier transform to the spectral model with added random fluctuations. To meet the desired simulation accuracy, such an approach typically requires generating a number of samples that is larger than the base sample number due to the discrete Fourier transform properties. In that regard, a novel method to determine simulation length is proposed. It is based on a detailed theoretical development that demonstrates the exact source of errors incurred by this approach. Furthermore, a simple method for time series simulation that is based on the autocorrelation matrix exists. This method neither involves manipulations in the spectral domain nor requires generating a number of samples larger than the base sample number. Herein, this method is suggested for weather time series simulation and its accuracy and efficiency are analyzed and compared to the spectral-based approach.
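The autocorrelation-matrix method mentioned last can be sketched as a Cholesky "coloring" of white noise; the exponential correlation model and its decay rate below are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def simulate_from_acf(rho, n_realizations, rng):
    """Draw complex Gaussian time series whose autocorrelation follows
    the normalized ACF `rho` (rho[0] == 1), by coloring white noise
    with the Cholesky factor of the Toeplitz correlation matrix."""
    n = len(rho)
    idx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    R = rho[idx]                       # Toeplitz autocorrelation matrix
    L = np.linalg.cholesky(R)          # requires R positive definite
    w = (rng.standard_normal((n, n_realizations))
         + 1j * rng.standard_normal((n, n_realizations))) / np.sqrt(2.0)
    return L @ w                       # each column is one realization

# Exponential ACF (always positive definite); decay rate is illustrative.
rho = 0.9 ** np.arange(64.0)
x = simulate_from_acf(rho, 4000, np.random.default_rng(1))
# The sample lag-1 correlation of x approaches rho[1] = 0.9.
```

Note that exactly `n` samples per realization are produced with no spectral-domain manipulation, which is the practical appeal the abstract attributes to this method.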

## Abstract

Random errors (uncertainties) in COSMIC radio occultation (RO) soundings, ERA-Interim (ERAi) reanalyses, and high-resolution radiosondes (RS) are estimated in the northeast Pacific Ocean during the MAGIC campaign in 2012 and 2013 using the three-cornered hat method. Estimated refractivity and bending angle errors peak at ∼2 km, and have a secondary peak at ∼15 km. They are related to vertical and horizontal gradients of temperature and water vapor and associated atmospheric variability at these two levels. MAGIC RS refractivity and bending angles obtained from forward models have the largest uncertainties, followed by COSMIC RO soundings. ERAi has the smallest uncertainties. The large RS uncertainties can be primarily attributed to representativeness errors (differences). Differences in space and time of the RO and model data sets from the RS observations, error correlations among data sets, and the small sample size are other possible reasons contributing to these differences of estimated error statistics.

RO temperature and humidity are retrieved from refractivity using a one-dimensional variational (1D-Var) method from the COSMIC Data Analysis and Archive Center (CDAAC). The estimated errors for COSMIC temperature are comparable to those of the MAGIC RS except near 1 km, where they are much higher. The estimated errors for COSMIC specific humidity are similar to the MAGIC specific humidity errors below ∼5 km and much smaller above this level.

Estimates of COSMIC random errors based on ERAi, JRA-55, and MERRA-2 reanalyses in the same region, as well as comparison with estimates from other studies, support the reliability of our estimates.
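The three-cornered hat estimate follows from the pairwise difference variances of three data sets with mutually independent errors. A self-contained sketch with synthetic data (the noise levels are hypothetical):

```python
import numpy as np

def three_cornered_hat(a, b, c):
    """Individual error variances of three co-located series, assuming
    independent zero-mean errors: since var(a-b) = sa + sb (and
    cyclically), sa = (var(a-b) + var(a-c) - var(b-c)) / 2, etc."""
    vab, vac, vbc = np.var(a - b), np.var(a - c), np.var(b - c)
    sa = 0.5 * (vab + vac - vbc)
    sb = 0.5 * (vab + vbc - vac)
    sc = 0.5 * (vac + vbc - vab)
    return sa, sb, sc

# Synthetic check: one truth observed by three instruments with noise
# standard deviations 1.0, 2.0, and 0.5 (hypothetical numbers).
rng = np.random.default_rng(2)
truth = rng.standard_normal(200_000)
a = truth + 1.0 * rng.standard_normal(truth.size)
b = truth + 2.0 * rng.standard_normal(truth.size)
c = truth + 0.5 * rng.standard_normal(truth.size)
sa, sb, sc = three_cornered_hat(a, b, c)
# sa, sb, sc recover approximately 1.0, 4.0, and 0.25.
```

The independence assumption is exactly what the abstract flags: error correlations among the data sets bias these estimates, as do representativeness differences and small samples.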

## Abstract

The Wallops Precipitation Research Facility (WPRF) at NASA Goddard Space Flight Center, Wallops Island, VA has been established as a semi-permanent super-site for the Global Precipitation Measurement (GPM) Ground Validation (GV) program. WPRF is home to research quality precipitation instruments, including NASA’s S-band dual-polarimetric radar (NPOL), and a network of profiling radars, disdrometers, and rain gauges. This study investigates the statistical agreement of the GPM Core Observatory Dual Frequency Precipitation Radar (DPR), combined DPR-GPM Microwave Imager (GMI), and GMI Level II precipitation retrievals compared to WPRF ground observations from a six-year collection of satellite overpasses. Multi-sensor observations are integrated using the System for Integrating Multiplatform Data to Build the Atmospheric Column (SIMBA) software package. SIMBA ensures measurements recorded in a variety of formats are synthesized into a common reference frame for ease in comparison and analysis. Given that instantaneous satellite measurements are observed above ground level, this study investigates the possibility of a time lag between satellite and surface mass-weighted mean diameter (*D*_{m}), reflectivity (*Z*), and precipitation rate (*R*) observations. Results indicate that time lags vary up to 30 minutes after overpass time but are not consistent between cases. In addition, GPM Core *D*_{m} retrievals are within Level I mission science requirements as compared to WPRF ground observations. Results also indicate GPM algorithms overestimate light rain (< 1.0 mm hr^{−1}). Two very different stratiform rain vertical profiles show differing results when compared to ground reference data. A key finding of this study indicates that multi-sensor DPR/GMI combined algorithms outperform the single-sensor DPR algorithm.
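The time-lag search between satellite and surface series can be sketched as a cross-correlation peak finder; the series below are synthetic stand-ins for the overpass and ground records.

```python
import numpy as np

def best_lag(surface, satellite, max_lag):
    """Sample lag at which the satellite series best matches the surface
    series; positive means the surface signal trails the overpass."""
    s = surface - surface.mean()
    t = satellite - satellite.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    # score(k) = sum_i s[i+k] * t[i], over the overlapping samples
    scores = [np.dot(s[max(0, k):len(s) + min(0, k)],
                     t[max(0, -k):len(t) + min(0, -k)]) for k in lags]
    return int(lags[int(np.argmax(scores))])

# Synthetic series: the ground signal trails the overpass by 3 samples.
rng = np.random.default_rng(3)
base = rng.standard_normal(520)
satellite = base[3:503]
surface = base[0:500]
# best_lag(surface, satellite, max_lag=10) recovers a lag of 3.
```

Applied case by case, such a lag estimate would vary between overpasses, consistent with the inconsistent lags reported above.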
