Abstract
The article compares four lightning detection networks, provides a brief overview of lightning observation data assimilation in numerical weather forecasting, and describes and illustrates the procedure used to assimilate lightning location and time data into numerical weather forecasts. Absolute errors in air temperature at 2 m, humidity at 2 m, near-surface air pressure, wind speed at 10 m, and precipitation are evaluated for 10 forecasts made in 2020 for days on which intense thunderstorms were observed in the Krasnodar region of Russia. Average errors over the forecast area at 24, 48, and 72 h of the forecast decreased for all parameters when assimilation of observed lightning data was used. The predicted precipitation field configuration and intensity became closer to the reference both in areas where thunderstorms were observed and in areas where none occurred.
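The evaluation above rests on absolute errors averaged over the forecast area. A minimal sketch of such a metric, with hypothetical field values (the paper's exact averaging procedure is not given in the abstract):

```python
import numpy as np

def mean_absolute_error(forecast, reference):
    """Mean absolute error between a forecast field and a reference field.

    Both inputs are 2-D arrays over the forecast area; the error is
    averaged over all grid points (illustrative only).
    """
    forecast = np.asarray(forecast, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean(np.abs(forecast - reference)))

# Hypothetical 2 m temperature fields (degrees C) on a tiny grid
t2m_forecast = np.array([[21.0, 22.5], [20.0, 23.0]])
t2m_observed = np.array([[21.5, 22.0], [20.5, 22.0]])
print(mean_absolute_error(t2m_forecast, t2m_observed))  # 0.625
```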
Abstract
Performance assessments of the Geostationary Lightning Mapper (GLM) are conducted via comparisons with independent observations from both satellite-based sensors and ground-based lightning detection (reference) networks. A key limitation of this evaluation is that the performance of the reference networks is both imperfect and imperfectly known, such that the true performance of GLM can only be estimated. Key GLM performance metrics such as detection efficiency (DE) and false alarm rate (FAR) retrieved through comparison with reference networks are affected by those networks’ own DE, FAR, and spatiotemporal accuracy, as well as the flash matching criteria applied in the analysis. This study presents a Monte Carlo simulation–based inversion technique that is used to quantify how accurately the reference networks can assess GLM performance, as well as suggest the optimal matching criteria for estimating GLM performance. This is accomplished by running simulations that clarify the specific effect of reference network quality (i.e., DE, FAR, spatiotemporal accuracy, and the geographical patterns of these attributes) on the retrieved GLM performance metrics. Baseline reference network statistics are derived from the Earth Networks Global Lightning Network (ENGLN) and the Global Lightning Dataset (GLD360). Geographic simulations indicate that the retrieved GLM DE is underestimated, with absolute errors ranging from 11% to 32%, while the retrieved GLM FAR is overestimated, with absolute errors of approximately 16% to 44%. GLM performance is most severely underestimated in the South Pacific. These results help quantify and bound the actual performance of GLM and the attendant uncertainties when comparing GLM to imperfect reference networks.
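The core effect the study inverts, that an imperfect reference network biases the retrieved GLM detection efficiency, can be illustrated with a toy Monte Carlo simulation. This is a sketch under simplifying assumptions (no spatiotemporal matching error, independent detections), and all probabilities are hypothetical:

```python
import random

def simulate_retrieved_de(true_glm_de, ref_de, ref_far, n_flashes=100_000, seed=0):
    """Toy Monte Carlo: bias in retrieved GLM detection efficiency (DE)
    caused by an imperfect reference network.

    Each true flash is independently detected by GLM (prob true_glm_de)
    and by the reference network (prob ref_de). The reference also
    reports spurious flashes so that a fraction ref_far of its reports
    are false alarms; spurious reports can never be matched by GLM.
    """
    rng = random.Random(seed)
    ref_reports = 0
    matched = 0
    for _ in range(n_flashes):
        glm_sees = rng.random() < true_glm_de
        ref_sees = rng.random() < ref_de
        if ref_sees:
            ref_reports += 1
            if glm_sees:
                matched += 1
    # pad with false reports to reach the requested false alarm rate
    ref_reports += int(ref_reports * ref_far / (1.0 - ref_far))
    return matched / ref_reports

# With a perfect reference, the retrieved DE matches the true DE (0.8);
# a 20% reference FAR drags the retrieved value down.
print(simulate_retrieved_de(0.8, 0.9, 0.0))
print(simulate_retrieved_de(0.8, 0.9, 0.2))
```

The second value falls well below the true DE, which is the sense in which comparisons against imperfect networks underestimate GLM performance.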
Abstract
The current study develops a variant of the velocity-azimuth display (VAD) method to retrieve thunderstorm peak event velocities using low-elevation WSR-88D radar scans. The main challenge pertains to the localized nature of thunderstorm winds, which complicates single-Doppler retrievals because it dictates the use of a limited spatial scale. Since VAD methods assume constant velocity within the fitted section, retrieved sections must not contain background flow. Accordingly, the current study proposes an image processing method that partitions scans into regions representing events and background flows, which can then be retrieved independently. The study compares the retrieved peak velocities to retrievals from another VAD method; the proposed technique estimates peak event velocities that are closer to measured ASOS readings, making it more suitable for historical analysis. The study also compares retrievals from over 2600 thunderstorm events across 19 radar–ASOS station combinations located less than 10 km from the radar. Probability distributions of peak event velocities for ASOS readings and radar retrievals agree well for stations within 4 km of the radar, while more distant stations show a higher bias of retrieved velocities relative to ASOS velocities. The mean absolute error in velocity magnitude increases with height, ranging from 1.5 to 4.5 m s⁻¹. A proposed correction based on the exponential trend of mean errors improves the probability distribution comparisons, especially for higher velocity magnitudes.
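At its core, a VAD retrieval is a harmonic least-squares fit of radial velocity versus azimuth on a ring of constant range. A minimal sketch of that fit, ignoring the paper's scan-partitioning step and using a hypothetical synthetic wind:

```python
import math

def vad_fit(azimuths_deg, radial_velocities):
    """Least-squares fit of Vr(az) ~ u*sin(az) + v*cos(az), the low-
    elevation single-harmonic VAD model, assuming constant (u, v) over
    the fitted section. Returns the horizontal wind components (u, v).
    """
    suu = suv = svv = bu = bv = 0.0
    for az, vr in zip(azimuths_deg, radial_velocities):
        s, c = math.sin(math.radians(az)), math.cos(math.radians(az))
        # accumulate the 2x2 normal equations
        suu += s * s; suv += s * c; svv += c * c
        bu += s * vr; bv += c * vr
    det = suu * svv - suv * suv
    u = (bu * svv - bv * suv) / det
    v = (bv * suu - bu * suv) / det
    return u, v

# Synthetic ring: a 10 m/s westerly wind (u = 10, v = 0)
az = list(range(0, 360, 10))
vr = [10.0 * math.sin(math.radians(a)) for a in az]
print(vad_fit(az, vr))  # recovers (10.0, 0.0)
```

When the ring mixes an event with background flow, a single fit like this averages the two, which is why the paper partitions the scan before fitting.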
Abstract
We assess the performance of three different algorithms for estimating surface ocean currents from two linear-array HF radar systems. The delay-and-sum beamforming algorithm, commonly used with beamforming systems, is compared with two direction-finding algorithms: Multiple Signal Classification (MUSIC) and direction finding using beamforming (Beamscan). A 7-month dataset from two HF radar sites (CSW and GTN) in Long Bay, South Carolina (United States), is used to compare the methods at three locations (the midpoint along the baseline and two locations with in situ Eulerian current data available) representing different steering angles. Beamforming produces surface current data that correlate highly near the radar boresight (R² ≥ 0.79). At partially sheltered locations far from the radar boresight directions (59° and 48° for radar sites CSW and GTN, respectively), there is no correlation for CSW (R² = 0) and the correlation is significantly reduced for GTN (R² = 0.29). Beamscan performs similarly near the radar boresight (R² = 0.8 and 0.85 for CSW and GTN, respectively) but better than beamforming far from it (R² = 0.52 and 0.32, respectively). MUSIC's performance, after significant tuning, is similar near the boresight (R² = 0.78 and 0.84 for CSW and GTN) and, far from the boresight, worse than Beamscan but better than beamforming (R² = 0.42 and 0.27, respectively). Comparisons at the midpoint (baseline comparison) show the largest performance difference between methods: beamforming (R² = 0.01) performs worst, followed by MUSIC (R² = 0.37), while Beamscan (R² = 0.76) performs best.
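Delay-and-sum beamforming can be sketched as phase-aligned averaging across the array elements; the array geometry and signal below are hypothetical:

```python
import cmath, math

def delay_and_sum(snapshots, d_over_lambda, steer_deg):
    """Delay-and-sum beamformer output power for a uniform linear array.

    snapshots: complex per-element samples for one time snapshot.
    d_over_lambda: element spacing as a fraction of the wavelength.
    steer_deg: steering angle from boresight.
    """
    n = len(snapshots)
    phase = 2.0 * math.pi * d_over_lambda * math.sin(math.radians(steer_deg))
    # weights conjugate the expected per-element phase progression
    out = sum(x * cmath.exp(-1j * phase * k) for k, x in enumerate(snapshots)) / n
    return abs(out) ** 2

# Plane wave arriving from 20 degrees on an 8-element, half-wavelength
# array: output power peaks when steered at the true direction.
arrival_phase = 2.0 * math.pi * 0.5 * math.sin(math.radians(20.0))
x = [cmath.exp(1j * arrival_phase * k) for k in range(8)]
p_on = delay_and_sum(x, 0.5, 20.0)
p_off = delay_and_sum(x, 0.5, -40.0)
print(p_on, p_off)  # on-beam power is ~1, off-beam power is much smaller
```

Direction-finding methods such as MUSIC and Beamscan instead scan this kind of response (or a subspace-based variant) over angle to locate the dominant arrivals, which is why they degrade differently far from boresight.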
Abstract
Variation of the Kuroshio path south of Japan has an important impact on weather, climate, and ecosystems due to its distinct features. Motivated by the success of deep learning methods built on neural network architectures in areas where accurate oceanographic observation and reanalysis reference data are available, we build four deep learning models based on the long short-term memory (LSTM) neural network, combined with empirical orthogonal function (EOF) analysis and complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN): the LSTM, EOF–LSTM, CEEMDAN–LSTM, and EOF–CEEMDAN–LSTM models. Using these models, we conduct long-range (120 day) predictions of the Kuroshio path south of Japan based on a 50-yr ocean reanalysis and nearly 15 years of satellite altimeter data. The EOF–CEEMDAN–LSTM performs best among the four, attaining an anomaly correlation coefficient of approximately 0.739 and a root-mean-square error of 0.399° for the 120-day prediction of the Kuroshio path south of Japan. Hindcasts of the EOF–CEEMDAN–LSTM successfully reproduce the observed formation and decay of the Kuroshio large meander during 2004/05 and the formation of the latest large meander in 2017. Finally, we present predictions of the Kuroshio path at a 120-day lead time, which suggest that the Kuroshio will remain in the large-meander state until November 2022.
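The EOF step these models share can be sketched with an SVD: the space-time field is split into spatial patterns (EOFs) and principal-component time series (PCs), and an LSTM would then forecast the leading PCs. The CEEMDAN and LSTM stages are omitted here, and the toy field is hypothetical:

```python
import numpy as np

def eof_decompose(field, n_modes):
    """EOF decomposition of a (time, space) anomaly array via SVD.

    Returns the leading spatial patterns (eofs), their principal-
    component time series (pcs), and the truncated reconstruction.
    """
    u, s, vt = np.linalg.svd(field, full_matrices=False)
    pcs = u[:, :n_modes] * s[:n_modes]   # (time, n_modes)
    eofs = vt[:n_modes]                  # (n_modes, space)
    recon = pcs @ eofs
    return eofs, pcs, recon

# Toy rank-1 field: one spatial pattern oscillating in time
t = np.linspace(0, 4 * np.pi, 50)
pattern = np.array([1.0, 0.5, -0.5, -1.0])
field = np.outer(np.sin(t), pattern)
eofs, pcs, recon = eof_decompose(field, n_modes=1)
print(np.max(np.abs(field - recon)))  # ~0: one mode captures this field
```

Truncating to a few modes turns the high-dimensional path-prediction problem into forecasting a handful of PC time series, which is what makes an LSTM tractable here.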
Abstract
This study assessed the raindrop fall speed measurement capabilities of the OTT Parsivel2 disdrometer through comparisons with measurements from a collocated High-speed Optical Disdrometer (HOD). Raindrop fall speed is often assumed to be terminal in relevant hydrological and meteorological applications and is generally predicted using terminal speed–raindrop size relationships obtained from laboratory observations. Nevertheless, recent field studies have revealed that other factors (e.g., wind, turbulence, raindrop oscillations, and collisions) significantly influence raindrop fall speed, necessitating accurate fall speed measurements for many applications rather than reliance on laboratory-based terminal speed predictions. Field observations in this study covered light, moderate, and heavy rainfall events under a variety of environmental conditions. The study also involved rigorous laboratory experiments to faithfully identify the internal filtering and calculation algorithm of the Parsivel2. Our assessments revealed that, for the smaller diameter bins, the Parsivel2 filters out many of the observed raindrops that fall faster than predicted terminal speeds, lowering the mean fall speed for those size bins without observational evidence. Furthermore, Parsivel2 fall speed measurements exhibited notable artificial bell-shaped deviations from the predicted terminal speeds toward sub-terminal fall, beginning at raindrop diameters around 1 mm and peaking near the 1.625 mm diameter bin. Such bell-shaped fall speed deviation patterns were not present in collocated HOD measurements. Assessment results, along with the identified Parsivel2 algorithm, are presented with discussion of the implications for reported raindrop size distributions (DSDs) and rainfall kinetic energy.
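The kind of terminal-speed-based filtering at issue can be sketched with the laboratory relationship of Atlas et al. (1973). The ±50% acceptance band below is an assumption chosen for illustration, not the Parsivel2's identified algorithm:

```python
import math

def atlas_terminal_speed(d_mm):
    """Atlas et al. (1973) terminal fall speed (m/s) for a raindrop of
    diameter d_mm (mm), a standard laboratory-based relationship."""
    return 9.65 - 10.3 * math.exp(-0.6 * d_mm)

def within_terminal_band(d_mm, v_measured, tol=0.5):
    """Illustrative filter: keep a drop only if its measured fall speed
    lies within a fractional tolerance of the predicted terminal speed.
    The paper's point is that filters of this kind discard real faster-
    than-terminal drops in the small-diameter bins."""
    vt = atlas_terminal_speed(d_mm)
    return abs(v_measured - vt) <= tol * vt

# A 0.5 mm drop falling 60% faster than terminal would be rejected,
# lowering the reported mean fall speed for that size bin.
vt = atlas_terminal_speed(0.5)
print(round(vt, 2))                       # ~2.02 m/s
print(within_terminal_band(0.5, 1.6 * vt))  # False: drop is filtered out
```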
Abstract
Stripe noise is a common issue in sea surface temperatures (SSTs) retrieved from thermal infrared data obtained by satellite-based multidetector radiometers. We developed a bispectral filter (BSF) to reduce this stripe noise. The BSF combines a Gaussian filter with an optimal estimation method applied to the differences between the data obtained in the split-window channels. A kernel function based on the physical processes of radiative transfer makes it possible to reduce stripe and random noise in retrieved SSTs without degrading the spatial resolution or generating bias. The Second-Generation Global Imager (SGLI) is an optical sensor on board the Global Change Observation Mission–Climate (GCOM-C) satellite. We applied the BSF to SGLI data and validated the retrieved SSTs. The validation results demonstrate the effectiveness of the BSF, which reduced stripe noise in the retrieved SGLI SSTs without blurring SST fronts and improved the accuracy of the SSTs by about 0.04 K (about 13%) in terms of the robust standard deviation.
Significance Statement
This method reduces stripe noise and improves the accuracy of SST data with minimal compromise of spatial resolution. The method assumes the relationship between the brightness temperature and the brightness temperature difference in the split window based on the physical background of atmospheric radiative transfer. The physical background of the data provides an easy solution to a complex problem. Although destriping generally requires a complex algorithm, our approach is based on a simple Gaussian filter and is easy to implement.
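The bispectral idea, that the split-window difference varies smoothly with the atmosphere while detector stripe noise does not, can be sketched with a plain Gaussian filter. This simplification stands in for the paper's optimal-estimation kernel, and the scan line and noise values are hypothetical:

```python
import math

def gaussian_kernel(radius, sigma):
    """Normalized 1-D Gaussian kernel."""
    w = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]

def smooth(values, radius, sigma):
    """Gaussian-smooth a 1-D sequence, renormalizing at the edges."""
    k = gaussian_kernel(radius, sigma)
    out = []
    for i in range(len(values)):
        num = den = 0.0
        for j, kj in enumerate(k):
            idx = i + j - radius
            if 0 <= idx < len(values):
                num += kj * values[idx]
                den += kj
        out.append(num / den)
    return out

def destripe_split_window(bt11, bt12, radius=3, sigma=1.5):
    """Smooth the split-window brightness temperature difference
    (BT11 - BT12) and recombine it with the 11-um channel to suppress
    per-detector stripes without smoothing the SST field itself."""
    diff_s = smooth([a - b for a, b in zip(bt11, bt12)], radius, sigma)
    return [a - d for a, d in zip(bt11, diff_s)]

# Hypothetical scan line: true difference of 1.0 K plus alternating
# +/-0.2 K stripe noise in the 12-um channel
bt11 = [290.0] * 20
bt12 = [289.0 + (0.2 if i % 2 else -0.2) for i in range(20)]
clean = destripe_split_window(bt11, bt12)
```

Because only the difference field is smoothed, sharp SST structure carried by both channels passes through untouched, which is the sense in which the method avoids blurring fronts.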
Abstract
Our study shows that intercomparisons among sea surface temperature (SST) products are influenced by the choice of SST reference and by the interpolation of the SST products. The influence of the reference SST depends on whether the reference SSTs are averaged onto a grid or evaluated at pointwise in situ locations (buoy or Argo observations), and on whether they are filtered by first-guess or climatology quality control (QC) algorithms. The influence of interpolation depends on whether the SST products are evaluated on their original grids or preprocessed onto common coarse grids.
The impacts of these factors are demonstrated in our assessments of eight widely used SST products (DOISST, MUR25, MGDSST, GAMSSA, OSTIA, GPB, CCI, CMC) relative to buoy observations: (a) when the reference SSTs are averaged onto 0.25° × 0.25° grid boxes, the magnitude of the biases is lowest in DOISST and MGDSST (<0.03°C), and the magnitude of the root-mean-square differences (RMSDs) is lowest in DOISST (0.38°C) and OSTIA (0.43°C); (b) when the same reference SSTs are evaluated at pointwise in situ locations, the standard deviations (SDs) are smallest in DOISST (0.38°C) and OSTIA (0.39°C) on 0.25° × 0.25° grids, but smallest in OSTIA (0.34°C) and CMC (0.37°C) on the products' original grids, showing the advantage of high-resolution analyses in resolving finer-scale SSTs; (c) when a looser QC algorithm is applied to the reference buoy observations, the SDs increase, and vice versa, although the relative performance of the products remains the same; and (d) when drifting-buoy or Argo observations are used as the reference, the magnitudes of the RMSDs and SDs become smaller, potentially due to changes in observing intervals. These results suggest that high-resolution SST analyses may have an advantage in intercomparisons.
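The metrics underlying this comparison can be sketched directly; the collocation values below are hypothetical, and bias, RMSD, and SD are linked by RMSD² = bias² + SD² (population SD):

```python
import math

def bias_rmsd_sd(product, reference):
    """Bias, root-mean-square difference (RMSD), and standard deviation
    (SD) of product-minus-reference SST differences at collocations."""
    diffs = [p - r for p, r in zip(product, reference)]
    n = len(diffs)
    bias = sum(diffs) / n
    rmsd = math.sqrt(sum(d * d for d in diffs) / n)
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / n)
    return bias, rmsd, sd

# Hypothetical collocations (degrees C): product runs slightly warm
product = [20.1, 21.3, 19.9, 22.2]
reference = [20.0, 21.0, 20.0, 22.0]
b, r, s = bias_rmsd_sd(product, reference)
print(b, r, s)  # bias 0.125; RMSD and SD satisfy r**2 == b**2 + s**2
```

The decomposition matters for the abstract's point (a) versus (b): averaging the reference changes the diffs themselves, while switching between RMSD and SD changes how a constant product bias is counted.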
Abstract
The Chebyshev polynomial fitting (CPF) method was proven effective for constructing reliable cotidal charts of the eight major tidal constituents (M2, S2, K1, O1, N2, K2, P1, and Q1) and six minor tidal constituents (2N2, J1, L2, Mu2, Nu2, and T2) near Hawaii in Part I and Part II, respectively. In this paper, the method is extended to estimate the harmonic constants of four long-period tidal constituents (Mf, Mm, Sa, and Ssa). The harmonic constants obtained with this method were compared with those from the TPXO9, Finite Element Solutions 2014 (FES2014), and Empirical Ocean Tide 20 (EOT20) models, using benchmark data from satellite altimeters and eight tide gauges. The accuracies of the Mf and Mm constituents derived from the CPF method are comparable to those from the models, while the accuracies of the Sa and Ssa constituents are significantly higher than those from the FES2014 and EOT20 models. The results indicate that the CPF method is also effective for estimating the harmonic constants of long-period tidal constituents. Furthermore, since the CPF method relies only on satellite altimeter data, it is easier to use than these ocean tide models.
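The harmonic-analysis step behind such estimates can be sketched as a least-squares fit at the known constituent periods; the CPF method's along-track Chebyshev fitting is not reproduced here, and the sea level record below is synthetic:

```python
import numpy as np

def fit_harmonic_constants(t_days, h, periods_days):
    """Least-squares estimate of tidal harmonic constants.

    Fits h(t) = sum_k [a_k cos(w_k t) + b_k sin(w_k t)] and returns a
    list of (amplitude, phase_deg) per constituent period, using the
    convention h = A cos(w t - phase).
    """
    cols = []
    for p in periods_days:
        w = 2.0 * np.pi / p
        cols.append(np.cos(w * t_days))
        cols.append(np.sin(w * t_days))
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), h, rcond=None)
    constants = []
    for k in range(len(periods_days)):
        c, s = coef[2 * k], coef[2 * k + 1]
        amp = float(np.hypot(c, s))
        phase = float(np.degrees(np.arctan2(s, c)) % 360.0)
        constants.append((amp, phase))
    return constants

# Synthetic one-year record containing Mf (~13.66 d) and Mm (~27.55 d)
t = np.arange(0.0, 365.0, 0.5)
h = (0.04 * np.cos(2 * np.pi / 13.66 * t - np.radians(30.0))
     + 0.02 * np.cos(2 * np.pi / 27.55 * t - np.radians(80.0)))
mf, mm = fit_harmonic_constants(t, h, [13.66, 27.55])
print(mf, mm)  # recovers (0.04, 30 deg) and (0.02, 80 deg)
```

Long-period constituents such as Sa and Ssa need correspondingly long, well-sampled records to separate, which is where along-track altimeter data and spatial smoothing such as the CPF come in.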