Abstract
Accurate estimation of high-resolution hourly rainfall plays an important role in many meteorological and hydrological applications. Rain gauges and Doppler weather radars are the prevalent instruments for rainfall observation. While rain gauges provide precise measurements at particular locations, their spatial coverage is sparse. Conversely, Doppler weather radar exhibits nearly the opposite characteristics: broad spatial coverage but less precise point estimates. Leveraging the complementary strengths of the two instruments can mitigate their individual limitations. In this study, we introduce a data-driven model named Multisource Spatial Merging Net (MSMN) for radar–gauge merging. The model combines spatial interpolation of rain gauge observations with correction of the interpolation errors. Radar reflectivity above the estimated point serves as a feature capturing local spatial precipitation variability and is incorporated into the interpolation weights and the bias-correction term via specifically designed neural networks. Moreover, we propose a training loss function that better handles rainfall areas with large interpolation errors. The proposed model is applied to rainfall estimation in the middle and lower reaches of the Yangtze River. The evaluation results indicate that it outperforms conventional single- and double-source methods on multiple statistical evaluation metrics, including mean absolute error (MAE), biased MAE, mean relative error (MRE), and the correlation coefficient. Furthermore, the efficacy of the proposed loss function is validated by its enhanced performance in areas of intense rainfall.
© 2025 American Meteorological Society. This published article is licensed under the terms of the default AMS reuse license. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).
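To make the merging idea described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of a radar–gauge merging model of this general form: the target-point estimate is a weighted sum of neighboring gauge observations plus an additive bias correction, where both the interpolation weights and the bias term are produced by small neural networks that ingest radar reflectivity features, and the loss up-weights samples with large errors. All module names, layer sizes, input features, and the exact loss form are illustrative assumptions, not the MSMN architecture itself.

```python
# Hypothetical sketch of radar-gauge merging: interpolation weights and a bias
# correction produced by small neural networks fed with radar reflectivity.
import torch
import torch.nn as nn


class RadarGaugeMerger(nn.Module):
    def __init__(self, n_radar_feats: int, hidden: int = 32):
        super().__init__()
        # Weight network: per-gauge features (distance + radar reflectivity)
        # mapped to unnormalized interpolation weights.
        self.weight_net = nn.Sequential(
            nn.Linear(1 + n_radar_feats, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        # Bias network: radar reflectivity above the target point mapped to an
        # additive correction of the interpolated value.
        self.bias_net = nn.Sequential(
            nn.Linear(n_radar_feats, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, gauge_rain, gauge_dist, gauge_radar, target_radar):
        # gauge_rain:   (B, G)    rainfall observed at G neighboring gauges
        # gauge_dist:   (B, G)    distance from each gauge to the target point
        # gauge_radar:  (B, G, F) radar reflectivity features at each gauge
        # target_radar: (B, F)    radar reflectivity features above the target
        feats = torch.cat([gauge_dist.unsqueeze(-1), gauge_radar], dim=-1)
        weights = torch.softmax(self.weight_net(feats).squeeze(-1), dim=-1)
        interp = (weights * gauge_rain).sum(dim=-1)      # weighted interpolation
        bias = self.bias_net(target_radar).squeeze(-1)   # bias-correction term
        return interp + bias


def error_weighted_loss(pred, obs, alpha: float = 1.0):
    # Illustrative loss that up-weights samples with large absolute errors,
    # in the spirit of emphasizing areas where plain interpolation fails.
    err = (pred - obs).abs()
    return ((1.0 + alpha * err.detach()) * err).mean()


if __name__ == "__main__":
    B, G, F = 8, 10, 4
    model = RadarGaugeMerger(n_radar_feats=F)
    pred = model(torch.rand(B, G), torch.rand(B, G),
                 torch.rand(B, G, F), torch.rand(B, F))
    loss = error_weighted_loss(pred, torch.rand(B))
    loss.backward()
    print(float(loss))
```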