1. Introduction
Precipitation nowcasting has been a long-standing challenge in weather forecasting, as it plays a vital role in protecting lives and economic activity, particularly in sectors that rely heavily on accurate weather information. For example, destructive floods in South Korea have highlighted the importance of precise rainfall prediction (KMA 2018, 2021; Park et al. 2021). Additionally, accurate precipitation nowcasting helps drivers by predicting road conditions and enhances flight safety by providing weather guidance for regional aviation. Therefore, improving the accuracy of precipitation nowcasting models is necessary.
As precipitation events evolve nonlinearly through dynamic processes, such as the growth or decay of precipitation fields, novel deep learning-based approaches have been widely employed for precipitation nowcasting. In particular, deep learning models based on the U-Net (Ronneberger et al. 2015) convolutional neural network (CNN) have been proposed in various studies (e.g., Agrawal et al. 2019; Ayzel et al. 2020; Ko et al. 2022; Oh et al. 2023). U-Net is an architecture for image-to-image translation; in precipitation nowcasting, image-to-image translation amounts to taking precipitation images from past time steps and translating them into a future precipitation image (see Chase et al. (2023) for a machine learning tutorial for operational meteorology). The U-Net comprises an encoder and a decoder: the encoder downsamples the input image to capture context and extract features, whereas the decoder upsamples this information to generate an output. For precipitation nowcasting, the encoder receives multitemporal images and extracts features of the nonlinear evolution of precipitation events, and the decoder predicts the future image. For instance, Oh et al. (2023) used a U-Net-based model to predict moderate rainfall events (MREs; ≥1 mm h−1) and strong rainfall events (SREs; ≥10 mm h−1) with critical success indices (CSIs) of 0.6 and 0.4, respectively, at a 1-h lead time. However, the blurriness of U-Net-based predictions makes them less useful for forecasters (e.g., Ayzel et al. 2020; Ravuri et al. 2021). To preserve spatial detail, generative models (e.g., Goodfellow et al. 2014) have been used (e.g., Ravuri et al. 2021). A generative model learns the distribution of the training dataset and generates predictions from the learned distribution. While the spatial resolution of such models is significantly improved compared with that of U-Net-based models, both their accuracy and prediction lead times are limited. To extend lead times and improve accuracy, several additional model architectures have been proposed. For instance, Espeholt et al. (2022) implemented convolutional long short-term memory and reported improved precipitation nowcasting within 0–12-h lead times. Furthermore, the accuracy and lead times of generative models can be enhanced by rigorously accounting for the motion of precipitation fields through the continuity equation for fluids (Zhang et al. 2023).
Linear extrapolation through the motion of precipitation fields is another candidate for precipitation nowcasting and has been studied extensively. A simple way to obtain nowcasts is to examine the Eulerian and Lagrangian persistence of radar precipitation images (e.g., Germann and Zawadzki 2002, 2004; Germann et al. 2006; Turner et al. 2004). In Eulerian persistence, the radar image is assumed to be frozen in time (i.e., the forecast is the same at any lead time). It performs well at very short lead times; however, its performance decreases rapidly with lead time because the motion of precipitation fields is neglected. Lagrangian persistence provides a forecast by advecting the precipitation field with the field of radar echo motion. To calculate this motion, the optical flow technique has been widely used to estimate a vector field of motion between two radar images (e.g., Bridson 2008; Brox et al. 2004; Bowler et al. 2004; Seed et al. 2013; Bechini and Chandrasekar 2017; Ayzel et al. 2019; Pulkkinen et al. 2019). This simple approach has even been used for operational precipitation nowcasting (e.g., Lee et al. 2010; Turner et al. 2004). For example, the Korea Meteorological Administration (KMA) provides forecasts with lead times of up to 6 h using the McGill algorithm for precipitation nowcasting by Lagrangian extrapolation (MAPLE; Lee et al. 2010). However, radar echo tracking models have limitations, such as the failure to capture nonlinear motions and the growth or decay of precipitation fields.
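As an illustration of Lagrangian persistence, the following minimal Python sketch advects the latest radar frame with a precomputed motion field using a backward (semi-Lagrangian) warp. The function name, the use of OpenCV's remap, and the assumption of a flow field that is constant over the forecast horizon are ours, not a specific operational implementation.

```python
import numpy as np
import cv2

def lagrangian_persistence(frame, flow, n_steps=1):
    """Extrapolate a radar frame by advecting it with a motion field.

    frame: 2D array of rain rates at the latest observation time.
    flow:  (H, W, 2) array of per-pixel displacements (pixels per time step),
           e.g., estimated from the two most recent radar images.
    """
    h, w = frame.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    forecast = frame.astype(np.float32)
    for _ in range(n_steps):
        # Backward scheme: the value arriving at pixel x is taken
        # from the upstream location x - V.
        map_x = grid_x - flow[..., 0].astype(np.float32)
        map_y = grid_y - flow[..., 1].astype(np.float32)
        forecast = cv2.remap(forecast, map_x, map_y, cv2.INTER_LINEAR)
    return forecast
```

Eulerian persistence corresponds to the degenerate case flow = 0, for which the forecast equals the latest observation at every lead time.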
Optical flow models have also been compared directly with the recently proposed U-Net model (Ayzel et al. 2020). Compared with the U-Net model, the optical flow model produces more realistic localized structures without blurry effects. Conversely, regarding forecast accuracy, the relative performance of the two models strongly depends on rainfall intensity: the accuracy of the optical flow model for MREs is worse than that of the U-Net model, but the optical flow model provides more precise forecasts for SREs. Thus, a model based on optical flow has the potential to be competitive if it can also predict nonlinear motions of precipitation and/or newly developed precipitation.
In this study, we aimed to enhance the performance of precipitation nowcasting based on the optical flow technique. To reduce the error of optical flow-based nowcasting, we formulated a linear regression model that linearly combines the nowcasting results obtained from various sparse and dense optical flow algorithms provided by the open-source OpenCV library. In particular, the optical flow fields produced by the various algorithms exhibit different spatial characteristics, such as flow speed, flow angle, and flow spatial scales (see section 2 for details). The linear regression model extracts the features of the various optical flow models and minimizes the error between the nowcast and the ground truth. Additionally, using a U-Net-based network, we can extract features of nonlinear motion that cannot be captured through linear extrapolation with an optical flow field. Notably, video interpolation using optical flow and a deep learning network has been studied for capturing nonlinear motion between video frames (e.g., Jiang et al. 2018), and such techniques have been adopted in atmospheric science; for example, future frame generation from geostationary satellite datasets for tracking cloud movement has been examined (e.g., Vandal and Nemani 2021; Seo et al. 2022). In our model, this ability of a deep learning network to capture nonlinear motion is applied to tracking the nonlinear motion of precipitation fields.
The paper is organized as follows: section 2 summarizes the dataset used in this study, various algorithms for optical flow estimation, and their characteristics. Section 3 describes the model structure, including the regression model for generating input data and the U-Net architecture for learning the nonlinear evolution of precipitation fields, and reports the prediction results of the deep neural network. Section 4 presents the discussion and a summary of the findings.
2. Radar data and optical flow algorithms
a. Radar dataset and precipitation classification
Weather radars are useful for estimating instantaneous rain rates. Typical operating resolutions of weather radars are 1–5 min and 0.1–1 km for the X band, 5–10 min and 0.25–2 km for the C band, and 10–15 min and 1–4 km for the S band (e.g., Thorndahl et al. 2017). KMA operates S-band dual-polarization radars. In this study, we used the hybrid surface rainfall (HSR) radar reflectivity data produced by KMA. The HSR method synthesizes reflectivity at the hybrid surface, which is unaffected by ground clutter, beam blockage, nonmeteorological echoes, and the bright band (e.g., Kwon et al. 2015; Lyu et al. 2015, 2017). The spatial and temporal resolutions of the radar reflectivity data are 0.5 km and 10 min, respectively. The rain rate R was calculated from the radar reflectivity factor Z through the Z–R relation Z = 148R^1.59, which was derived from two-dimensional video disdrometer observations (Kim et al. 2016) and is currently employed for operational purposes at the KMA Weather Radar Center.
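For reference, inverting the Z–R relation gives R = (Z/148)^(1/1.59). A minimal sketch of the conversion from reflectivity in dBZ to rain rate follows; the function and variable names are ours.

```python
import numpy as np

def dbz_to_rain_rate(dbz, a=148.0, b=1.59):
    """Invert the Z-R relation Z = a * R**b for the rain rate R (mm/h).

    dbz: radar reflectivity in dBZ; Z = 10**(dbz / 10) in mm^6 m^-3.
    """
    z = 10.0 ** (np.asarray(dbz) / 10.0)
    return (z / a) ** (1.0 / b)

# Example: 40 dBZ corresponds to roughly 14 mm/h under this relation.
print(dbz_to_rain_rate(40.0))
```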
The classification of heavy rainfall events with rain rates of 30 mm h−1 or more in South Korea has been examined using a machine learning method, the self-organizing map (SOM; Jo et al. 2020). For instance, Jo et al. (2020) classified 1221 heavy rainfall events that occurred in summer over a 13-yr period. According to their findings, the SOM classification is consistent with the regional characteristics of heavy rainfall in South Korea. Heavy rainfall events over South Korea are concentrated in the western half of the Korean Peninsula owing to the mountain ranges in the eastern half of the peninsula and the eastward-moving nature of heavy rainfall under the westerly jet. Such rainfall events can be further classified into events occurring in the central regions and those occurring in the southern regions of South Korea because of the structure of the mountain ranges in South Korea.
In this study, we follow the regional classification based on the aforementioned SOM method. The rainfall events can be classified into three types: 1) rainfall in the central regions of South Korea (hereafter the Central case); 2) rainfall in the southern regions of South Korea (hereafter the Southern case); and 3) isolated rainfall events (hereafter the Isolated case), which can occur throughout South Korea and, according to Fourier analysis, typically have spatial scales in the range of 10–100 km. In addition to these three types, we considered heavy rainfall events occurring in the vicinity of Jeju Island, south of the South Korean mainland (hereafter the Jeju case). Hence, the heavy rainfall events were categorized into four types (i.e., the Central, Southern, Isolated, and Jeju cases).
Figure 1 shows examples of precipitation images for the four precipitation cases. The Central case is mainly influenced by flow motion from west to east, whereas the Southern and Jeju cases are affected by flow motions from south to north and from west to east. Heavy rainfall passing over the mainland accounts for a significant portion of the Central and Southern cases, whereas in the Jeju case heavy rainfall mainly passes over coastal areas, making precipitation generation by the inflow of water vapor more prominent than in the Central and Southern cases. Rainfall events confined to local regions are categorized separately as the Isolated case; these localized rainfalls grow and decay rapidly compared with the other precipitation types.
b. Optical flow estimation
1) Dense pyramid Lucas–Kanade algorithm
The Lucas–Kanade (LK) method (Lucas and Kanade 1981) calculates optical flow based on local motion constancy, i.e., the assumption that nearby pixels have the same displacement. However, the LK method has a limitation when large-scale motion dominates the image. To resolve this problem, the pyramid Lucas–Kanade (PLK) method (Bouguet 2000) iteratively applies LK to rescaled copies of the original image, capturing larger-scale motions than LK alone. A dense optical flow field can then be obtained using a sparse-to-dense interpolation.
2) Robust local optical flow algorithm
The robust local optical flow (RLOF) method (Senst et al. 2012) computes sparse optical flow while accounting for illumination changes, such as those that occur in radar images when precipitation develops or dissipates. RLOF may therefore help capture the development or dissipation of precipitation in optical flow-based rainfall forecasts.
3) Optical flow algorithm through principal component analysis (pca-flow)
In this algorithm, the dense optical flow field is assumed to be a weighted sum over a relatively small number of basis flow fields (Wulff and Black 2015). The sparse vectors are first computed, and the coefficients for the dense optical flow field are obtained by regression using the sparse feature matches.
4) Farnebäck’s algorithm
Farnebäck’s method (Farnebäck 2003) approximates local windows of the image frames with quadratic polynomials; the displacement field between two local intensity patterns can then be obtained from the polynomial coefficients. The optical flow calculated by this algorithm thus represents a slowly varying displacement field.
5) Total variation-L1 algorithm
The total variation-L1 (TV-L1) algorithm (Wedel et al. 2009) determines the optical flow by minimizing an energy functional that includes two terms: the first is the optical flow constraint, which assumes constant brightness during motion, and the second enforces the smoothness of the displacement fields.
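Schematically, the minimized energy can be written as follows; this is a simplified single-scale form with our own notation, not the full Wedel et al. (2009) formulation:

```latex
E(\mathbf{u}) = \int_{\Omega} \lambda \left| I_1\bigl(\mathbf{x} + \mathbf{u}(\mathbf{x})\bigr) - I_0(\mathbf{x}) \right|
+ \left| \nabla u_1 \right| + \left| \nabla u_2 \right| \, d\mathbf{x},
```

where the first (L1 data) term is the brightness-constancy constraint between frames I0 and I1, the second (total variation) term enforces the smoothness of the displacement components u1 and u2, and λ weights the two terms.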
6) Deepflow algorithm
The Deepflow algorithm (Weinzaepfel et al. 2013) calculates the optical flow by minimizing an energy functional that includes three terms. The first (data) term is a constraint that assumes constant brightness and gradient; the second (smoothness) term enforces the smoothness of the displacement fields; and the third (matching) term penalizes the difference between the flow field and a precomputed matching vector field.
c. Comparison of optical flow algorithms
Depending on the algorithm used to compute optical flow, the resulting flow field may have different properties across spatial scales, in terms of both flow magnitude and flow angle. To understand the characteristics of the aforementioned optical flow algorithms, we investigated the effects of these differences on precipitation predictions. For our analysis, we employed the OpenCV library (Bradski 2000) to estimate optical flow.
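As a sketch of how the six flow fields can be obtained in practice, the calls below use the OpenCV Python bindings; the module layout varies across OpenCV versions, the cv2.optflow functions require the opencv-contrib-python build, and all parameter choices are illustrative rather than the settings used in this study.

```python
import cv2

def estimate_flows(prev_img, next_img):
    """Dense optical flow between two radar images with the six algorithms
    of section 2b. prev_img/next_img: 8-bit single-channel arrays."""
    flows = {}

    # 1) Dense PLK via sparse PyrLK matches and sparse-to-dense interpolation.
    flows["PLK"] = cv2.optflow.calcOpticalFlowSparseToDense(prev_img, next_img)

    # 2) Dense RLOF (requires 3-channel input).
    prev_rgb = cv2.cvtColor(prev_img, cv2.COLOR_GRAY2BGR)
    next_rgb = cv2.cvtColor(next_img, cv2.COLOR_GRAY2BGR)
    flows["RLOF"] = cv2.optflow.calcOpticalFlowDenseRLOF(prev_rgb, next_rgb, None)

    # 3) PCA-flow.
    flows["PCA"] = cv2.optflow.createOptFlow_PCAFlow().calc(prev_img, next_img, None)

    # 4) Farneback's polynomial expansion.
    flows["Farneback"] = cv2.calcOpticalFlowFarneback(
        prev_img, next_img, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    # 5) TV-L1.
    flows["TV-L1"] = cv2.optflow.createOptFlow_DualTVL1().calc(prev_img, next_img, None)

    # 6) Deepflow.
    flows["Deepflow"] = cv2.optflow.createOptFlow_DeepFlow().calc(prev_img, next_img, None)

    return flows  # each entry is an (H, W, 2) float32 displacement field
```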
Figure 2 shows the optical flow fields estimated by the algorithms listed in section 2b for the Jeju case as an example. The flow motions corresponding to precipitation are satisfactorily captured by all algorithms, but the detailed fine structure depends on the algorithm used. For example, the optical flow field produced by the TV-L1 algorithm best captures small-scale fluctuations, whereas the field produced by Deepflow mainly contains bulk translational motion.
The statistical comparisons of all algorithms are shown in Fig. 3, again using the Jeju case of Fig. 2 as an example. Several points are noteworthy. 1) According to the probability density functions (PDFs) of flow velocity (Fig. 3b) and flow angle (Fig. 3c) for Deepflow, the flow velocity is smaller and the range of flow angles is narrower than for the other algorithms. Such a smooth flow field is beneficial for tracking the bulk motions toward the east (θopt ∼ [−30°, 0°]) and toward the northeast (θopt > 30°). Moreover, according to the power spectral density (PSD), the motions with the scale of
Based on the structure analysis through the PSD of the optical flow fields, we found that the flow properties across spatial scales strongly depend on the details of each algorithm. In the following section, we describe a regression model that tracks these multiscale features more accurately.
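As one concrete way to compute such a radially averaged PSD, the following sketch uses our own binning conventions and is not necessarily the exact spectral analysis behind Fig. 3:

```python
import numpy as np

def radial_psd(field, dx=0.5):
    """Radially averaged power spectral density of a 2D field.

    field: 2D array, e.g., one component of an optical flow field.
    dx:    grid spacing in km (0.5 km for the HSR data).
    Returns bin-center wavenumbers (cycles per km) and mean spectral power.
    """
    ny, nx = field.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    kx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
    ky = np.fft.fftshift(np.fft.fftfreq(ny, d=dx))
    kxx, kyy = np.meshgrid(kx, ky)
    kmag = np.hypot(kxx, kyy).ravel()
    bins = np.linspace(0.0, kmag.max(), 101)
    idx = np.digitize(kmag, bins)
    psd = np.array([power.ravel()[idx == i].mean() if np.any(idx == i)
                    else np.nan for i in range(1, len(bins))])
    return 0.5 * (bins[1:] + bins[:-1]), psd
```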
3. Precipitation nowcasting using multiple linear regression based on multiple optical flow algorithms and a U-Net convolutional neural network
In this section, we describe the precipitation nowcasting model, which includes 1) multiple linear regression to reduce the error between the linearly extrapolated future frame and the ground truth and 2) a U-Net-based network for capturing nonlinear features that cannot be captured by linear extrapolation using optical flow. An overview of the model structure is illustrated in Fig. 4.
a. Multiple linear regression
As demonstrated by the statistics shown in Fig. 3, frame warping through optical flow can depend heavily on the choice of optical flow algorithm. To capture the evolution of multiscale, multitemporal motions more precisely, we propose a linear regression-based method that determines the relative significance of the optical flow algorithms listed in section 2, thereby enabling improved forecasts via optical flow warping.
The regression stage comprises three steps: 1) calculation of the optical flow fields (Vopt,m1, Vopt,m2, …, Vopt,m6) and production of the future frames (If,m1, If,m2, …, If,m6) using the optical flow algorithms listed in section 2b; 2) linear regression using the gradient descent method; and 3) frame interpolation using the resulting coefficients.
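A minimal sketch of step 2 is given below, assuming that Jreg is the mean squared error between the linearly blended warped frame and the observed frame; the exact form of Jreg, the learning rate, and the array layout are our assumptions.

```python
import numpy as np

def fit_flow_weights(warped_frames, truth, lr=1e-3, n_epochs=500):
    """Gradient descent on J_reg = mean((sum_j w_j * I_f,j - I_truth)**2).

    warped_frames: the future frames I_f,m1 ... I_f,m6, one per optical
                   flow algorithm, each an (H, W) array.
    truth:         the observed frame at the forecast time.
    """
    X = np.stack([f.ravel() for f in warped_frames], axis=1)  # (npix, 6)
    y = truth.ravel()
    w = np.full(X.shape[1], 1.0 / X.shape[1])   # equal initial weights (cf. Fig. 5)
    for _ in range(n_epochs):
        residual = X @ w - y                    # blended warp minus ground truth
        w -= lr * 2.0 * (X.T @ residual) / y.size  # gradient step on J_reg
    return w

# The blended future frame is then sum_j w_j * I_f,j.
```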
The time evolution of the coefficients ωj is displayed in Fig. 5. Notably, the number of epochs is the number of steps required to minimize the error Jreg, and the initial coefficients are the same for all algorithms: ωPLK,0 = ωTV−L1,0 = ωDeepflow,0 = ωPCA,0 = ωFarneback,0 = ωRLOF,0. In particular, for all cases, the optical flow fields mainly describing the bulk motion (Deepflow) and those covering a wide range of velocities and angles (Farnebäck’s method) have larger weighting factors than the other algorithms. Moreover, the ratio of the weighting factors of PCA (or TV-L1) to RLOF depends on the precipitation case, and such algorithms could contribute to describing the warping results produced by small-scale motions at
b. Structure of the deep learning network
Training nonlinear movement through a deep learning–based model with a multitemporal input layer has recently been proposed (e.g., Seo et al. 2022). In this study, we followed this approach and adopted a deep learning model based on U-Net to learn the nonlinear motion of precipitation events. The model consists of a contracting path (encoding stage) and an expanding path (decoding stage), as illustrated in Fig. 6. The contracting and expanding paths include four downsampling and four upsampling steps, respectively. Here, we stacked radar images at three time steps (It−10, It, and It+10) as the input for predicting the image at the next time step.
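A compact PyTorch sketch of such an encoder-decoder is given below; the channel widths, activation choices, and output head are our assumptions and do not reproduce the exact configuration of Fig. 6.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class UNet(nn.Module):
    """U-Net with four downsampling and four upsampling steps; the input
    stacks three radar frames as channels, the output is one future frame."""
    def __init__(self, in_ch=3, base=32):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8, base * 16]
        self.downs = nn.ModuleList()
        c = in_ch
        for co in chs:                       # encoder blocks (last = bottleneck)
            self.downs.append(conv_block(c, co))
            c = co
        self.pool = nn.MaxPool2d(2)
        self.ups = nn.ModuleList()
        self.dec = nn.ModuleList()
        for co in reversed(chs[:-1]):        # decoder with skip connections
            self.ups.append(nn.ConvTranspose2d(c, co, 2, stride=2))
            self.dec.append(conv_block(2 * co, co))
            c = co
        self.head = nn.Conv2d(c, 1, 1)

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.downs):
            x = block(x)
            if i < len(self.downs) - 1:
                skips.append(x)              # keep features for the skip path
                x = self.pool(x)
        for up, dec, skip in zip(self.ups, self.dec, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))
        return self.head(x)
```

With four pooling steps, the input height and width must be divisible by 16, which the 960 × 960 pixel crops used below satisfy.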
We trained the U-Net-based model on precipitation images and evaluated its future frame prediction performance. The HSR dataset from 2018, 2019, and 2021 was used for training, and those from 2022 and 2020 were used as the validation and test datasets, respectively (the total sample numbers of the training, validation, and test datasets are 157 680, 52 560, and 52 560, respectively). For preprocessing and augmentation, we first cropped the central region of each image to obtain HSR data covering 480 km × 480 km (960 × 960 pixels) and applied random horizontal flip, random vertical flip, and random 90° rotation augmentations. All experiments were conducted on eight NVIDIA A100 GPUs, and training the model took roughly 120 h. To set the hyperparameters for training, we conducted a set of test experiments within a parameter range, i.e., initial learning rate = 10−5–10−4 and batch size = 8–64. Adaptive moment estimation (Adam) was employed as the optimizer, and the StepLR learning rate scheduler was used for efficient training. The dependence on the StepLR scheduler parameters, such as a step size of 10–30 epochs and a gamma of 0.1–0.5, was also examined (i.e., the learning rate decreases by a factor of gamma at each step size). We evaluated the learning curves (loss versus epoch) of the training and validation sets to prevent overfitting. The best hyperparameter set, in terms of the lowest validation loss, is summarized as follows: number of epochs = 100, initial learning rate = 10−4, batch size = 8, step size = 15 epochs, and gamma = 0.5.
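With the best hyperparameters above, the optimizer and scheduler can be configured as in the following sketch; the mean-squared-error loss and the data loader interface are our assumptions.

```python
import torch
from torch.optim.lr_scheduler import StepLR

def train(model, train_loader, n_epochs=100):
    """Training loop with the best hyperparameters of section 3b."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = StepLR(optimizer, step_size=15, gamma=0.5)  # halve lr every 15 epochs
    criterion = torch.nn.MSELoss()  # loss choice is our assumption
    for _ in range(n_epochs):
        for inputs, target in train_loader:  # (3-frame stack, future frame) batches
            optimizer.zero_grad()
            loss = criterion(model(inputs), target)
            loss.backward()
            optimizer.step()
        scheduler.step()
```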
Note that the model produces future predictions recursively. For instance, the model receives the images at −10, 0, and +10 min as input to predict the image at +20 min; it then receives the images at 0, +10, and +20 min as input to predict +30 min. By repeating this process, we obtained the nowcasting outputs at 0–3-h lead times.
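The recursion can be written as a short loop, as in the sketch below; the tensor shapes assume the three-frame input of section 3b, 18 steps of 10 min cover the 3-h horizon, and the function name is ours.

```python
import torch

@torch.no_grad()
def recursive_nowcast(model, frames, n_steps=18):
    """Roll the model forward to 0-3-h lead times (18 x 10-min steps).

    frames: list of the three most recent radar frames, each an (H, W) tensor.
    """
    history = list(frames)
    outputs = []
    for _ in range(n_steps):
        x = torch.stack(history[-3:], dim=0).unsqueeze(0)  # (1, 3, H, W)
        pred = model(x).squeeze(0).squeeze(0)              # (H, W)
        outputs.append(pred)
        history.append(pred)  # feed the prediction back as the newest input
    return outputs
```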
c. Results
We first compared the performance of the models with and without the U-Net. According to the root-mean-square error (RMSE) and CSI values for the full test dataset shown in Fig. 7, the U-Net contributes significantly to enhancing the model performance at 0–3-h lead times. We then compared the errors of the deep learning model that includes multiple linear regression with those of models using only a single optical flow algorithm. Hereafter, the regression model denotes the deep learning model using multiple linear regression and the U-Net, whereas a deep learning model using a single optical flow algorithm and the U-Net is referred to as a single model.
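For reference, the pointwise scores used in this section (RMSE and CSI here; POD and FAR in Fig. 10) can be computed as in the following sketch, where thresholds of 1 and 10 mm h−1 correspond to MREs and SREs; the implementation details are ours.

```python
import numpy as np

def verification_scores(pred, obs, threshold=1.0):
    """Pointwise RMSE and categorical scores at a rain-rate threshold (mm/h).

    CSI = hits / (hits + misses + false alarms)
    POD = hits / (hits + misses)
    FAR = false alarms / (hits + false alarms)
    """
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    p, o = pred >= threshold, obs >= threshold
    hits = np.sum(p & o)
    misses = np.sum(~p & o)
    false_alarms = np.sum(p & ~o)
    csi = hits / (hits + misses + false_alarms)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return rmse, csi, pod, far

# threshold=1.0 corresponds to MREs and threshold=10.0 to SREs.
```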
In Fig. 8, we compare the RMSE values (top panels) of the regression model and of the single models adopting the six different algorithms. A total of 10 samples were randomly chosen for each precipitation type to evaluate the performance (40 samples in total). We confirmed that the improvement lasts up to a 3-h lead time: the regression model outperforms all the single models. Compared with the results obtained by Deepflow, those of the regression model were better in terms of the RMSE values owing to the contribution of small-scale flow motions at
Subsequently, we report the precipitation nowcasting results, focusing on performance differences among the precipitation types. Figure 9 illustrates examples of nowcasting outputs at a 1.5-h lead time. Optical flow techniques provide accurate predictions for precipitation passing over the mainland owing to their ability to capture the motion vectors of precipitation fields. The Central and Southern cases represent around 50% of all precipitation types in South Korea (Jo et al. 2020), indicating that the regression model proposed in this study will be highly useful for precipitation nowcasting during heavy rainfall periods from June to September on the Korean Peninsula. Conversely, for the Jeju and Isolated cases, the development or decay of precipitation cannot be captured, resulting in poor accuracy. For instance, in the Jeju case, heavy rainfall occurred on Jeju Island, but the nowcast placed the heavy rainfall outside the island.
Figure 10 displays the CSI, POD, and FAR values of the four precipitation types for MREs (left panels) and SREs (right panels). As specified previously, each precipitation type includes 10 samples. A few points are noteworthy. 1) For SREs, the performance for the Central and Southern cases is generally better than that for the Jeju case: their CSI and POD are higher. The inflow of water vapor significantly affects precipitation development, but such effects cannot be represented by nowcasts that simply advect the precipitation field of the previous time step with an optical flow field; thus, the nowcasting results for the Jeju case are generally underestimated. Notably, the FAR of the Jeju case at 2–3-h lead times is smaller than that of the Central and Southern cases. 2) For MREs, the CSI score of the Jeju case is slightly higher than those of the other types because of the underestimation of precipitation (i.e., the smallest FAR score). 3) The performance for the Isolated case, measured by all scores, is poorer than that for the other types. This is partly because pointwise verification yields higher FAR and lower POD values when evaluating precipitation fields with smaller spatial scales. Moreover, localized precipitation fields undergo rapid development or decay, making them difficult to track using flow motion alone.
4. Summary and discussion
In this study, we developed a regression model using multiple optical flow algorithms to improve the performance of optical flow-based precipitation nowcasting. We first compared the results produced by currently available algorithms and found that the PDF and PSD of the optical flow field strongly depend on the selected algorithm. In particular, the motions with the scales,
We further discuss the feasibility of precipitation nowcasting using optical flow compared with the recently reported U-Net-based CNN model (e.g., Oh et al. 2023). In particular, for strong rainfall events (≥10 mm h−1), the CSI score of the optical flow-based regression model at 0–3-h lead times is ∼0.2, which is comparable with the results obtained by the U-Net-based CNN model trained on a dataset of the Korean Peninsula (Oh et al. 2023). Moreover, the nowcasts produced by the optical flow-based regression model are less blurry than the U-Net-produced nowcasts. According to the PSD of the nowcasting outputs produced by the optical flow-based regression model, the smallest scale resolved in the output is approximately 5 km, whereas the effective resolution of the U-Net model is on the order of 10 km (e.g., Ayzel et al. 2020; Ravuri et al. 2021).
Finally, we summarize the limitations of this study and directions for future work. Although the proposed model captures nonlinear motions through a deep learning network, resulting in more accurate motion fields of rainfall regions, it still slightly overestimates the motion of precipitation fields in cases of rapidly developing and/or dissipating precipitation. In subsequent studies, we will refine the proposed model by examining the characteristics of the multitemporal motions present in precipitation fields.
Acknowledgments.
The authors thank the anonymous referees for their constructive comments and suggestions that substantially improved the manuscript. This work was funded by the KMA Research and Development program “Developing AI technology for weather forecasting” under Grant KMA 2021-00121.
Data availability statement.
The radar data are freely available at the released website of the Korea Meteorological Administration (KMA) data (https://data.kma.go.kr/cmmn/main.do). The source code can be obtained from the GitHub repository (https://github.com/JHHa223/Optflow_code/tree/main).
REFERENCES
Agrawal, S., L. Barrington, C. Bromberg, J. Burge, C. Gazen, and J. Hickey, 2019: Machine learning for precipitation nowcasting from radar images. arXiv, 1912.12132v1, https://arxiv.org/abs/1912.12132.
Ayzel, G., M. Heistermann, and T. Winterrath, 2019: Optical flow models as an open benchmark for radar-based precipitation nowcasting (rainymotion v0.1). Geosci. Model Dev., 12, 1387–1402, https://doi.org/10.5194/gmd-12-1387-2019.
Ayzel, G., T. Scheffer, and M. Heistermann, 2020: RainNet v1.0: A convolutional neural network for radar-based precipitation nowcasting. Geosci. Model Dev., 13, 2631–2644, https://doi.org/10.5194/gmd-13-2631-2020.
Bechini, R., and V. Chandrasekar, 2017: An enhanced optical flow technique for radar nowcasting of precipitation and winds. J. Atmos. Oceanic Technol., 34, 2637–2658, https://doi.org/10.1175/JTECH-D-17-0110.1.
Bouguet, J.-Y., 2000: Pyramidal implementation of the Affine Lucas Kanade feature tracker description of the algorithm. Intel Corporation Microprocessor Research Lab Tech. Rep., 10 pp., http://robots.stanford.edu/cs223b04/algo_affine_tracking.pdf.
Bowler, N. E. H., C. E. Pierce, and A. Seed, 2004: Development of a precipitation nowcasting algorithm based upon optical flow techniques. J. Hydrol., 288, 74–91, https://doi.org/10.1016/j.jhydrol.2003.11.011.
Bradski, G., 2000: The OpenCV library. Dr. Dobb’s J. Software Tools, 25, 120–123.
Bridson, R., 2008: Fluid Simulation for Computer Graphics. Taylor & Francis, 246 pp.
Brox, T., A. Bruhn, N. Papenberg, and J. Weickert, 2004: High accuracy optical flow estimation based on a theory for warping. Computer Vision—ECCV 2004, T. Pajdla and J. Matas, Eds., Lecture Notes in Computer Science, Vol. 3024, Springer, 25–36.
Chase, R. J., D. R. Harrison, G. M. Lackmann, and A. McGovern, 2023: A machine learning tutorial for operational meteorology. Part II: Neural networks and deep learning. Wea. Forecasting, 38, 1271–1293, https://doi.org/10.1175/WAF-D-22-0187.1.
Espeholt, L., and Coauthors, 2022: Deep learning for twelve hour precipitation forecasts. Nat. Commun., 13, 5145, https://doi.org/10.1038/s41467-022-32483-x.
Farnebäck, G., 2003: Two-frame motion estimation based on polynomial expansion. Image Analysis SCIA 2003, J. Bigun and T. Gustavsson, Eds., Lecture Notes in Computer Science, Vol. 2749, Springer, 363–370, https://doi.org/10.1007/3-540-45103-X_50.
Germann, U., and I. Zawadzki, 2002: Scale-dependence of the predictability of precipitation from continental radar images. Part I: Description of the methodology. Mon. Wea. Rev., 130, 2859–2873, https://doi.org/10.1175/1520-0493(2002)130<2859:SDOTPO>2.0.CO;2.
Germann, U., and I. Zawadzki, 2004: Scale dependence of the predictability of precipitation from continental radar images. Part II: Probability forecasts. J. Appl. Meteor., 43, 74–89, https://doi.org/10.1175/1520-0450(2004)043<0074:SDOTPO>2.0.CO;2.
Germann, U., I. Zawadzki, and B. Turner, 2006: Predictability of precipitation from continental radar images. Part IV: Limits to prediction. J. Atmos. Sci., 63, 2092–2108, https://doi.org/10.1175/JAS3735.1.
Goodfellow, I. J., J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, 2014: Generative adversarial nets. Advances in Neural Information Processing Systems, Z. Ghahramani, Ed., Vol. 27, Curran Associates, Inc., 2672–2680.
Jiang, H., D. Sun, V. Jampani, M.-H. Yang, E. Learned-Miller, and J. Kautz, 2018: Super SloMo: High quality estimation of multiple intermediate frames for video interpolation. 2018 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Salt Lake City, UT, Institute of Electrical and Electronics Engineers, 9000–9008, https://doi.org/10.1109/CVPR.2018.00938.
Jo, E., C. Park, S.-W. Son, J.-W. Roh, G.-W. Lee, and Y.-H. Lee, 2020: Classification of localized heavy rainfall events in South Korea. Asia-Pac. J. Atmos. Sci., 56, 77–88, https://doi.org/10.1007/s13143-019-00128-7.
Kim, H.-L., M.-K. Suk, H.-S. Park, G.-W. Lee, and J.-S. Ko, 2016: Dual-polarization radar rainfall estimation in Korea according to raindrop shapes obtained by using a 2-D video disdrometer. Atmos. Meas. Tech., 9, 3863–3878, https://doi.org/10.5194/amt-9-3863-2016.
KMA, 2018: Meteorological disaster statistics. Accessed 1 September 2021, http://www.weather.go.kr/weather/lifenindustry/disaster_02.jsp.
KMA, 2021: Abnormal Climate Report 2020 (in Korean). KMA Tech. Rep., 212 pp., http://www.climate.go.kr/home/bbs/view.php?code=93&bname=abnormal&vcode=6494.
Ko, J., K. Lee, H. Hwang, S.-G. Oh, S.-W. Son, and K. Shin, 2022: Effective training strategies for deep-learning-based precipitation nowcasting and estimation. Comput. Geosci., 161, 105072, https://doi.org/10.1016/j.cageo.2022.105072.
Kwon, S., S.-H. Jung, and G. Lee, 2015: Inter-comparison of radar rainfall rate using constant altitude plan position indicator and hybrid surface rainfall maps. J. Hydrol., 531, 234–247, https://doi.org/10.1016/j.jhydrol.2015.08.063.
Lee, H. C., Y. H. Lee, J.-C. Ha, D.-E. Chang, A. Bellon, I. Zawadzki, and G. Lee, 2010: McGill Algorithm for Precipitation nowcasting by Lagrangian Extrapolation (MAPLE) applied to the South Korean radar network. Part II: Real-time verification for the summer season. Asia-Pac. J. Atmos. Sci., 46, 383–391, https://doi.org/10.1007/s13143-010-1009-9.
Lucas, B. D., and T. Kanade, 1981: An iterative image registration technique with an application to stereo vision. Proc. Seventh Int. Joint Conf. on Artificial Intelligence (IJCAI), Vancouver, British Columbia, Canada, 674–679, https://www.ri.cmu.edu/pub_files/pub3/lucas_bruce_d_1981_1/lucas_bruce_d_1981_1.pdf.
Lyu, G., S.-H. Jung, K.-Y. Nam, S. Kwon, C.-R. Lee, and G. Lee, 2015: Improvement of radar rainfall estimation using radar reflectivity data from the hybrid lowest elevation angles. J. Korean Earth Sci. Soc., 36, 109–124, https://doi.org/10.5467/JKESS.2015.36.1.109.
Lyu, G., S.-H. Jung, Y. Oh, H.-M. Park, and G. Lee, 2017: Accuracy evaluation of composite Hybrid Surface Rainfall (HSR) using KMA weather radar network. J. Korean Earth Sci. Soc., 38, 496–510, https://doi.org/10.5467/JKESS.2017.38.7.496.
Oh, S.-G., C. Park, S.-W. Son, J. Ko, K. Shin, S. Kim, and J. Park, 2023: Evaluation of deep-learning-based very short-term rainfall forecasts in South Korea. Asia-Pac. J. Atmos. Sci., 59, 239–255, https://doi.org/10.1007/s13143-022-00310-4.
Park, C., and Coauthors, 2021: Record-breaking summer rainfall in South Korea in 2020: Synoptic characteristics and the role of large-scale circulations. Mon. Wea. Rev., 149, 3085–3100, https://doi.org/10.1175/MWR-D-21-0051.1.
Pulkkinen, S., D. Nerini, A. A. Pérez Hortal, C. Velasco-Forero, A. Seed, U. Germann, and L. Foresti, 2019: Pysteps: An open-source Python library for probabilistic precipitation nowcasting (v1.0). Geosci. Model Dev., 12, 4185–4219, https://doi.org/10.5194/gmd-12-4185-2019.
Ravuri, S., and Coauthors, 2021: Skilful precipitation nowcasting using deep generative models of radar. Nature, 597, 672–677, https://doi.org/10.1038/s41586-021-03854-z.
Ronneberger, O., P. Fischer, and T. Brox, 2015: U-Net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, N. Navab et al., Eds., Lecture Notes in Computer Science, Vol. 9351, Springer, 234–241, https://doi.org/10.1007/978-3-319-24574-4_28.
Seed, A. W., C. E. Pierce, and K. Norman, 2013: Formulation and evaluation of a scale decomposition-based stochastic precipitation nowcast scheme. Water Resour. Res., 49, 6624–6641, https://doi.org/10.1002/wrcr.20536.
Senst, T., V. Eiselein, and T. Sikora, 2012: Robust local optical flow for feature tracking. IEEE Trans. Circ. Syst. Video Tech., 22, 1377–1387, https://doi.org/10.1109/TCSVT.2012.2202070.
Seo, M., Y. Choi, H. Ryu, H. Park, H. Bae, H. Lee, and W. Seo, 2022: Intermediate and future frame prediction of geostationary satellite imagery with warp and refine network. AAAI 2022 Fall Symp.: The Role of AI in Responding to Climate Challenges, Arlington, VA, 5 pp., https://www.climatechange.ai/papers/aaaifss2022/25.
Thorndahl, S., T. Einfalt, P. Willems, J. E. Nielsen, M.-C. ten Veldhuis, K. Arnbjerg-Nielsen, M. R. Rasmussen, and P. Molnar, 2017: Weather radar rainfall data in urban hydrology. Hydrol. Earth Syst. Sci., 21, 1359–1380, https://doi.org/10.5194/hess-21-1359-2017.
Turner, B. J., I. Zawadzki, and U. Germann, 2004: Predictability of precipitation from continental radar images. Part III: Operational nowcasting implementation (MAPLE). J. Appl. Meteor., 43, 231–248, https://doi.org/10.1175/1520-0450(2004)043<0231:POPFCR>2.0.CO;2.
Vandal, T. J., and R. R. Nemani, 2021: Temporal interpolation of geostationary satellite imagery with optical flow. IEEE Trans. Neural Netw. Learn. Syst., 34, 3245–3254, https://doi.org/10.1109/TNNLS.2021.3101742.
Wedel, A., T. Pock, C. Zach, H. Bischof, and D. Cremers, 2009: An improved algorithm for TV-L1 optical flow. Statistical and Geometrical Approaches to Visual Motion Analysis, D. Cremers et al., Eds., Lecture Notes in Computer Science, Vol. 5604, Springer, 23–45, https://doi.org/10.1007/978-3-642-03061-1_2.
Weinzaepfel, P., J. Revaud, Z. Harchaoui, and C. Schmid, 2013: DeepFlow: Large displacement optical flow with deep matching. 2013 IEEE Int. Conf. on Computer Vision, Sydney, New South Wales, Australia, Institute of Electrical and Electronics Engineers, 1385–1392, https://doi.org/10.1109/ICCV.2013.175.
Wulff, J., and M. J. Black, 2015: Efficient sparse-to-dense optical flow estimation using a learned basis and layers. 2015 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Boston, MA, Institute of Electrical and Electronics Engineers, 120–130, https://doi.org/10.1109/CVPR.2015.7298607.
Zhang, Y., M. Long, K. Chen, L. Xing, R. Jin, M. I. Jordan, and J. Wang, 2023: Skilful nowcasting of extreme precipitation with NowcastNet. Nature, 619, 526–532, https://doi.org/10.1038/s41586-023-06184-4.