A Deep Learning Model for Precipitation Nowcasting Using Multiple Optical Flow Algorithms

Ji-Hoon Ha, National Institute of Meteorological Sciences, Jeju, South Korea (https://orcid.org/0000-0001-7670-4897)

and

Hyesook Lee, National Institute of Meteorological Sciences, Jeju, South Korea
Open access

Abstract

The optical flow technique has advantages in motion tracking and has long been employed in precipitation nowcasting to track the motion of precipitation fields using ground radar datasets. However, the performance and forecast time scale of models based on optical flow are limited. Here, we apply deep learning to optical flow estimation to extend its forecast time scale and enhance nowcasting performance. We show that a deep learning model can better capture both multispatial and multitemporal motions of precipitation events than traditional optical flow estimation methods. The model comprises two components: 1) a regression process based on multiple optical flow algorithms, which captures multispatial features more accurately than a single optical flow algorithm; and 2) a U-Net-based network that trains multitemporal features of precipitation movement. We evaluated the model performance on precipitation cases in South Korea. In particular, the regression process minimizes errors by combining multiple optical flow algorithms with a gradient descent method and outperforms models using only a single optical flow algorithm up to a 3-h lead time. Additionally, the U-Net plays a crucial role in capturing nonlinear motion that cannot be captured by a simple advection model through traditional optical flow estimation. Consequently, we suggest that the proposed optical flow estimation method with deep learning could significantly improve the performance of current operational nowcasting models, which are based on traditional optical flow methods.

Significance Statement

The purpose of this study is to improve the accuracy of short-term rainfall prediction based on the optical flow methods that have been employed for operational precipitation nowcasting. By utilizing open-source libraries, such as OpenCV, and commonly applied machine learning techniques, such as multiple linear regression and U-Net networks, we propose an accessible model for enhancing prediction accuracy. We expect that this improvement in prediction accuracy will substantially enhance the practical utility of operational precipitation nowcasting.

© 2023 American Meteorological Society. This published article is licensed under the terms of the default AMS reuse license. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Ji-Hoon Ha, jhha223@korea.kr


1. Introduction

Precipitation nowcasting has been a long-standing challenge in weather forecasting, as it plays a vital role in protecting human life and economic activity, particularly in sectors that rely heavily on accurate weather information. For example, destructive floods in South Korea have highlighted the importance of precise rainfall prediction (KMA 2018, 2021; Park et al. 2021). Additionally, accurate precipitation nowcasting helps drivers by predicting road conditions and enhances flight safety by providing weather guidance for regional aviation. Therefore, improving the accuracy of precipitation nowcasting models is necessary.

As precipitation events evolve nonlinearly through dynamic processes, such as the growth or decay of precipitation fields, deep learning-based approaches have been widely employed for precipitation nowcasting. In particular, deep learning models based on the U-Net (e.g., Ronneberger et al. 2015) convolutional neural network (CNN) have been proposed in various studies (e.g., Agrawal et al. 2019; Ayzel et al. 2020; Ko et al. 2022; Oh et al. 2023). The U-Net is an architecture for image-to-image translation; in precipitation nowcasting, this translation maps precipitation images at past time steps to a future precipitation image [see Chase et al. (2023) for more details about machine learning for operational meteorology]. The U-Net comprises an encoder and a decoder. The encoder downsamples the input image to capture context and extract features, whereas the decoder upsamples this information to generate an output. For precipitation nowcasting, the encoder receives multitemporal images and extracts features of the nonlinear evolution of precipitation events, and the decoder predicts the future image. For instance, Oh et al. (2023) used a U-Net-based model to predict moderate rainfall events (MREs; ≥1 mm h−1) and strong rainfall events (SREs; ≥10 mm h−1) with critical success indices (CSIs) of 0.6 and 0.4, respectively, at a 1-h lead time. However, the blurriness of U-Net outputs makes them less useful for forecasters (e.g., Ayzel et al. 2020; Ravuri et al. 2021). To preserve spatial detail, generative models (e.g., Goodfellow et al. 2014) have been used (e.g., Ravuri et al. 2021). A generative model learns the distribution of the training dataset and generates predictions from the trained distribution. While the spatial resolution of such models is significantly improved compared with that of U-Net-based models, both their accuracy and prediction times remain limited. To enhance prediction times and accuracy, several additional model architectures have been proposed. For instance, Espeholt et al. (2022) implemented convolutional long short-term memory and reported improved precipitation nowcasting at 0–12-h lead times. Furthermore, the accuracy and prediction times of generative models can be enhanced by rigorously considering the motion of precipitation fields through the continuity equation for fluids (Zhang et al. 2023).

Linear extrapolation of the motion of precipitation fields is another candidate for precipitation nowcasting and has been studied extensively. A simple approach is to exploit the Eulerian or Lagrangian persistence of radar precipitation images (e.g., Germann and Zawadzki 2002, 2004; Germann et al. 2006; Turner et al. 2004). In Eulerian persistence, the radar image is assumed to be frozen in time (i.e., at any time, the forecast obtained by Eulerian persistence is the same). It performs well at very short lead times; however, its performance decreases rapidly with increasing lead time because it neglects the motion of precipitation fields. Lagrangian persistence provides a forecast by advecting precipitation fields along the field of radar echo motion. To calculate this motion, the optical flow technique has widely been used to derive a vector field of motion between two radar images (e.g., Bridson 2008; Brox et al. 2004; Bowler et al. 2004; Seed et al. 2013; Bechini and Chandrasekar 2017; Ayzel et al. 2019; Pulkkinen et al. 2019). This simple approach has been used for operational precipitation nowcasting (e.g., Lee et al. 2010; Turner et al. 2004). For example, the Korea Meteorological Administration (KMA) provides forecasts with lead times of up to 6 h using the McGill algorithm for precipitation nowcasting by Lagrangian extrapolation (MAPLE) (Lee et al. 2010). However, radar echo tracking has limitations, such as failing to capture the nonlinear motion and evolution of precipitation fields.

Optical flow models have also been compared with the recently proposed U-Net model (Ayzel et al. 2020). Compared with the U-Net model, the optical flow model produces more realistic localized structures without blurring. Conversely, regarding forecast accuracy, the relative performance depends strongly on rainfall intensity: the optical flow model is less accurate than the U-Net model for MREs but provides more precise forecasts for SREs. Thus, a model based on optical flow has the potential to be competitive if it can predict nonlinear motions of precipitation and/or newly developed precipitation.

In this study, we aimed to enhance the performance of precipitation nowcasting based on the optical flow technique. To reduce the error of optical flow-based nowcasting, we formulated a linear regression model that linearly sums the nowcasting results obtained by multiple optical flow algorithms, using the various sparse and dense optical flow algorithms provided by the public OpenCV libraries. In particular, the optical flow fields produced by the various algorithms exhibit different spatial characteristics, such as flow speed, flow angle, and flow spatial scales (see section 2 for more details). The linear regression model extracts the features from the various optical flow models and minimizes the error between the nowcast and the ground truth. Additionally, using a U-Net-based network, we can extract features of nonlinear motion that cannot be captured by linear extrapolation with the optical flow field. Notably, video interpolation using optical flow and a deep learning network has been studied for capturing the nonlinear motion of video frames (e.g., Jiang et al. 2018), and such techniques have been adopted in atmospheric science; for example, future frame generation using geostationary satellite datasets for tracking cloud movement has previously been examined (e.g., Vandal and Nemani 2021; Seo et al. 2022). In our model, this capability of capturing nonlinear motion with a deep learning network is applied to track the nonlinear motion of precipitation fields.

The paper is organized as follows: section 2 summarizes the dataset used in this study, the various algorithms for optical flow estimation, and their characteristics. Section 3 describes the model structure, including the regression model for generating input data and the U-Net architecture for training the features of the nonlinear evolution of precipitation fields, and reports the prediction results of the deep neural network. Section 4 presents the discussion and summary of the findings.

2. Radar data and optical flow algorithms

a. Radar dataset and precipitation classification

Weather radars are useful for estimating instantaneous rain rates. Typical operating resolutions of weather radars are 1–5 min and 0.1–1 km for the X band, 5–10 min and 0.25–2 km for the C band, and 10–15 min and 1–4 km for the S band (e.g., Thorndahl et al. 2017). KMA operates S-band dual-polarization radars. In this study, we used the hybrid surface rainfall (HSR) radar reflectivity data produced by KMA. The HSR method synthesizes reflectivity at the hybrid surface, which is unaffected by ground clutter, beam blockage, nonmeteorological echoes, and the bright band (e.g., Kwon et al. 2015; Lyu et al. 2015, 2017). The spatial and temporal resolutions of the radar reflectivity data are 0.5 km and 10 min, respectively. Precipitation (R) was calculated from the radar reflectivity factor (Z) through the Z–R relation (Z = 148R^1.59), which was derived from two-dimensional video disdrometer observations (Kim et al. 2016) and is currently employed for operational purposes at the KMA weather radar center.
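For illustration, the Z–R inversion is a one-line computation. The sketch below assumes reflectivity supplied in dBZ and simply inverts Z = 148R^1.59; the function name is ours and not part of the operational KMA code.

```python
import numpy as np

def rain_rate_from_dbz(dbz):
    """Invert the Z-R relation Z = 148 * R**1.59 for rain rate R (mm/h).

    dbz: reflectivity in dBZ, converted first to the linear factor Z.
    """
    z = 10.0 ** (np.asarray(dbz) / 10.0)   # dBZ -> Z (mm^6 m^-3)
    return (z / 148.0) ** (1.0 / 1.59)
```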

The classification of heavy rainfall events of 30 mm h−1 or more in South Korea has been examined using neural network-based methods, such as the self-organizing map (SOM) (Jo et al. 2020). For instance, Jo et al. (2020) used 1221 heavy rainfall events that occurred in summer during a 13-yr period for classification. According to their findings, classification using the SOM method is consistent with the regional characteristics of heavy rainfall in South Korea. Heavy rainfall events over South Korea are concentrated toward the western half of the Korean Peninsula, owing to the mountain ranges in the eastern half of the peninsula and the eastward movement of heavy rainfall under the westerly jet. Such rainfall events can be further classified into events occurring in the central regions and those occurring in the southern regions of South Korea because of the structure of the mountain ranges in South Korea.

In this study, we follow the regional classification based on the aforementioned SOM method. The rainfall events can be classified into three types: 1) events in the central regions of South Korea (hereafter the Central case); 2) events in the southern regions of South Korea (hereafter the Southern case); and 3) isolated rainfall events (hereafter the Isolated case), which can occur throughout South Korea and whose spatial scales, according to Fourier analysis, are typically in the range of 10–100 km. In addition to these three types, we considered heavy rainfall events occurring in the vicinity of Jeju Island, south of the South Korean mainland (hereafter the Jeju case). Hence, the heavy rainfall events were categorized into four types (i.e., the Central, Southern, Isolated, and Jeju cases).

Figure 1 shows examples of precipitation images for the four precipitation cases. The Central case is mainly influenced by flow from west to east, whereas the Southern and Jeju cases are affected by flows from south to north and from west to east. Heavy rainfall passing over the mainland accounts for a significant portion of the Central and Southern cases, whereas in the Jeju case, heavy rainfall mainly passes through coastal areas, rendering precipitation generation by the inflow of water vapor more prominent than in the Central and Southern cases. Rainfall events in local regions are categorized separately as the Isolated case; these localized rainfalls grow and decay rapidly compared with the other precipitation types.

Fig. 1. Precipitation maps (mm h−1) for the four precipitation types. Data are obtained from 2000 UTC 28 Jul 2020 (Central case), 1800 UTC 31 Jul 2020 (Isolated case), 1500 UTC 7 Aug 2020 (Southern case), and 0500 UTC 27 Jul 2020 (Jeju case). While heavy rainfall pixels with 30 mm h−1 or more are represented in red, the maximum rainfall intensity for each case is as follows: Central case: 70 mm h−1, Isolated case: 103 mm h−1, Southern case: 64 mm h−1, and Jeju case: 86 mm h−1.

b. Optical flow estimation

Optical flow provides a flow field representing the motion of pixels in two consecutive image frames taken at times t and t + Δt. Under the assumption that the pixel intensities of an object remain constant, the pixel intensity I(x + Δx, y + Δy, t + Δt) can be expressed using a Taylor series:
$I(x + \Delta x,\, y + \Delta y,\, t + \Delta t) - I(x, y, t) \approx \frac{\partial I}{\partial x}\Delta x + \frac{\partial I}{\partial y}\Delta y + \frac{\partial I}{\partial t}\Delta t, \quad (1)$
when ignoring high-order terms. Dividing the above equation by Δt, we get
$\frac{I(x + \Delta x,\, y + \Delta y,\, t + \Delta t) - I(x, y, t)}{\Delta t} \approx \frac{\partial I}{\partial x}u + \frac{\partial I}{\partial y}\upsilon + \frac{\partial I}{\partial t} \approx 0, \quad (2)$
where u and υ represent the components of the flow vector field. As the two unknown variables (u, υ) cannot be determined from a single equation, several methods have been developed to solve it. Optical flow algorithms can be classified into two types: sparse and dense methods. Sparse optical flow computes motion vectors for a specific set of features in the image, whereas dense methods provide a motion field for every pixel. The sparse-to-dense approach yields a motion field for every pixel by interpolating the sparse algorithm output over the entire image. In the following, we summarize the dense optical flow algorithms used in this study.

1) Dense pyramid Lucas–Kanade algorithm

The Lucas–Kanade (LK) method (Lucas and Kanade 1981) calculates optical flow based on a local motion constancy, where nearby pixels have the same displacement direction. However, the LK method has a limitation when large-scale motion dominates the image. To resolve this problem, the pyramid Lucas–Kanade (PLK) method (Bouguet 2000) iteratively runs LK using images with different sizes produced by the original image, capturing relatively larger-scale motions than those captured by LK. A dense optical flow field can be obtained using the sparse-to-dense method.

2) Robust local optical flow algorithm

The robust local optical flow (RLOF) method (Senst et al. 2012) computes sparse optical flow by considering illumination changes, particularly those in radar images that occur when precipitation develops or disappears. RLOF might contribute to capturing the development or disappearance of precipitation in rainfall forecasts using optical flow.

3) Optical flow algorithm through principal component analysis (PCA-flow)

In this algorithm, the dense optical flow field is assumed to be a weighted sum over a relatively small number of basis flow fields (Wulff and Black 2015). The sparse vectors are first computed, and the coefficients for the dense optical flow field are obtained by regression using the sparse feature matches.

4) Farnebäck’s algorithm

Farnebäck’s method (Farnebäck 2003) approximates the windows of image frames using quadratic polynomials. Subsequently, the displacement field between two local intensities can be defined by the coefficients of the polynomials. The optical flow calculated through this algorithm is the flow of a slowly fluctuating displacement field.

5) Total variation-L1 algorithm

The total variation-L1 algorithm (Wedel et al. 2009) determines the optical flow by minimizing the regularization force that includes two terms. The first term is the optical flow constraint, which assumes constant brightness during motion, and the second term represents the smoothness of the displacement fields.

6) Deepflow algorithm

The Deepflow algorithm (Weinzaepfel et al. 2013) calculates the optical flow by minimizing the regularization force that includes three terms. The first term (data term) is a constraint that assumes constant brightness and gradient. The second term (smoothness term) represents the smoothness of the displacement fields, and the third term (matching term) pertains to the difference between the vector and precomputed vector fields.
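For reference, all six dense flow fields listed above can be produced with the OpenCV library used in this study (section 2c). The sketch below assumes the contrib build (opencv-contrib-python), which provides the cv2.optflow module; factory names can differ slightly between OpenCV versions, and the Farnebäck parameters shown are illustrative defaults rather than tuned values.

```python
import cv2

def dense_flows(prev_img, next_img):
    """Estimate dense optical flow (H, W, 2) with several OpenCV algorithms.

    prev_img, next_img: 8-bit single-channel radar images.
    Requires opencv-contrib-python for the cv2.optflow module.
    """
    flows = {}
    # Farneback (built into core OpenCV); parameters are illustrative defaults
    flows["farneback"] = cv2.calcOpticalFlowFarneback(
        prev_img, next_img, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    # TV-L1, DeepFlow, and PCA-flow via contrib factory functions
    for name, factory in [("tvl1", cv2.optflow.createOptFlow_DualTVL1),
                          ("deepflow", cv2.optflow.createOptFlow_DeepFlow),
                          ("pcaflow", cv2.optflow.createOptFlow_PCAFlow)]:
        flows[name] = factory().calc(prev_img, next_img, None)
    # Sparse-to-dense pyramid Lucas-Kanade
    flows["plk"] = cv2.optflow.calcOpticalFlowSparseToDense(prev_img, next_img)
    # Dense RLOF expects 8-bit 3-channel input
    prev_rgb = cv2.cvtColor(prev_img, cv2.COLOR_GRAY2BGR)
    next_rgb = cv2.cvtColor(next_img, cv2.COLOR_GRAY2BGR)
    flows["rlof"] = cv2.optflow.calcOpticalFlowDenseRLOF(prev_rgb, next_rgb, None)
    return flows
```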

c. Comparison of optical flow algorithms

Depending on the algorithm used to compute optical flow, the resulting flow field may have different properties regarding multispatial-scale motions in terms of flow speed and flow angle. To understand the characteristics of the aforementioned optical flow algorithms, we investigated the effects of these differences on precipitation predictions. For our analysis, we employed the OpenCV library (Bradski 2000) to estimate optical flow.

Figure 2 shows the optical flow fields estimated by the algorithms listed in section 2b for the Jeju case as an example. The flow motions corresponding to precipitation are satisfactorily captured by all algorithms, but the fine structure depends on the algorithm used. For example, the optical flow field produced by the TV-L1 algorithm best captures small-scale fluctuations, whereas the field produced by Deepflow mainly includes the bulk translation motions.

Fig. 2. (top) Two precipitation frames at a 10-min interval. Here, the case at 0500 UTC 27 Jul 2020 is chosen as a representative case. (middle),(bottom) The optical flow speed, $V_{\mathrm{opt}} = \sqrt{U^2 + V^2}$ (m s−1), estimated by six different algorithms.

To examine statistics of quantities such as the flow speed and angle, which represent the spatial characteristics of the flow field, we first transformed the optical flow field from a Cartesian coordinate system to a polar coordinate system. In the Cartesian system, the optical flow field is represented by two flow components (U, V), whereas in the polar system it is represented by the flow speed and angle (V_opt, θ_opt). Figure 3a shows the polar coordinate system used in this study. We then computed the probability distribution function (PDF) of the flow speed and angle. Additionally, we calculated the radially averaged power spectral density (PSD) to analyze the scale of flow motions. Here, the PSD of optical flow, P(k) ≡ P_U(k) + P_V(k), is the sum of the PSDs of the two Cartesian flow components U and V, P_U(k) and P_V(k), defined as follows:
$\frac{1}{2}\langle U^2 \rangle = \int P_U(k)\, 2\pi k\, dk,$
$\frac{1}{2}\langle V^2 \rangle = \int P_V(k)\, 2\pi k\, dk,$
where $k = \sqrt{k_x^2 + k_y^2}$ represents the radial wavenumber.
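A minimal sketch of the two diagnostics used here, assuming NumPy and the 0.5-km HSR pixel spacing: the polar conversion of (U, V) into (V_opt, θ_opt) and a radially averaged PSD. Binning and normalization conventions vary, so this is illustrative rather than the exact analysis code.

```python
import numpy as np

def flow_polar(U, V):
    """Convert Cartesian flow components to speed and angle (degrees)."""
    speed = np.hypot(U, V)
    angle = np.degrees(np.arctan2(V, U))   # 0 deg = east, 90 deg = north
    return speed, angle

def radial_psd(field, dx=0.5):
    """Radially averaged power spectral density of a 2D field.

    dx: pixel spacing in km (0.5 km for the HSR grid). Returns bin-center
    radial wavenumbers and the mean spectral power in each annulus.
    """
    ny, nx = field.shape
    F = np.fft.fftshift(np.fft.fft2(field))
    psd2d = np.abs(F) ** 2 / (nx * ny)
    kx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
    ky = np.fft.fftshift(np.fft.fftfreq(ny, d=dx))
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    k_bins = np.linspace(0.0, k.max(), min(nx, ny) // 2)
    idx = np.digitize(k.ravel(), k_bins)
    psd1d = np.array([psd2d.ravel()[idx == i].mean() if np.any(idx == i) else 0.0
                      for i in range(1, len(k_bins))])
    return 0.5 * (k_bins[:-1] + k_bins[1:]), psd1d

# P(k) = P_U(k) + P_V(k), as in the definition above:
# k, PU = radial_psd(U); _, PV = radial_psd(V); Pk = PU + PV
```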
Fig. 3. (a) The polar coordinate system for the optical flow field. Here, flow angles θ_opt = 0° and 180° correspond to the east and west directions, while θ_opt = −90° and 90° correspond to the south and north directions, respectively. (b) The probability distribution function (PDF) of flow speed. (c) The PDF of flow angle. (d) The power spectral density (PSD) P(k) of flow speed.

The statistical comparisons for all algorithms are shown in Fig. 3, again using the Jeju case of Fig. 2 as an example. Several points were noted. 1) According to the Deepflow PDFs of flow speed (Fig. 3b) and flow angle (Fig. 3c), the flow speed is smaller and the flow angle range is narrower than in the other algorithms. Such a smooth flow field is beneficial for tracking the bulk motions toward the east (θ_opt ∼ [−30°, 0°]) and toward the northeast (θ_opt > 30°). Moreover, according to the PSD, motions on scales 2π/k < 25 km are poorly captured by Deepflow because it includes more regularization terms than the other algorithms; for instance, compared with the TV-L1 algorithm, Deepflow additionally assumes that the pixel intensity gradient is constant. 2) The PDFs of Farnebäck's method for both flow speed and angle are broader than those of the other algorithms, indicating its ability to capture turbulent flow motions present on multiple spatial scales. Additionally, the PSD of Farnebäck's method shows a cutoff scale of 2π/k ∼ 10 km because of the size of the window used for the polynomial expansion of the image intensity I when solving Eq. (2). 3) The PSD of TV-L1 is flatter than those of the other algorithms, enabling it to capture small-scale motions, 2π/k < ∼10 km. 4) Although the ranges of flow speed and angle captured by PCA-flow, RLOF, and dense PLK differ, the motion scales captured by these algorithms are similar, as indicated by their PSDs.

Based on this structural analysis through the PSD of the optical flow fields, we found that the flow properties on multiple spatial scales depend strongly on the detailed scheme of each algorithm. In the following section, we describe a regression model that tracks the multispatial features more accurately.

3. Precipitation nowcasting using multiple linear regression based on multiple optical flow algorithms and a U-Net convolutional neural network

In this section, we describe the precipitation nowcasting model, including 1) multiple linear regression to reduce the error between the linearly extrapolated future frame and the ground truth, and 2) a U-Net-based network for capturing nonlinear features that cannot be captured by linear extrapolation using optical flow. An overview of the model structure is illustrated in Fig. 4.

Fig. 4. An overview of the model for precipitation nowcasting. The model comprises a regression model and a U-Net-based network. The regression model minimizes the error between the linearly extrapolated future frame and the ground truth. The U-Net-based network trains on time-sequence images and extracts the multitemporal features. The structure of the adopted U-Net is depicted in Fig. 6.

a. Multiple linear regression

As demonstrated by the statistics shown in Fig. 3, frame warping through optical flow can depend heavily on the choice of optical flow algorithm. To capture the evolution of multispatial and multitemporal motions more precisely, we propose a linear regression-based method that determines the relative significance of the optical flow algorithms listed in section 2, thereby enabling improved forecasts via optical flow warping.

The regression stage can be summarized in three steps: 1) calculation of the optical flow fields (V_opt,m1, V_opt,m2, …, V_opt,m6) and production of the future frames (I_f,m1, I_f,m2, …, I_f,m6) using the optical flow algorithms listed in section 2b; 2) linear regression using the gradient descent method; and 3) frame interpolation using the resulting coefficients.

Using the optical flow field $V_{\mathrm{opt}}(\cdot,\cdot)$, the future frame $I_f$ can be obtained through linear extrapolation:
$I_f = g[I_t, V_{\mathrm{opt}}(t, t - \Delta t)],$
where g(⋅) is a backward warping function. The unit of t is minutes, and Δt = 10 min throughout this paper.
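One common way to realize a backward warping function g(⋅) is cv2.remap, as sketched below. This is a minimal sketch of one warping convention, assuming the flow points from frame t − Δt to frame t, so the output pixel at (x, y) samples the input at (x − u, y − v); the sign convention depends on how the flow is defined.

```python
import cv2
import numpy as np

def backward_warp(frame, flow):
    """Backward-warp `frame` along `flow` (H, W, 2) to extrapolate one step.

    Each output pixel (x, y) samples the input at (x - u, y - v), i.e.,
    pixels are advected forward by the flow.
    """
    h, w = frame.shape[:2]
    xx, yy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xx - flow[..., 0]).astype(np.float32)
    map_y = (yy - flow[..., 1]).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)
```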
For the linear regression, the blended future frame $\tilde{I}_f$ composed of I_f,m1, I_f,m2, …, I_f,m6 and the cost function $J_{\mathrm{reg}}$ are defined as follows:
$\tilde{I}_f = \sum_{j \in \{m1, m2, \ldots, m6\}} \omega_j I_{f,j},$
$J_{\mathrm{reg}} = \frac{1}{2N} \sum_{j=1}^{N} [I_{\mathrm{gt},f}(x_j, y_j) - \tilde{I}_f(x_j, y_j)]^2.$
Here, N is the number of pixels and Igt,f represents the ground truth at t. The coefficient ωj is iteratively updated through the gradient descent method:
$\omega_j = \omega_j - \alpha \frac{\partial J_{\mathrm{reg}}}{\partial \omega_j},$
where α is a free parameter.
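The regression stage reduces to a handful of NumPy operations. The sketch below evaluates the gradient of J_reg analytically and starts from equal weights, as in Fig. 5; the learning rate and epoch count shown are illustrative stand-ins for the free parameter α and the stopping criterion.

```python
import numpy as np

def fit_weights(frames, target, alpha=1e-3, epochs=500):
    """Fit blend weights w minimizing J = (1/2N) * sum (target - sum_j w_j * frames_j)^2.

    frames: array (M, H, W) of warped future frames from M flow algorithms.
    target: array (H, W), the ground-truth frame I_gt,f.
    """
    M = frames.shape[0]
    N = target.size
    w = np.full(M, 1.0 / M)            # equal initial weights, as in Fig. 5
    F = frames.reshape(M, -1)          # (M, N)
    y = target.ravel()                 # (N,)
    for _ in range(epochs):
        resid = F.T @ w - y            # blended frame minus ground truth
        grad = F @ resid / N           # dJ/dw_j = (1/N) * sum_i resid_i * F_ji
        w -= alpha * grad              # gradient descent update
    return w

# The regressed frame is then the weighted sum:
# I_f_tilde = np.tensordot(w, frames, axes=1)
```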

The time evolution of the coefficients ω_j is displayed in Fig. 5. Notably, the number of epochs is the number of steps required to minimize the error J_reg, and the initial coefficients are the same for all algorithms: ω_PLK,0 = ω_TV−L1,0 = ω_Deepflow,0 = ω_PCA,0 = ω_Farneback,0 = ω_RLOF,0. In particular, for all cases, the optical flow fields that mainly describe the bulk motion (Deepflow) and those spanning a wide range of speed and angle (Farnebäck's method) receive larger weighting factors than the other algorithms. Moreover, the ratio of weighting factors between PCA-flow (or TV-L1) and RLOF depends on the precipitation case, and these algorithms contribute to describing the warping produced by small-scale motions at 2π/k ≲ 10 km, which are lacking in the optical flow fields produced by Deepflow and Farnebäck's method.

Fig. 5. The time evolution of the coefficients ω_j for the four different precipitation types shown in Fig. 1.

b. Structure of the deep learning network

Training nonlinear movement through a deep learning-based model with a multitemporal input layer has recently been proposed (e.g., Seo et al. 2022). In this study, we followed this approach and adopted a deep learning model based on U-Net to train nonlinear motion in precipitation events. The model consists of a contracting path (encoding stage) and an expanding path (decoding stage), as illustrated in Fig. 6; the contracting and expanding paths include four downsampling and four upsampling steps, respectively. We stacked radar images at three time steps, $I_{t-10}$, $I_t$, and $\tilde{I}_f$, to construct input data containing multitemporal features. The loss function is defined as $l_r = \|I_{\mathrm{gt},f} - I_{\mathrm{out}}\|_1$, where $I_{\mathrm{out}}$ and $I_{\mathrm{gt},f}$ are the output of the deep learning model and the ground truth measured at the time step of $I_f$, respectively. With this loss function, the deep learning model restores the input features to the ground truth $I_{\mathrm{gt},f}$.
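A minimal PyTorch training step consistent with this setup might look as follows, assuming `unet` is any module mapping three input channels to one output channel as in Fig. 6; the function and variable names are ours, not the authors' code.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()   # realizes l_r = ||I_gt,f - I_out||_1

def train_step(unet, optimizer, frame_tm10, frame_t, frame_reg, frame_gt):
    """One step: stack (I_{t-10}, I_t, I~_f) as channels, regress to I_gt,f.

    frame_*: (B, H, W) tensors; frame_reg is the regression-blended frame.
    """
    x = torch.stack([frame_tm10, frame_t, frame_reg], dim=1)  # (B, 3, 960, 960)
    out = unet(x)                                             # (B, 1, 960, 960)
    loss = l1(out, frame_gt.unsqueeze(1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```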

Fig. 6. Configuration of the U-Net model used in this study. The model contains four downsamplings and four upsamplings. Notably, the input image is 960 × 960 pixels with three channels (i.e., three time steps: t − 10, t, and t + 10), and the output image is 960 × 960 pixels with one channel (i.e., one time step: t + 10).

We trained and evaluated the U-Net-based model on precipitation images to assess its future frame prediction performance. The HSR dataset from 2018, 2019, and 2021 was used for training, and the data from 2022 and 2020 were used as the validation and test datasets, respectively (the total sample numbers of the training, validation, and test datasets are 157 680, 52 560, and 52 560, respectively). For augmentation, we first cropped the central region of the initial image to obtain HSR data covering 480 km × 480 km (960 × 960 pixels) and then applied random horizontal flip, random vertical flip, and random 90° rotation. All experiments were conducted on eight NVIDIA A100 GPUs, and training the model took roughly 120 h. To set the hyperparameters for training, we conducted a set of test experiments within a parameter range, i.e., initial learning rate = 10^−5–10^−4 and batch size = 8–64. Adaptive moment estimation (Adam) was employed as the optimizer, and the StepLR learning rate scheduler was used for efficient training. The dependence on the StepLR parameters, a step size of 10–30 epochs and a gamma of 0.1–0.5, was also examined (i.e., the learning rate is multiplied by gamma every step size epochs). We evaluated the learning curves (loss versus epoch) of the training and validation sets to prevent overfitting. The best hyperparameter set, in terms of the lowest validation loss l_r, is as follows: number of epochs = 100, initial learning rate = 10^−4, batch size = 8, step size = 15 epochs, and gamma = 0.5.
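In PyTorch terms, the reported optimizer and scheduler configuration corresponds to the sketch below; the single convolution is only a stand-in so the snippet runs, and the real model is the U-Net of Fig. 6.

```python
import torch
import torch.nn as nn

unet = nn.Conv2d(3, 1, kernel_size=3, padding=1)  # stand-in for the real U-Net
optimizer = torch.optim.Adam(unet.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.5)

for epoch in range(100):
    # ... one pass over the training set (batch size 8) goes here ...
    scheduler.step()  # learning rate is multiplied by 0.5 every 15 epochs
```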

Note that the model produces future predictions recursively. For instance, the model receives the images at −10, 0, and +10 min as input to predict +20 min; it then receives the images at 0, +10, and +20 min as input to predict +30 min. By repeating this process, we obtained nowcasting outputs at 0–3-h lead times.
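One reading of this recursion is sketched below, assuming each frame is a (B, 1, H, W) tensor and `unet` is the trained model; 18 steps of 10 min cover the 3-h lead time.

```python
import torch

def rollout(unet, frames, steps=18):
    """Recursively predict +10-min frames from the last three available frames.

    frames: list of three (B, 1, H, W) tensors, e.g., [I_{t-10}, I_t, I~_f].
    Returns the list of predicted frames at +20, +30, ... min.
    """
    history = list(frames)
    preds = []
    with torch.no_grad():
        for _ in range(steps):
            x = torch.cat(history[-3:], dim=1)   # (B, 3, H, W) input stack
            nxt = unet(x)                        # (B, 1, H, W) next frame
            preds.append(nxt)
            history.append(nxt)
    return preds
```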

c. Results

To evaluate the errors in rainfall prediction, the root-mean-squared error (RMSE) values were estimated using the following equation:
$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{j=1}^{N} [I_{\mathrm{gt},f}(x_j, y_j) - I_{\mathrm{out}}(x_j, y_j)]^2}.$
To examine the accuracy of precipitation, we estimated the critical success index (CSI), probability of detection (POD), and false alarm rate (FAR). The CSI measures the fraction of correctly forecasted rainfall events to the total observed events, except for correct negatives:
$\mathrm{CSI} = \frac{\mathrm{hits}}{\mathrm{hits} + \mathrm{false\ alarms} + \mathrm{misses}}.$
The POD is the fraction of correctly forecasted rainfall events to the observed events:
$\mathrm{POD} = \frac{\mathrm{hits}}{\mathrm{hits} + \mathrm{misses}}.$
The FAR indicates the fraction of incorrectly forecasted rainfall events to all forecasted rainfall events:
$\mathrm{FAR} = \frac{\mathrm{false\ alarms}}{\mathrm{hits} + \mathrm{false\ alarms}}.$
The perfect score for CSI and POD is 1, and both range from 0 to 1. FAR also ranges from 0 to 1, with a perfect score of 0. All of the above scores were evaluated for two classes of rainfall events (Oh et al. 2023): moderate rainfall events (MREs; ≥1 mm h−1) and strong rainfall events (SREs; ≥10 mm h−1).
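The three categorical scores can be computed pixelwise from a rain-rate threshold, as in the minimal NumPy sketch below (with the MRE and SRE thresholds of 1 and 10 mm h−1); in practice one should also guard against empty categories.

```python
import numpy as np

def contingency_scores(pred, obs, threshold):
    """CSI, POD, and FAR for a rain/no-rain threshold (mm/h), pixelwise.

    Note: divisions fail if a denominator is zero (no events at all).
    """
    hits = np.sum((pred >= threshold) & (obs >= threshold))
    misses = np.sum((pred < threshold) & (obs >= threshold))
    false_alarms = np.sum((pred >= threshold) & (obs < threshold))
    csi = hits / (hits + misses + false_alarms)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return csi, pod, far

# MRE and SRE scores as in the text:
# csi_mre, pod_mre, far_mre = contingency_scores(pred, obs, 1.0)
# csi_sre, pod_sre, far_sre = contingency_scores(pred, obs, 10.0)
```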

We first compared the performance of the models with and without the U-Net. According to the RMSE and CSI values for the whole test dataset shown in Fig. 7, the U-Net contributes significantly to enhancing model performance at 0–3-h lead times. We then compared the errors of the deep learning model that includes multiple linear regression with those of models using only a single optical flow algorithm. Hereafter, the "regression model" denotes the deep learning model using multiple linear regression and the U-Net, whereas a deep learning model using a single optical flow algorithm and the U-Net is referred to as a "single model."

Fig. 7. (left) The root-mean-squared error (RMSE) and (right) critical success index (CSI) values at 0–3-h lead times for the whole test dataset. Here, black and red lines indicate the models with and without U-Net, respectively.

In Fig. 8, we compare the RMSE values measured for the regression model and the single models adopting the six different algorithms. Ten samples were randomly chosen for each precipitation type to evaluate performance (40 samples in total). We confirmed that the improvement lasts up to a 3-h lead time: the regression model outperforms all the single-algorithm models. Compared with the results obtained by Deepflow, those of the regression model are better in terms of RMSE owing to the contribution of small-scale flow motions at 2π/k ≲ 10 km. By extracting the features that minimize the error between the nowcasting results and the ground truth, the regression model can extend the nowcasting time scale relative to a single optical flow algorithm.

Fig. 8. The root-mean-squared error (RMSE) values at 0–3-h lead times for the four precipitation types. Ten samples were used for each type.

Subsequently, we report the precipitation nowcasting results, focusing on the differences in performance among the precipitation types. Figure 9 illustrates examples of nowcasting outputs at a 1.5-h lead time. Optical flow techniques provide accurate predictions for precipitation passing over the mainland owing to their ability to capture the motion vectors of precipitation fields. The Central and Southern cases represent around 50% of all precipitation types in South Korea (Jo et al. 2020), indicating that the regression model proposed in this study will be highly useful for precipitation nowcasting during the heavy rainfall period from June to September on the Korean Peninsula. Conversely, for the Jeju and Isolated cases, the development or decay of precipitation cannot be captured, resulting in poor accuracy. For instance, in the Jeju case, heavy rainfall occurred on Jeju Island, but the nowcast placed the heavy rainfall outside the island.

Fig. 9. Examples of the output of the nowcasting model for the four different precipitation types (mm h−1) shown in Fig. 1 at a 1.5-h lead time.

Figure 10 displays the CSI, POD, and FAR values of the four precipitation types for MREs (left panels) and SREs (right panels). As specified previously, each precipitation type includes 10 samples. A few points were noted. 1) For SREs, the performance for the Central and Southern cases is generally better than that for the Jeju case: their CSI and POD are higher. The inflow of water vapor significantly affects precipitation development in the Jeju case, but such effects cannot be represented by advecting the precipitation field of the previous time step with an optical flow field, so the nowcasts for the Jeju case are generally underestimated. Notably, the FAR of the Jeju case at 2–3-h lead times is smaller than that of the Central and Southern cases. 2) For MREs, the CSI score of the Jeju case is slightly higher than those of the other types because of the underestimation of precipitation (i.e., the smallest FAR score). 3) The performance for the Isolated case, measured by all scores, is poorer than that for the other types. This is partially because pointwise verification yields higher FAR and lower POD values when evaluating precipitation fields with smaller spatial scales. Moreover, the localized precipitation fields undergo rapid development or decay, making them difficult to track using flow motion alone.

Fig. 10. The CSI, POD, and FAR values for moderate rainfall events (MREs; ≥1 mm h−1) and strong rainfall events (SREs; ≥10 mm h−1) at 0–3-h lead times. The results for (left) MREs and (right) SREs.

4. Summary and discussion

In this study, we developed a regression model using multiple optical flow algorithms to improve the performance of precipitation nowcasting based on the optical flow technique. We first compared the results produced by currently available algorithms and found that the PDF and PSD of the optical flow field strongly depend on the selected algorithm. In particular, motions on scales 2π/k ≲ 25 km are affected by the regularization scheme of each algorithm, as indicated by the PSD. We then proposed a deep learning model comprising 1) a regression model and 2) a U-Net-based network. The regression model extracts the features of multispatial motions from multiple optical flow algorithms to reduce the error between the precipitation nowcast and the ground truth, while the U-Net-based network trains the multitemporal features of precipitation movement. Notably, the regression model minimizes the error between the nowcast and the ground truth and outperforms any single optical flow algorithm at 0–3-h lead times. Hence, the prediction time scale for precipitation nowcasting can be extended through the regression model proposed in this study.

We further discuss the feasibility of precipitation nowcasting using optical flow compared with the recently reported U-Net-based CNN model (e.g., Oh et al. 2023). In particular, for strong rainfall events (≥10 mm h−1), the CSI score of the optical-flow-based regression model at 0–3-h lead times is ∼0.2, comparable with the results obtained by the U-Net-based CNN model trained on a Korean Peninsula dataset (Oh et al. 2023). Moreover, the nowcasts produced by the optical-flow-based regression model are less blurry than U-Net-produced nowcasts: according to the PSD of the nowcasting outputs, the smallest scale resolved by the optical-flow-based regression model is approximately 5 km, whereas the effective resolution of the U-Net model is of order O(10) km (e.g., Ayzel et al. 2020; Ravuri et al. 2021).

Here, we summarize the limitations and future work. Although the proposed model captures nonlinear motions through a deep learning network, resulting in more accurate motion fields of rainfall regions, it still slightly overestimates the motions of precipitation fields for rapidly developing and/or dissipating precipitation. To overcome this limitation, we will refine the proposed model in subsequent studies and present a better model by examining the characteristics of the multitemporal motions present in precipitation fields.

Acknowledgments.

The authors thank the anonymous referees for their constructive comments and suggestions that substantially improved the manuscript. This work was funded by the KMA Research and Development program “Developing AI technology for weather forecasting” under Grant KMA 2021-00121.

Data availability statement.

The radar data are freely available at the released website of the Korea Meteorological Administration (KMA) data (https://data.kma.go.kr/cmmn/main.do). The source code can be obtained from the GitHub repository (https://github.com/JHHa223/Optflow_code/tree/main).

REFERENCES

  • Agrawal, S., L. Barrington, C. Bromberg, J. Burge, C. Gazen, and J. Hickey, 2019: Machine learning for precipitation nowcasting from radar images. arXiv, 1912.12132v1, https://arxiv.org/abs/1912.12132.

  • Ayzel, G., M. Heistermann, and T. Winterrath, 2019: Optical flow models as an open benchmark for radar-based precipitation nowcasting (rainymotion v0.1). Geosci. Model Dev., 12, 1387–1402, https://doi.org/10.5194/gmd-12-1387-2019.

  • Ayzel, G., T. Scheffer, and M. Heistermann, 2020: RainNet v1.0: A convolutional neural network for radar-based precipitation nowcasting. Geosci. Model Dev., 13, 2631–2644, https://doi.org/10.5194/gmd-13-2631-2020.

  • Bechini, R., and V. Chandrasekar, 2017: An enhanced optical flow technique for radar nowcasting of precipitation and winds. J. Atmos. Oceanic Technol., 34, 2637–2658, https://doi.org/10.1175/JTECH-D-17-0110.1.

  • Bouguet, J.-Y., 2000: Pyramidal implementation of the Affine Lucas Kanade feature tracker description of the algorithm. Intel Corporation Microprocessor Research Lab Tech. Rep., 10 pp., http://robots.stanford.edu/cs223b04/algo_affine_tracking.pdf.

  • Bowler, N. E. H., C. E. Pierce, and A. Seed, 2004: Development of a precipitation nowcasting algorithm based upon optical flow techniques. J. Hydrol., 288, 74–91, https://doi.org/10.1016/j.jhydrol.2003.11.011.

  • Bradski, G., 2000: The OpenCV library. Dr. Dobb's J. Software Tools, 25, 120–123.

  • Bridson, R., 2008: Fluid Simulation for Computer Graphics. Taylor & Francis, 246 pp.

  • Brox, T., A. Bruhn, N. Papenberg, and J. Weickert, 2004: High accuracy optical flow estimation based on a theory for warping. Computer Vision—ECCV 2004, T. Pajdla and J. Matas, Eds., Lecture Notes in Computer Science, Vol. 3024, Springer, 25–36.

  • Chase, R. J., D. R. Harrison, G. M. Lackmann, and A. McGovern, 2023: A machine learning tutorial for operational meteorology. Part II: Neural networks and deep learning. Wea. Forecasting, 38, 1271–1293, https://doi.org/10.1175/WAF-D-22-0187.1.

  • Espeholt, L., and Coauthors, 2022: Deep learning for twelve hour precipitation forecasts. Nat. Commun., 13, 5145, https://doi.org/10.1038/s41467-022-32483-x.

  • Farnebäck, G., 2003: Two-frame motion estimation based on polynomial expansion. Image Analysis SCIA 2003, J. Bigun and T. Gustavsson, Eds., Lecture Notes in Computer Science, Vol. 2749, Springer, 363–370, https://doi.org/10.1007/3-540-45103-X_50.

  • Germann, U., and I. Zawadzki, 2002: Scale-dependence of the predictability of precipitation from continental radar images. Part I: Description of the methodology. Mon. Wea. Rev., 130, 2859–2873, https://doi.org/10.1175/1520-0493(2002)130<2859:SDOTPO>2.0.CO;2.

  • Germann, U., and I. Zawadzki, 2004: Scale dependence of the predictability of precipitation from continental radar images. Part II: Probability forecasts. J. Appl. Meteor., 43, 74–89, https://doi.org/10.1175/1520-0450(2004)043<0074:SDOTPO>2.0.CO;2.

  • Germann, U., I. Zawadzki, and B. Turner, 2006: Predictability of precipitation from continental radar images. Part IV: Limits to prediction. J. Atmos. Sci., 63, 2092–2108, https://doi.org/10.1175/JAS3735.1.

  • Goodfellow, I. J., J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, 2014: Generative adversarial nets. Advances in Neural Information Processing Systems, Z. Ghahramani, Ed., Vol. 27, Curran Associates, Inc., 2672–2680.

  • Jiang, H., D. Sun, V. Jampani, M.-H. Yang, E. Learned-Miller, and J. Kautz, 2018: Super SloMo: High quality estimation of multiple intermediate frames for video interpolation. 2018 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Salt Lake City, UT, Institute of Electrical and Electronics Engineers, 9000–9008, https://doi.org/10.1109/CVPR.2018.00938.

  • Jo, E., C. Park, S.-W. Son, J.-W. Roh, G.-W. Lee, and Y.-H. Lee, 2020: Classification of localized heavy rainfall events in South Korea. Asia-Pac. J. Atmos. Sci., 56, 77–88, https://doi.org/10.1007/s13143-019-00128-7.

  • Kim, H.-L., M.-K. Suk, H.-S. Park, G.-W. Lee, and J.-S. Ko, 2016: Dual-polarization radar rainfall estimation in Korea according to raindrop shapes obtained by using a 2-D video disdrometer. Atmos. Meas. Tech., 9, 3863–3878, https://doi.org/10.5194/amt-9-3863-2016.

  • KMA, 2018: Meteorological disaster statistics. Accessed 1 September 2021, http://www.weather.go.kr/weather/lifenindustry/disaster_02.jsp.

  • KMA, 2021: Abnormal Climate Report 2020 (in Korean). KMA Tech. Rep., 212 pp., http://www.climate.go.kr/home/bbs/view.php?code=93&bname=abnormal&vcode=6494.

  • Ko, J., K. Lee, H. Hwang, S.-G. Oh, S.-W. Son, and K. Shin, 2022: Effective training strategies for deep-learning-based precipitation nowcasting and estimation. Comput. Geosci., 161, 105072, https://doi.org/10.1016/j.cageo.2022.105072.

  • Kwon, S., S.-H. Jung, and G. Lee, 2015: Inter-comparison of radar rainfall rate using constant altitude plan position indicator and hybrid surface rainfall maps. J. Hydrol., 531, 234–247, https://doi.org/10.1016/j.jhydrol.2015.08.063.

  • Lee, H. C., Y. H. Lee, J.-C. Ha, D.-E. Chang, A. Bellon, I. Zawadzki, and G. Lee, 2010: McGill Algorithm for Precipitation nowcasting by Lagrangian Extrapolation (MAPLE) applied to the South Korean radar network. Part II: Real-time verification for the summer season. Asia-Pac. J. Atmos. Sci., 46, 383–391, https://doi.org/10.1007/s13143-010-1009-9.

  • Lucas, B. D., and T. Kanade, 1981: An iterative image registration technique with an application to stereo vision. Proc. Seventh Int. Joint Conf. on Artificial Intelligence (IJCAI), Vancouver, British Columbia, Canada, 674–679, https://www.ri.cmu.edu/pub_files/pub3/lucas_bruce_d_1981_1/lucas_bruce_d_1981_1.pdf.

  • Lyu, G., S.-H. Jung, K.-Y. Nam, S. Kwon, C.-R. Lee, and G. Lee, 2015: Improvement of radar rainfall estimation using radar reflectivity data from the hybrid lowest elevation angles. J. Korean Earth Sci. Soc., 36, 109–124, https://doi.org/10.5467/JKESS.2015.36.1.109.

  • Lyu, G., S.-H. Jung, Y. Oh, H.-M. Park, and G. Lee, 2017: Accuracy evaluation of composite Hybrid Surface Rainfall (HSR) using KMA weather radar network. J. Korean Earth Sci. Soc., 38, 496–510, https://doi.org/10.5467/JKESS.2017.38.7.496.

  • Oh, S.-G., C. Park, S.-W. Son, J. Ko, K. Shin, S. Kim, and J. Park, 2023: Evaluation of deep-learning-based very short-term rainfall forecasts in South Korea. Asia-Pac. J. Atmos. Sci., 59, 239–255, https://doi.org/10.1007/s13143-022-00310-4.

  • Park, C., and Coauthors, 2021: Record-breaking summer rainfall in South Korea in 2020: Synoptic characteristics and the role of large-scale circulations. Mon. Wea. Rev., 149, 3085–3100, https://doi.org/10.1175/MWR-D-21-0051.1.

  • Pulkkinen, S., D. Nerini, A. A. Pérez Hortal, C. Velasco-Forero, A. Seed, U. Germann, and L. Foresti, 2019: Pysteps: An open-source Python library for probabilistic precipitation nowcasting (v1.0). Geosci. Model Dev., 12, 4185–4219, https://doi.org/10.5194/gmd-12-4185-2019.

  • Ravuri, S., and Coauthors, 2021: Skilful precipitation nowcasting using deep generative models of radar. Nature, 597, 672–677, https://doi.org/10.1038/s41586-021-03854-z.

  • Ronneberger, O., P. Fischer, and T. Brox, 2015: U-Net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, N. Navab et al., Eds., Lecture Notes in Computer Science, Vol. 9351, Springer, 234–241, https://doi.org/10.1007/978-3-319-24574-4_28.

  • Seed, A. W., C. E. Pierce, and K. Norman, 2013: Formulation and evaluation of a scale decomposition-based stochastic precipitation nowcast scheme. Water Resour. Res., 49, 6624–6641, https://doi.org/10.1002/wrcr.20536.

  • Senst, T., V. Eiselein, and T. Sikora, 2012: Robust local optical flow for feature tracking. IEEE Trans. Circ. Syst. Video Tech., 22, 1377–1387, https://doi.org/10.1109/TCSVT.2012.2202070.

  • Seo, M., Y. Choi, H. Ryu, H. Park, H. Bae, H. Lee, and W. Seo, 2022: Intermediate and future frame prediction of geostationary satellite imagery with warp and refine network. AAAI 2022 Fall Symp.: The Role of AI in Responding to Climate Challenges, Arlington, VA, 5 pp., https://www.climatechange.ai/papers/aaaifss2022/25.

  • Thorndahl, S., T. Einfalt, P. Willems, J. E. Nielsen, M.-C. ten Veldhuis, K. Arnbjerg-Nielsen, M. R. Rasmussen, and P. Molnar, 2017: Weather radar rainfall data in urban hydrology. Hydrol. Earth Syst. Sci., 21, 1359–1380, https://doi.org/10.5194/hess-21-1359-2017.

  • Turner, B. J., I. Zawadzki, and U. Germann, 2004: Predictability of precipitation from continental radar images. Part III: Operational nowcasting implementation (MAPLE). J. Appl. Meteor., 43, 231–248, https://doi.org/10.1175/1520-0450(2004)043<0231:POPFCR>2.0.CO;2.

  • Vandal, T. J., and R. R. Nemani, 2021: Temporal interpolation of geostationary satellite imagery with optical flow. IEEE Trans. Neural Netw. Learn. Syst., 34, 3245–3254, https://doi.org/10.1109/TNNLS.2021.3101742.

  • Wedel, A., T. Pock, C. Zach, H. Bischof, and D. Cremers, 2009: An improved algorithm for TV-L1 optical flow. Statistical and Geometrical Approaches to Visual Motion Analysis, D. Cremers et al., Eds., Lecture Notes in Computer Science, Vol. 5604, Springer, 23–45, https://doi.org/10.1007/978-3-642-03061-1_2.

  • Weinzaepfel, P., J. Revaud, Z. Harchaoui, and C. Schmid, 2013: DeepFlow: Large displacement optical flow with deep matching. 2013 IEEE Int. Conf. on Computer Vision, Sydney, New South Wales, Australia, Institute of Electrical and Electronics Engineers, 1385–1392, https://doi.org/10.1109/ICCV.2013.175.

  • Wulff, J., and M. J. Black, 2015: Efficient sparse-to-dense optical flow estimation using a learned basis and layers. 2015 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Boston, MA, Institute of Electrical and Electronics Engineers, 120–130, https://doi.org/10.1109/CVPR.2015.7298607.

  • Zhang, Y., M. Long, K. Chen, L. Xing, R. Jin, M. I. Jordan, and J. Wang, 2023: Skilful nowcasting of extreme precipitation with NowcastNet. Nature, 619, 526–532, https://doi.org/10.1038/s41586-023-06184-4.
Save
  • Agrawal, S., L. Barrington, C. Bromberg, J. Burge, C. Gazen, and J. Hickey, 2019: Machine learning for precipitation nowcasting from radar images. arXiv, 1912.12132v1, https://arxiv.org/abs/1912.12132.

  • Ayzel, G., M. Heistermann, and T. Winterrath, 2019: Optical flow models as an open benchmark for radar-based precipitation nowcasting (rainymotion v0.1). Geosci. Model Dev., 12, 1387–1402, https://doi.org/10.5194/gmd-12-1387-2019.

  • Ayzel, G., T. Scheffer, and M. Heistermann, 2020: RainNet v1.0: A convolutional neural network for radar-based precipitation nowcasting. Geosci. Model Dev., 13, 2631–2644, https://doi.org/10.5194/gmd-13-2631-2020.

  • Bechini, R., and V. Chandrasekar, 2017: An enhanced optical flow technique for radar nowcasting of precipitation and winds. J. Atmos. Oceanic Technol., 34, 2637–2658, https://doi.org/10.1175/JTECH-D-17-0110.1.

  • Bouguet, J.-Y., 2000: Pyramidal implementation of the affine Lucas Kanade feature tracker: Description of the algorithm. Intel Corporation Microprocessor Research Lab Tech. Rep., 10 pp., http://robots.stanford.edu/cs223b04/algo_affine_tracking.pdf.

  • Bowler, N. E. H., C. E. Pierce, and A. Seed, 2004: Development of a precipitation nowcasting algorithm based upon optical flow techniques. J. Hydrol., 288, 74–91, https://doi.org/10.1016/j.jhydrol.2003.11.011.

  • Bradski, G., 2000: The OpenCV library. Dr. Dobb’s J. Software Tools, 25, 120–123.

  • Bridson, R., 2008: Fluid Simulation for Computer Graphics. Taylor & Francis, 246 pp.

  • Brox, T., A. Bruhn, N. Papenberg, and J. Weickert, 2004: High accuracy optical flow estimation based on a theory for warping. Computer Vision—ECCV 2004, T. Pajdla and J. Matas, Eds., Lecture Notes in Computer Science, Vol. 3024, Springer, 25–36.

  • Chase, R. J., D. R. Harrison, G. M. Lackmann, and A. McGovern, 2023: A machine learning tutorial for operational meteorology. Part II: Neural networks and deep learning. Wea. Forecasting, 38, 1271–1293, https://doi.org/10.1175/WAF-D-22-0187.1.

  • Espeholt, L., and Coauthors, 2022: Deep learning for twelve hour precipitation forecasts. Nat. Commun., 13, 5145, https://doi.org/10.1038/s41467-022-32483-x.

  • Farnebäck, G., 2003: Two-frame motion estimation based on polynomial expansion. Image Analysis: SCIA 2003, J. Bigun and T. Gustavsson, Eds., Lecture Notes in Computer Science, Vol. 2749, Springer, 363–370, https://doi.org/10.1007/3-540-45103-X_50.

  • Germann, U., and I. Zawadzki, 2002: Scale-dependence of the predictability of precipitation from continental radar images. Part I: Description of the methodology. Mon. Wea. Rev., 130, 2859–2873, https://doi.org/10.1175/1520-0493(2002)130<2859:SDOTPO>2.0.CO;2.

  • Germann, U., and I. Zawadzki, 2004: Scale dependence of the predictability of precipitation from continental radar images. Part II: Probability forecasts. J. Appl. Meteor., 43, 74–89, https://doi.org/10.1175/1520-0450(2004)043<0074:SDOTPO>2.0.CO;2.

  • Germann, U., I. Zawadzki, and B. Turner, 2006: Predictability of precipitation from continental radar images. Part IV: Limits to prediction. J. Atmos. Sci., 63, 2092–2108, https://doi.org/10.1175/JAS3735.1.

  • Goodfellow, I. J., J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, 2014: Generative adversarial nets. Advances in Neural Information Processing Systems, Z. Ghahramani, Ed., Vol. 27, Curran Associates, Inc., 2672–2680.

  • Jiang, H., D. Sun, V. Jampani, M.-H. Yang, E. Learned-Miller, and J. Kautz, 2018: Super SloMo: High quality estimation of multiple intermediate frames for video interpolation. 2018 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Salt Lake City, UT, Institute of Electrical and Electronics Engineers, 9000–9008, https://doi.org/10.1109/CVPR.2018.00938.

  • Jo, E., C. Park, S.-W. Son, J.-W. Roh, G.-W. Lee, and Y.-H. Lee, 2020: Classification of localized heavy rainfall events in South Korea. Asia-Pac. J. Atmos. Sci., 56, 77–88, https://doi.org/10.1007/s13143-019-00128-7.

  • Kim, H.-L., M.-K. Suk, H.-S. Park, G.-W. Lee, and J.-S. Ko, 2016: Dual-polarization radar rainfall estimation in Korea according to raindrop shapes obtained by using a 2-D video disdrometer. Atmos. Meas. Tech., 9, 3863–3878, https://doi.org/10.5194/amt-9-3863-2016.

  • KMA, 2018: Meteorological disaster statistics. Accessed 1 September 2021, http://www.weather.go.kr/weather/lifenindustry/disaster_02.jsp.

  • KMA, 2021: Abnormal Climate Report 2020 (in Korean). KMA Tech. Rep., 212 pp., http://www.climate.go.kr/home/bbs/view.php?code=93&bname=abnormal&vcode=6494.

  • Ko, J., K. Lee, H. Hwang, S.-G. Oh, S.-W. Son, and K. Shin, 2022: Effective training strategies for deep-learning-based precipitation nowcasting and estimation. Comput. Geosci., 161, 105072, https://doi.org/10.1016/j.cageo.2022.105072.

  • Kwon, S., S.-H. Jung, and G. Lee, 2015: Inter-comparison of radar rainfall rate using constant altitude plan position indicator and hybrid surface rainfall maps. J. Hydrol., 531, 234–247, https://doi.org/10.1016/j.jhydrol.2015.08.063.

  • Lee, H. C., Y. H. Lee, J.-C. Ha, D.-E. Chang, A. Bellon, I. Zawadzki, and G. Lee, 2010: McGill Algorithm for Precipitation nowcasting by Lagrangian Extrapolation (MAPLE) applied to the South Korean radar network. Part II: Real-time verification for the summer season. Asia-Pac. J. Atmos. Sci., 46, 383–391, https://doi.org/10.1007/s13143-010-1009-9.

  • Lucas, B. D., and T. Kanade, 1981: An iterative image registration technique with an application to stereo vision. Proc. Seventh Int. Joint Conf. on Artificial Intelligence (IJCAI), Vancouver, British Columbia, Canada, 674–679, https://www.ri.cmu.edu/pub_files/pub3/lucas_bruce_d_1981_1/lucas_bruce_d_1981_1.pdf.

  • Lyu, G., S.-H. Jung, K.-Y. Nam, S. Kwon, C.-R. Lee, and G. Lee, 2015: Improvement of radar rainfall estimation using radar reflectivity data from the hybrid lowest elevation angles. J. Korean Earth Sci. Soc., 36, 109–124, https://doi.org/10.5467/JKESS.2015.36.1.109.

  • Lyu, G., S.-H. Jung, Y. Oh, H.-M. Park, and G. Lee, 2017: Accuracy evaluation of composite Hybrid Surface Rainfall (HSR) using KMA weather radar network. J. Korean Earth Sci. Soc., 38, 496–510, https://doi.org/10.5467/JKESS.2017.38.7.496.

  • Oh, S.-G., C. Park, S.-W. Son, J. Ko, K. Shin, S. Kim, and J. Park, 2023: Evaluation of deep-learning-based very short-term rainfall forecasts in South Korea. Asia-Pac. J. Atmos. Sci., 59, 239–255, https://doi.org/10.1007/s13143-022-00310-4.

  • Park, C., and Coauthors, 2021: Record-breaking summer rainfall in South Korea in 2020: Synoptic characteristics and the role of large-scale circulations. Mon. Wea. Rev., 149, 3085–3100, https://doi.org/10.1175/MWR-D-21-0051.1.

  • Pulkkinen, S., D. Nerini, A. A. Pérez Hortal, C. Velasco-Forero, A. Seed, U. Germann, and L. Foresti, 2019: Pysteps: An open-source Python library for probabilistic precipitation nowcasting (v1.0). Geosci. Model Dev., 12, 4185–4219, https://doi.org/10.5194/gmd-12-4185-2019.

  • Ravuri, S., and Coauthors, 2021: Skilful precipitation nowcasting using deep generative models of radar. Nature, 597, 672–677, https://doi.org/10.1038/s41586-021-03854-z.

  • Ronneberger, O., P. Fischer, and T. Brox, 2015: U-Net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, N. Navab et al., Eds., Lecture Notes in Computer Science, Vol. 9351, Springer, 234–241, https://doi.org/10.1007/978-3-319-24574-4_28.

  • Seed, A. W., C. E. Pierce, and K. Norman, 2013: Formulation and evaluation of a scale decomposition-based stochastic precipitation nowcast scheme. Water Resour. Res., 49, 6624–6641, https://doi.org/10.1002/wrcr.20536.

  • Senst, T., V. Eiselein, and T. Sikora, 2012: Robust local optical flow for feature tracking. IEEE Trans. Circ. Syst. Video Tech., 22, 1377–1387, https://doi.org/10.1109/TCSVT.2012.2202070.

  • Seo, M., Y. Choi, H. Ryu, H. Park, H. Bae, H. Lee, and W. Seo, 2022: Intermediate and future frame prediction of geostationary satellite imagery with warp and refine network. AAAI 2022 Fall Symp.: The Role of AI in Responding to Climate Challenges, Arlington, VA, 5 pp., https://www.climatechange.ai/papers/aaaifss2022/25.

  • Thorndahl, S., T. Einfalt, P. Willems, J. E. Nielsen, M.-C. ten Veldhuis, K. Arnbjerg-Nielsen, M. R. Rasmussen, and P. Molnar, 2017: Weather radar rainfall data in urban hydrology. Hydrol. Earth Syst. Sci., 21, 1359–1380, https://doi.org/10.5194/hess-21-1359-2017.

  • Turner, B. J., I. Zawadzki, and U. Germann, 2004: Predictability of precipitation from continental radar images. Part III: Operational nowcasting implementation (MAPLE). J. Appl. Meteor., 43, 231–248, https://doi.org/10.1175/1520-0450(2004)043<0231:POPFCR>2.0.CO;2.

  • Vandal, T. J., and R. R. Nemani, 2021: Temporal interpolation of geostationary satellite imagery with optical flow. IEEE Trans. Neural Netw. Learn. Syst., 34, 3245–3254, https://doi.org/10.1109/TNNLS.2021.3101742.

  • Wedel, A., T. Pock, C. Zach, H. Bischof, and D. Cremers, 2009: An improved algorithm for TV-L1 optical flow. Statistical and Geometrical Approaches to Visual Motion Analysis, D. Cremers et al., Eds., Lecture Notes in Computer Science, Vol. 5604, Springer, 23–45, https://doi.org/10.1007/978-3-642-03061-1_2.

  • Weinzaepfel, P., J. Revaud, Z. Harchaoui, and C. Schmid, 2013: DeepFlow: Large displacement optical flow with deep matching. 2013 IEEE Int. Conf. on Computer Vision, Sydney, New South Wales, Australia, Institute of Electrical and Electronics Engineers, 1385–1392, https://doi.org/10.1109/ICCV.2013.175.

  • Wulff, J., and M. J. Black, 2015: Efficient sparse-to-dense optical flow estimation using a learned basis and layers. 2015 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Boston, MA, Institute of Electrical and Electronics Engineers, 120–130, https://doi.org/10.1109/CVPR.2015.7298607.

  • Zhang, Y., M. Long, K. Chen, L. Xing, R. Jin, M. I. Jordan, and J. Wang, 2023: Skilful nowcasting of extreme precipitation with NowcastNet. Nature, 619, 526–532, https://doi.org/10.1038/s41586-023-06184-4.
  • Fig. 1.

    Precipitation maps (mm h−1) for the four precipitation types. Data are from 2000 UTC 28 Jul 2020 (Central case), 1800 UTC 31 Jul 2020 (Isolated case), 1500 UTC 7 Aug 2020 (Southern case), and 0500 UTC 27 Jul 2020 (Jeju case). Pixels with heavy rainfall of 30 mm h−1 or more are shown in red; the maximum rainfall intensity is 70 mm h−1 for the Central case, 103 mm h−1 for the Isolated case, 64 mm h−1 for the Southern case, and 86 mm h−1 for the Jeju case.

  • Fig. 2.

    (top) Two precipitation frames separated by a 10-min interval; the case at 0500 UTC 27 Jul 2020 is chosen as representative. (middle),(bottom) The optical flow speed, $V_{\mathrm{opt}} = \sqrt{U^2 + V^2}$ (m s−1), estimated by six different algorithms (see the sketch below).
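
    For readers reproducing this figure, here is a minimal sketch of the flow-speed computation using OpenCV's Farnebäck method, one of the dense optical flow algorithms compared in the panels. The frame names, normalization step, and parameter values are illustrative assumptions, not the authors' exact settings.

    ```python
    import cv2
    import numpy as np

    def flow_speed(frame_t0: np.ndarray, frame_t1: np.ndarray) -> np.ndarray:
        """Optical flow speed |V| = sqrt(U^2 + V^2) per pixel for two radar frames."""
        # Farneback expects single-channel 8-bit images, so rescale the rain rates.
        a = cv2.normalize(frame_t0, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        b = cv2.normalize(frame_t1, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        flow = cv2.calcOpticalFlowFarneback(
            a, b, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        u, v = flow[..., 0], flow[..., 1]  # displacement in pixels per 10-min step
        # Convert to m/s with the radar grid spacing and the 10-min frame interval.
        return np.hypot(u, v)
    ```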

  • Fig. 3.

    (a) A polar coordinate system for the optical flow field: flow angles θopt = 0° and 180° correspond to the east and west directions, and θopt = −90° and 90° to the south and north directions, respectively. (b) The probability distribution function (PDF) of flow speed. (c) The PDF of flow angle. (d) The power spectral density (PSD) P(k) of flow speed. A sketch of these diagnostics follows below.
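
    As a hedged illustration, the diagnostics in this figure can be computed as follows, assuming `u` and `v` are the eastward and northward flow components on a regular grid; the binning choices are assumptions.

    ```python
    import numpy as np

    def flow_polar(u, v):
        speed = np.hypot(u, v)
        angle = np.degrees(np.arctan2(v, u))  # 0° = east, 90° = north, -90° = south
        return speed, angle

    def speed_pdf(speed, bins=50):
        # Normalized histogram as an estimate of the PDF of flow speed.
        pdf, edges = np.histogram(speed.ravel(), bins=bins, density=True)
        return pdf, 0.5 * (edges[:-1] + edges[1:])

    def speed_psd(speed):
        # Isotropic PSD P(k): 2D FFT power averaged in radial wavenumber bins.
        power = np.abs(np.fft.fftshift(np.fft.fft2(speed))) ** 2
        ny, nx = speed.shape
        ky, kx = np.indices((ny, nx))
        k = np.hypot(kx - nx // 2, ky - ny // 2).astype(int)
        return np.bincount(k.ravel(), weights=power.ravel()) / np.bincount(k.ravel())
    ```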

  • Fig. 4.

    An overview of the precipitation nowcasting model, which comprises a regression model and a U-Net-based network. The regression model minimizes the error between the linearly extrapolated future frame and the ground truth (a gradient-descent sketch follows below), while the U-Net-based network trains on time-sequence images and extracts multitemporal features. The structure of the adopted U-Net is depicted in Fig. 6.
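
    A minimal sketch of the regression step, assuming `extrapolations` stacks one linearly extrapolated frame per optical flow algorithm and `truth` is the observed frame; the weights ωj (cf. Fig. 5) are fit by plain gradient descent on the squared error. The learning rate, step count, and initialization are assumptions.

    ```python
    import numpy as np

    def fit_weights(extrapolations: np.ndarray,  # shape (K, H, W), one frame per algorithm
                    truth: np.ndarray,           # shape (H, W), ground-truth frame
                    lr: float = 1e-3, steps: int = 500) -> np.ndarray:
        K = extrapolations.shape[0]
        omega = np.full(K, 1.0 / K)               # start from equal weights
        X = extrapolations.reshape(K, -1)
        y = truth.ravel()
        for _ in range(steps):
            residual = omega @ X - y               # blended frame minus truth
            grad = 2.0 * (X @ residual) / y.size   # gradient of the mean squared error
            omega -= lr * grad
        return omega

    # The blended forecast is then np.tensordot(omega, extrapolations, axes=1).
    ```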

  • Fig. 5.

    The time evolution of the coefficients ωj for the four different precipitation types shown in Fig. 1.

  • Fig. 6.

    Configuration of the U-Net model used in this study. The model contains four downsampling and four upsampling stages. The input is 960 × 960 pixels with three channels (the time steps t − 10, t, and t + 10), and the output is 960 × 960 pixels with one channel (the time step t + 10). A minimal sketch of such a network follows below.
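
    The following Keras sketch matches the shapes in this figure: a 960 × 960 × 3 input, four downsampling and four upsampling stages with skip connections, and a single-channel output. The filter counts and activations are illustrative assumptions, not the authors' exact configuration.

    ```python
    from tensorflow.keras import layers, Model

    def conv_block(x, filters):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    def build_unet(size=960, channels=3, base=32):
        inputs = layers.Input((size, size, channels))
        skips, x = [], inputs
        for i in range(4):                        # four downsamplings
            x = conv_block(x, base * 2 ** i)
            skips.append(x)
            x = layers.MaxPooling2D(2)(x)
        x = conv_block(x, base * 16)              # bottleneck at 60 x 60
        for i in reversed(range(4)):              # four upsamplings with skips
            x = layers.Conv2DTranspose(base * 2 ** i, 2, strides=2, padding="same")(x)
            x = layers.Concatenate()([x, skips[i]])
            x = conv_block(x, base * 2 ** i)
        outputs = layers.Conv2D(1, 1, activation="relu")(x)  # rain rate at t + 10
        return Model(inputs, outputs)
    ```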

  • Fig. 7.

    (left) The root-mean-square error (RMSE) and (right) the critical success index (CSI) at 0–3-h lead times for the whole test dataset. Black and red lines indicate the models with and without the U-Net, respectively. A sketch of both scores follows below.
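
    For reference, a minimal sketch of the two verification scores, assuming `fcst` and `obs` are rain-rate fields (mm h−1); the rain/no-rain threshold used here is an assumption.

    ```python
    import numpy as np

    def rmse(fcst, obs):
        return np.sqrt(np.mean((fcst - obs) ** 2))

    def csi(fcst, obs, thresh=1.0):
        hits = np.sum((fcst >= thresh) & (obs >= thresh))
        misses = np.sum((fcst < thresh) & (obs >= thresh))
        false_alarms = np.sum((fcst >= thresh) & (obs < thresh))
        return hits / (hits + misses + false_alarms)
    ```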

  • Fig. 8.

    The root-mean-square error (RMSE) at 0–3-h lead times for the four precipitation types. Ten samples were used for each type.

  • Fig. 9.

    Examples of the output of the nowcasting model for the four different precipitation types (mm h−1) shown in Fig. 1 at a 1.5-h lead time.

  • Fig. 10.

    The CSI, POD (probability of detection), and FAR (false alarm ratio) values for moderate rainfall events (MREs; ≥1 mm h−1) and strong rainfall events (SREs; ≥10 mm h−1) at 0–3-h lead times, with (left) MREs and (right) SREs. A sketch of the POD and FAR computation follows below.
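
    A companion sketch for this figure: POD and FAR from the same contingency counts used for the CSI, evaluated at the MRE (≥1 mm h−1) and SRE (≥10 mm h−1) thresholds given in the caption. Variable names are assumptions.

    ```python
    import numpy as np

    def pod_far(fcst, obs, thresh):
        hits = np.sum((fcst >= thresh) & (obs >= thresh))
        misses = np.sum((fcst < thresh) & (obs >= thresh))
        false_alarms = np.sum((fcst >= thresh) & (obs < thresh))
        pod = hits / (hits + misses)                 # probability of detection
        far = false_alarms / (hits + false_alarms)   # false alarm ratio
        return pod, far

    # Example usage over both event classes:
    # for thresh in (1.0, 10.0):   # MRE and SRE thresholds
    #     print(thresh, pod_far(fcst, obs, thresh))
    ```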
