Improving Seasonal Prediction of Summer Precipitation in the Middle–Lower Reaches of the Yangtze River Using a TU-Net Deep Learning Approach

Shuxian Yang, Fenghua Ling, Yue Li, and Jing-Jia Luo

Institute for Climate and Application Research, Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters, Key Laboratory of Meteorological Disaster of Ministry of Education, International Research Laboratory of Climate and Environment Change, Nanjing University of Information Science and Technology, Nanjing, China
Open access


Abstract

The two-step U-Net model (TU-Net) contains a western North Pacific subtropical high (WNPSH) prediction model and a precipitation prediction model fed by the WNPSH predictions, oceanic heat content, and surface temperature. The data-driven forecast model provides improved 4-month lead predictions of the WNPSH and precipitation in the middle and lower reaches of the Yangtze River (MLYR), which has important implications for water resources management and precipitation-related disaster prevention in China. When compared with five state-of-the-art dynamical climate models including the Climate Forecast System of Nanjing University of Information Science and Technology (NUIST-CFS1.0) and four models participating in the North American Multi-Model Ensemble (NMME) project, the TU-Net produces comparable skills in forecasting 4-month lead geopotential height and winds at the 500- and 850-hPa levels. For the 4-month lead prediction of precipitation over the MLYR region, the TU-Net has the best correlation scores and mean latitude-weighted RMSE in each summer month and in boreal summer [June–August (JJA)], and pattern correlation coefficient scores are slightly lower than the dynamical models only in June and JJA. In addition, the results show that the constructed TU-Net is also superior to most of the dynamical models in predicting 2-m air temperature in the MLYR region at a 4-month lead. Thus, the deep learning-based TU-Net model can provide a rapid and inexpensive way to improve the seasonal prediction of summer precipitation and 2-m air temperature over the MLYR region.

Significance Statement

The purpose of this study is to examine the seasonal predictive skill of the western North Pacific subtropical high anomalies and summer rainfall anomalies over the middle and lower reaches of the Yangtze River region by means of deep learning methods. Our deep learning model provides a rapid and inexpensive way to improve the seasonal prediction of summer precipitation as well as 2-m air temperature. The work has important implications for water resources management and precipitation-related disaster prevention in China and can be extended in the future to predict other climate variables as well.

For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Jing-Jia Luo, jjluo@nuist.edu.cn

1. Introduction

Prediction of seasonal precipitation is recognized as one of the major concerns in meteorological fields, as accurate prediction has significant impacts on water resources management, precipitation-related disaster prevention, and agricultural planning (e.g., Venkata Ramana et al. 2013; Zhu et al. 2015; Tao et al. 2021a,b). However, it remains a great challenge to produce accurate predictions of seasonal precipitation because of its complex mechanisms and strongly nonlinear response to weather and climate variability on various scales. In addition, previous studies have shown that the reliability of rainfall forecasts can also be vulnerable to climate change (e.g., Loo et al. 2015; Sheshadri et al. 2021). The middle and lower reaches of the Yangtze River (MLYR) constitute one of the most important agricultural, economic, and industrial regions in China, with a dense population, where severe flooding or drought events caused by abnormal precipitation variations often bring about great damage (e.g., Zhou et al. 2021; Ying et al. 2022). For example, the Yangtze floods in the early summer of 2020 affected more than 35 million people and left at least 278 dead or missing, with a direct economic loss of $32 billion (U.S. dollars; e.g., Kramer and Ware 2020). In the boreal summer of 2022, the MLYR region suffered from long-lasting severe dry and hot conditions, which also caused enormous damage to agriculture and threatened human health (e.g., Liu 2022; Zhao et al. 2022). Therefore, it is crucial to develop forecast methods that improve the seasonal prediction of summer precipitation over the MLYR region.

Over the past decades, two methods have been widely used for seasonal rainfall prediction: statistical or empirical methods and dynamical climate forecast systems. Statistical or empirical methods seek lagged relationships between rainfall and various climate drivers or precursors, such as sea surface temperature (SST), snow cover, monsoonal winds, blocking highs, and subtropical highs, to predict precipitation (e.g., Drosdowsky and Chambers 2001; Schepen et al. 2012; Bett et al. 2021). Although statistical methods are easy to operate, they struggle to handle unprecedented events and nonlinear relationships in a complicated climate system. By contrast, a climate forecast system is built on a suite of physical laws, with initial and boundary conditions added to the governing partial differential equations, so it is able to handle nonlinear climate signals associated with a variety of teleconnections over the globe (see, e.g., Luo et al. 2016 for a review). However, the uncertainties in dynamical climate forecast systems for seasonal predictions originate not only from the selection of atmospheric and oceanic initial conditions but also from the model physics. Moreover, the development of climate forecast systems is very difficult, time-consuming, expensive, and complicated (e.g., Luo et al. 2008, 2016). With the advent of the big data era, deep learning has dramatically influenced many fields by discovering intricate structures within massive datasets (e.g., LeCun et al. 2015). In particular, geoscience researchers have found its great potential in predicting weather and climate by revealing complex linear or nonlinear relationships between predictors and predictands hidden in large datasets (e.g., Shi et al. 2015, 2017; Ham et al. 2019; Mouatadid et al. 2021; Schultz et al. 2021; Kashinath et al. 2021; Singh et al. 2022; Ling et al. 2022).

Previous studies have demonstrated that deep learning methods show outstanding abilities to exploit spatiotemporal structures and to fit nonlinear functions at high computational speed. Given these merits, we built a deep learning model to predict summer precipitation over the MLYR region. To properly model complex spatiotemporal relations, including lagged long-distance relationships (teleconnections) between variables, it is necessary to select key predictors for forecasting precipitation. It is well known that the distribution of seasonal precipitation in eastern China is related to variations in the intensity, structure, and location of the western North Pacific subtropical high (WNPSH), which is a key system for the East Asian climate (e.g., Tao and Xu 1962; Huang 1963; He et al. 2015). The rain belts, droughts, and floods in South China, the Yangtze River basin, and North China are directly linked to the shifting and lingering of the WNPSH. Considering the great influence of the WNPSH on summer precipitation in China, data associated with the WNPSH are input into the deep learning model to predict the MLYR summer precipitation.

Recently, there has been increasing interest in foundation models, which are trained on broad data at scale so that they can be adapted to a wide range of downstream tasks and therefore achieve better results (e.g., Bommasani et al. 2021). Following the two-step idea of foundation models, we first construct a prediction model that represents the variation of the WNPSH. We then use the output of the WNPSH prediction model to predict the MLYR precipitation, since precipitation is highly related to the contemporaneous WNPSH circulation. It is also known that deviations from the long-term mean reflect the interannual variations of climate systems, such that precipitation anomalies are often used for monthly or seasonal precipitation prediction (e.g., Ying et al. 2022). Thus, the objective of this study is to examine the seasonal predictive skill of the WNPSH anomalies and summer rainfall anomalies over the MLYR region by means of deep learning methods.

The remainder of this paper is structured as follows: In section 2, the dataset to be analyzed and the architecture of our deep learning model used for seasonal forecasts are introduced, followed by the descriptions of the methods used in this study. In section 3, the results are elaborated to show the improvements of deep learning methods in forecasting summer WNPSH and precipitation in eastern China. Discussion and conclusions will be given in section 4.

2. Data and methods

a. Dataset

While high-quality climate observational datasets have been available in some regions for the past 100 years, global climate observational datasets have only been available for the past 40 years, after the establishment of global observation and monitoring networks (e.g., Karpatne and Kumar 2017). Because of the scarcity of climate observations, it is difficult to obtain sufficient training samples for climate problems, and a small sample size may cause serious overfitting during training. To greatly increase the size of the training data, we utilize the historical simulations of 14 selected climate models that participated in phase 6 of the Coupled Model Intercomparison Project (CMIP6) (Table 1) from 1871 to 2012 to feed our model, as Table 2 shows. Note that the CMIP6 historical simulations cannot reproduce the observed phase of interannual variations and hence are independent of the observations. To address the biases in the CMIP6 model data, we adopt a transfer learning strategy (e.g., Yosinski et al. 2014): the deep learning model is first pretrained on the CMIP6 data and then fine-tuned using reanalysis data. The reanalysis data are taken from the Simple Ocean Data Assimilation (SODA, an ocean reanalysis dataset of gridded variables over the global ocean) and the NOAA–CIRES–DOE Twentieth Century Reanalysis (20CR) during 1871–1973. Following Ham et al. (2019), we deliberately leave a 10-yr gap between the training and test periods to eliminate the possible effect of oceanic memory. The test dataset is derived from the NCEP–NCAR Reanalysis Project at the NOAA/ESRL Physical Sciences Laboratory and the regridded monthly precipitation in China at a resolution of 0.5° × 0.5° (Zhao et al. 2014); the test period is 1982–2020. To benchmark the prediction performance of the deep learning model, we use corresponding data from the Climate Forecast System of Nanjing University of Information Science and Technology (NUIST-CFS1.0; previously SINTEX-F) (Luo et al. 2008; He et al. 2020) and four dynamical climate models from the North American Multi-Model Ensemble (NMME) project (Table 1) that provide hindcast data during the same test period. In this study, the ensemble mean of the four NMME models is referred to as the NMME model. All datasets are interpolated to a resolution of 5° × 5° to extract large-scale features without exceeding graphical processing unit (GPU) memory, except for the precipitation data, which are kept at a resolution of 1° × 1.125° because the higher resolution yields more precise results. All datasets are preprocessed with mean normalization to the range from −1 to 1.
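For illustration, the mean normalization step can be sketched in a few lines of Python; the function name and the exact scaling convention below are our assumptions rather than the authors' published code.

```python
import numpy as np

def mean_normalize(field, axis=0):
    """Scale a gridded field to lie within [-1, 1].

    A minimal sketch of the mean-normalization step: remove the time mean
    and divide by the largest absolute deviation. `field` is assumed to be
    shaped (time, lat, lon); the authors' exact convention is not published.
    """
    anomaly = field - field.mean(axis=axis, keepdims=True)
    scale = np.abs(anomaly).max(axis=axis, keepdims=True)
    return anomaly / np.where(scale == 0.0, 1.0, scale)

# Example on a synthetic surface-temperature-like field (142 yr on a 5° x 5° grid).
st = np.random.randn(142, 36, 72).astype("float32")
st_norm = mean_normalize(st)
print(st_norm.min(), st_norm.max())  # both values fall within [-1, 1]
```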

Table 1. List of the 14 CMIP6 models used to train the TU-Net model and the 4 prediction models provided by the NMME project.

Table 2. Details of the datasets used in this study.

b. Two-step U-Net model

The deep learning model used in this study is derived from the U-Net model proposed by Ronneberger et al. (2015). It has a typical encoder–decoder structure with a U-shaped architecture. Its most distinctive trait is concatenation, known as the skip connection, which reduces the loss of border-pixel information in every convolution. Such a network can be trained end to end from very few images and can learn invariance to deformations without needing to see the transformations in the annotated ground truth. Moreover, precipitation is characterized by high nonlinearity and discontinuity. Hence, we take advantage of the U-Net model to reveal complex relationships, automatically extract useful and significant predictors from the added atmospheric general circulation fields, and even provide valuable insights into the statistical linkages inside the climate system (e.g., Jin et al. 2022).

Accordingly, our deep learning model, the two-step U-Net model (TU-Net), can be interpreted as a combination of two U-Net models including the WNPSH prediction model and the precipitation prediction model (Fig. 1a). However, different from the network architecture presented by Ronneberger et al. (2015), we use convolution operations (rather than max-pooling operations) for down-sampling in order to retain pixel information as much as possible and not increase parameters (e.g., Radford et al. 2015). Our approach is described as follows:

  1. Twelve oceanic and atmospheric predictors were input into the WNPSH prediction model to predict the summer geopotential height and winds in the middle and lower troposphere;

  2. the predicted variables generated in step 1, along with 4-month lead oceanic heat content and surface temperature, were fed into the precipitation prediction model to predict summer precipitation.
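A minimal sketch of this two-step inference flow is given below; the array shapes and the model interfaces (Keras-style predict calls) are assumptions made for illustration, not the authors' released code.

```python
import numpy as np

def two_step_forecast(wnpsh_model, precip_model, predictors, hc, st):
    """Sketch of the TU-Net inference flow (interfaces are assumed).

    predictors : array (samples, lat, lon, 12) of 4-month-lead inputs
    hc, st     : arrays (samples, lat, lon, 1) of lead oceanic heat content
                 and surface temperature, fed again to the second model
    """
    # Step 1: the WNPSH U-Net returns the six circulation fields
    # (Z500, U500, V500, Z850, U850, V850) over 80°-165°E, 7.5°-47.5°N.
    z500, u500, v500, z850, u850, v850 = wnpsh_model.predict(predictors)

    # Step 2: stack the predicted circulation along the channel axis and pass
    # it, together with the lead HC and ST, to the precipitation U-Net.
    circulation = np.concatenate([z500, u500, v500, z850, u850, v850], axis=-1)
    return precip_model.predict([circulation, hc, st])
```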

Fig. 1. The architecture of the TU-Net used for the precipitation forecast: (a) flowchart of the TU-Net and (b),(c) inner structures of the two U-Net models in the TU-Net. The x, y, z size is provided in parentheses at the top or bottom edge of the boxes. Z500, U500, V500, Z850, U850, and V850 denote geopotential height and zonal and meridional winds at the 500- and 850-hPa levels, respectively.

The inputs and outputs of the two U-Net models are listed in Table 3.

Table 3. Input and output information for each model.

In this study, the process of tuning model parameters involves adjustments to the learning rate, batch size, and number of epochs, while the remaining parameters use default values. The TU-Net is trained separately to predict the WNPSH circulation and precipitation in June, July, August, and boreal summer [June–August (JJA)], with an identical initial learning rate of 0.0001, batch size of 256, and 500 epochs. The values of the three parameters for each month are fine-tuned based on the combined effects of the changes in the loss function and the training duration.
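A hedged Keras sketch of this training configuration is given below, including the composite loss described in section 2b(1); the choice of the Adam optimizer and the omission of callbacks are our assumptions, as the text states only the learning rate, batch size, and number of epochs.

```python
import tensorflow as tf
from tensorflow import keras

def rmse_mae_loss(y_true, y_pred):
    """Average of RMSE and MAE, the loss used for both U-Net models."""
    rmse = tf.sqrt(tf.reduce_mean(tf.square(y_true - y_pred)))
    mae = tf.reduce_mean(tf.abs(y_true - y_pred))
    return 0.5 * (rmse + mae)

def train_monthly_model(model, x_train, y_train):
    """Compile and fit one monthly TU-Net component with the stated hyperparameters."""
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
                  loss=rmse_mae_loss)
    return model.fit(x_train, y_train, batch_size=256, epochs=500, verbose=2)
```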

1) The WNPSH prediction model

The formation and variations of the WNPSH can be explained by many factors, including land–sea thermal contrast (e.g., Wu and Liu 2003; Miyasaka and Nakamura 2005), monsoon diabatic heating (e.g., Ting 1994; Chen et al. 2001; Hoskins 1996; Rodwell and Hoskins 2001), the effects of the Tibetan Plateau (e.g., Ye and Wu 1998), air–sea interactions (e.g., Seager et al. 2003), and SST anomalies (e.g., Lu and Dong 2001; Zhou and Wang 2006; Wang et al. 2013), of which the latter two are regarded as main factors responsible for the interannual variations of the WNPSH (e.g., Yang et al. 2007; Yun et al. 2008). In addition, outgoing longwave radiation (OLR) is a critical component of the Earth’s radiation budget and is often used as a proxy for convection in tropical and subtropical regions. Thus, we select OLR, oceanic heat content (HC, vertically averaged oceanic temperature in the upper 300 m), and surface temperature (ST, characterizing the thermodynamic differences between land and sea) as predictors. To capture the lead–lag relation between upper- to lower-level circulations, geopotential height and zonal and meridional components of wind at the 200-, 500-, and 850-hPa levels are also used to forecast the WNPSH.

As Fig. 1b shows, we construct a U-Net model fed by 12 aforementioned predictors at a 4-month lead covering 0°–360°E, 87.5°S–87.5°N. Because the WNPSH is stronger and more stable in the lower- and midtroposphere, its variations are usually depicted by 500- or 850-hPa geopotential height anomalies (e.g., Tao and Zhu 1964). The East Asian summer monsoon (EASM) and the WNPSH ridge indices that play a significant role in influencing precipitation in eastern China are often calculated by zonal winds at the 500- and 850-hPa levels, respectively (e.g., Wang and Fan 1999; Wang et al. 2008; Liu et al. 2012). To construct the middle- and low-level circulation structure of the WNPSH, meridional winds at the 500- and 850-hPa levels are also used as predictands. The WNPSH prediction model has six output layers to output six circulation variables over 80°–165°E, 7.5°–47.5°N at the same time (see Fig. 1b), which shares extracted features and parameters and discovers task relatedness without the need for supervisory signals (e.g., Caruana 1997).

The model has 1 input layer (i.e., the predictors), 10 convolutional layers (including "up-convolution" layers; Ronneberger et al. 2015), and 6 output layers (i.e., the predictands). These layers are connected by a contracting path and an expansive path. The contracting path repeats the application of 3 × 3 convolutions (padded convolutions) with a stride of 2, each followed by a rectified linear unit (ReLU), for down-sampling. At each down-sampling step, we double the number of feature channels. Every step in the expansive path consists of an up-sampling of the feature map, followed by a 3 × 3 convolution (up-convolution) that halves the number of feature channels and a concatenation with the correspondingly cropped feature map from the contracting path, each followed by a ReLU. To extract both large- and small-scale features, the output of the third convolutional layer is fed into two parallel convolutional layers with 256 filters and kernel shapes of 5 × 5 and 3 × 3, respectively. The two feature maps from these layers keep the same x–y size and are concatenated along the channel dimension; that is, the numbers of channels are added.
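The following Keras sketch illustrates the building blocks described above (strided 3 × 3 convolutions for down-sampling, up-sampling with skip concatenation, the parallel 5 × 5/3 × 3 branch, and six output heads). The number of down/up steps, the filter counts other than 256, and the omission of output cropping are simplifications and assumptions, not the exact published architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

def down_block(x, filters):
    """Strided 3x3 convolution + ReLU, used instead of max pooling."""
    return layers.Conv2D(filters, 3, strides=2, padding="same", activation="relu")(x)

def up_block(x, skip, filters):
    """Upsample, reduce the channels with a 3x3 convolution, and concatenate the skip."""
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Concatenate()([x, skip])

def multiscale_branch(x):
    """Parallel 5x5 and 3x3 convolutions (256 filters each), concatenated on channels."""
    a = layers.Conv2D(256, 5, padding="same", activation="relu")(x)
    b = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
    return layers.Concatenate()([a, b])

# Toy example: a 36 x 72 x 12 predictor stack, two down steps, the multi-scale
# branch, two up steps, and six single-channel output heads.
inputs = keras.Input(shape=(36, 72, 12))
d1 = down_block(inputs, 64)
d2 = down_block(d1, 128)
m = multiscale_branch(d2)
u1 = up_block(m, d1, 128)
u2 = up_block(u1, inputs, 64)
outputs = [layers.Conv2D(1, 3, padding="same", name=v)(u2)
           for v in ("Z500", "U500", "V500", "Z850", "U850", "V850")]
model = keras.Model(inputs, outputs)
model.summary()
```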

Root-mean-square error (RMSE) and mean absolute error (MAE) are both widely used metrics for evaluating regression results. Because we want to build models that do not generate large errors, and MAE treats absolute errors linearly, we need a metric that penalizes larger errors more harshly than smaller ones; RMSE is superior in this respect. For simplicity, we did not tune the relative weights of the two metrics but directly calculated the loss function as the average of RMSE and MAE.

2) The precipitation prediction model

The six predictands produced by the WNPSH prediction model from the 4-month lead predictors are provided to the second U-Net model (Fig. 1c) to yield contemporaneous precipitation over the MLYR region (108°–125°E, 25.31°–34.31°N). In addition, we input the 4-month lead ST and HC predictors into the second U-Net model again, which produces better precipitation predictions than the model without this lower-boundary forcing (Fig. 2). This is because the MLYR summer precipitation is well related to the 4-month lead ST and HC fields (figure not shown). To make their shapes consistent with those of the WNPSH variables, the added global ST and HC fields first go through two convolutional layers with 64 and 128 filters, respectively, and a stride of 2 to reach an x–y size of (18, 9), and are then concatenated with the six predictands produced by the WNPSH prediction model. Subsequently, after concatenation with the corresponding feature maps of the same x–y size, the feature maps are fed into 12 convolutional (and up-convolution) layers and an output layer with one filter. A tanh activation follows each convolution (padded convolution) with a stride of 2, and the input data are thus restored to their initial size. Similarly, the output of the third convolutional layer is fed into two convolutional layers with 256 filters and kernel shapes of 5 × 5 and 3 × 3, respectively, after which we concatenate them and pass them to the next convolutional layer. The loss function of this model is also calculated as the average of RMSE and MAE.
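As a concrete illustration of the shape matching described above, the sketch below reduces the global ST and HC fields to the (18, 9) grid of the WNPSH predictands with two stride-2 convolutions (64 and 128 filters) and concatenates them on the channel axis; the kernel sizes and activations are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Inputs: the six predicted WNPSH fields on the 18 x 9 (lon x lat) domain and
# the global ST/HC fields on the 72 x 36 (lon x lat) 5-degree grid.
circulation = keras.Input(shape=(9, 18, 6), name="wnpsh_predictions")
st_hc = keras.Input(shape=(36, 72, 2), name="global_st_hc")

x = layers.Conv2D(64, 3, strides=2, padding="same", activation="tanh")(st_hc)  # -> (18, 36, 64)
x = layers.Conv2D(128, 3, strides=2, padding="same", activation="tanh")(x)     # -> (9, 18, 128)
merged = layers.Concatenate()([circulation, x])                                # -> (9, 18, 134)

# 'merged' would then pass through the remaining convolutional and
# up-convolution layers of the precipitation U-Net.
stem = keras.Model([circulation, st_hc], merged)
stem.summary()
```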

Fig. 2. Comparison of 4-month lead precipitation (mm·day−1) prediction skills over the MLYR region (108°–125°E, 25.31°–34.31°N) among TU-Net (red), two-step U-Net without surface temperature ST and oceanic heat content HC (orange), and one-step (yellow) U-Net models in the JJA season during the test period of 1982–2020. (a) Mean latitude-weighted root-mean-square error between the predictions of the three models and observational data. (b) As in (a), but for averaged pattern correlation coefficient skills during 1982–2020. (c) As in (a), but for the correlation coefficient between the time series of predicted and observed precipitation index (i.e., Cor skills).

3) The reasons for using the TU-Net model

One may ask whether a two-step prediction model outperforms a one-step model in predicting summer precipitation. To address this question, another two deep learning models were constructed for comparison. One is a two-step U-Net model without ST and HC in the second step, and the other is a one-step U-Net model fed by the 12 predictors to predict precipitation anomalies directly. Figure 2 shows comparisons of the 4-month lead prediction skills (i.e., the predictors in February are used to predict the predictands in June and JJA, March predicts July, and April predicts August) among the TU-Net and the other two deep learning models. We define the precipitation (temperature) index as the area-averaged precipitation (temperature) anomalies over the MLYR region (108°–125°E, 25.31°–34.31°N). Three metrics are used to assess the prediction skills of summer precipitation anomalies over the MLYR region. Because the area of a grid cell changes with latitude, the grid points need to be weighted when calculating the area-averaged RMSE. Thus, we use the mean latitude-weighted RMSE (RMSEw), following Rasp et al. (2020), with a smaller value indicating a more accurate forecast. RMSEw is the RMSE weighted by a latitude-dependent factor L(lat):
$$\mathrm{RMSE}_{w}=\frac{1}{N_{\mathrm{time}}}\sum_{t}^{N_{\mathrm{time}}}\sqrt{\frac{1}{N_{\mathrm{lat}}N_{\mathrm{lon}}}\sum_{\mathrm{lat}}^{N_{\mathrm{lat}}}\sum_{\mathrm{lon}}^{N_{\mathrm{lon}}}L(\mathrm{lat})\left(p_{t,\mathrm{lat},\mathrm{lon}}-o_{t,\mathrm{lat},\mathrm{lon}}\right)^{2}},$$

where

$$L(\mathrm{lat})=\frac{\cos(\mathrm{lat})}{\frac{1}{N_{\mathrm{lat}}}\sum_{i=1}^{N_{\mathrm{lat}}}\cos(\mathrm{lat}_{i})};$$
p is the model forecast and o is the observation; Ntime, Nlat, and Nlon represent the number of years and the numbers of grid points in the meridional and zonal directions, respectively; t, lat, and lon represent the time, latitude, and longitude; and L(lat) is the latitude weighting factor at the latitude index lat. The pattern correlation coefficient (PCC) skill measures the similarity of the spatial patterns between the predicted and observed values, ranging from −1 to 1, with higher values indicating better skill. The correlation coefficient between the time series of the predicted and observed precipitation index (Cor) measures the similarity of the two time series. Although the three models display comparable RMSEw, it is evident that the one-step U-Net shows the worst PCC prediction skill and the lowest Cor skill among the three methods. In general, the TU-Net outperforms the two-step U-Net without HC and ST in almost all the forecasts except for a slightly lower PCC in JJA. The Student’s t test applied to the PCC and Cor scores of the three methods (Table 4) suggests that the TU-Net achieves statistically significant PCC skills for the JJA mean and Cor skills in June, July, and JJA. Given the generally better prediction skills of the TU-Net, we use it to predict summer precipitation anomalies over the MLYR region.
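A compact NumPy sketch of the three metrics defined above is given below; it follows the formulas as we read them, and the unweighted PCC implementation is a simplification.

```python
import numpy as np

def latitude_weights(lat_deg):
    """L(lat): cosine of latitude normalized by its meridional mean (Rasp et al. 2020)."""
    w = np.cos(np.deg2rad(lat_deg))
    return w / w.mean()

def rmse_w(pred, obs, lat_deg):
    """Mean latitude-weighted RMSE; pred and obs are shaped (time, lat, lon)."""
    w = latitude_weights(lat_deg)[None, :, None]
    return np.mean(np.sqrt(np.mean(w * (pred - obs) ** 2, axis=(1, 2))))

def pcc(pred_map, obs_map):
    """Pattern correlation between two 2-D anomaly fields (unweighted version)."""
    return np.corrcoef(pred_map.ravel(), obs_map.ravel())[0, 1]

def cor(pred_index, obs_index):
    """Temporal correlation between predicted and observed area-mean indices."""
    return np.corrcoef(pred_index, obs_index)[0, 1]
```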
Table 4. The P values of the skills of TU-Net, two-step U-Net without ST and HC, and one-step U-Net in predicting the MLYR summer precipitation based on Student’s t test (see Figs. 2b,c); P values lower than 0.05 are indicated in boldface type.

Furthermore, it is instructive to test the performance of the TU-Net when the precipitation prediction model is fed with contemporaneous WNPSH observations rather than the predicted values from the WNPSH prediction model; this test is referred to as the ablation study. The result is displayed in Fig. 3, which shows the spatial distributions of the temporal correlation coefficient between the forecast and observed precipitation anomalies and of the RMSE divided by the standard deviation of the observations (RMSEn, i.e., normalized RMSE) in June, July, August, and JJA. As shown in Figs. 3a–d, the anomaly correlation coefficient (ACC) skills exceed 0.4 in most regions and even reach 0.76 at some grid points in July. The RMSEn values in Figs. 3e–h are lower than one in most parts of the MLYR region except for a few spots in June and JJA. In general, a forecast error smaller than one standard deviation of the observations is usually considered skillful, and in this regard, Figs. 3e–h show that the predictions over most grid points of the MLYR region are skillful. This suggests that the TU-Net can realistically reproduce the observed relation between the atmospheric circulation and summer precipitation, especially in July when precipitation is significantly correlated with the WNPSH, and that a good prediction of the WNPSH will thus help improve the prediction of precipitation anomalies.
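For completeness, the normalized RMSE used in Fig. 3 can be written as a short function; this is our reading of the definition (grid-point RMSE over time divided by the interannual standard deviation of the observations).

```python
import numpy as np

def rmse_n(pred, obs):
    """Normalized RMSE at each grid point; values below 1 are usually considered skillful.

    pred and obs are shaped (time, lat, lon); the result is shaped (lat, lon).
    """
    rmse = np.sqrt(np.mean((pred - obs) ** 2, axis=0))
    return rmse / obs.std(axis=0)
```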

Fig. 3. (a)–(d) Correlation coefficients of precipitation anomalies (mm·day−1) between the predictions and the observation in the ablation study in June, July, August, and JJA. Stippling indicates that the correlation exceeds the 95% confidence level. (e)–(h) As in (a)–(d), but for RMSE values divided by the standard deviation of observations (RMSEn).

3. Results

a. Performance of the WNPSH prediction model

The RMSEw metric is used to evaluate the performance of the WNPSH prediction model and the five dynamical models. Figure 4 shows the RMSEw between the 4-month lead WNPSH predictions and the observations during 1982–2020. Because the four models in the NMME project lack prediction results for the 500-hPa meridional and zonal winds and the 850-hPa geopotential height, their prediction performance for these variables is not shown. The results show that the WNPSH prediction model produces forecast skills comparable to those of the five state-of-the-art dynamical prediction systems, which may facilitate accurate precipitation predictions in the next step. As can be seen from Fig. 4, the RMSEw scores of the TU-Net for the 500-hPa winds and 850-hPa meridional wind in JJA, the 850-hPa zonal winds in August, and the 850-hPa geopotential height in July are close to or even better than those of the dynamical climate models. Except for the 850-hPa zonal winds in JJA and the 850-hPa meridional winds, the prediction skills of the TU-Net are better than those of both CanCM4i and Global Environmental Multiscale (GEM)–NEMO. It appears relatively difficult for the U-Net model to produce substantially better forecast skills for the large-scale atmospheric circulation than the dynamical climate models, probably because enormous efforts have been made over the past decades to improve the dynamical models’ performance in reproducing/predicting large-scale features. However, the ACC prediction skill of the TU-Net for the JJA mean geopotential height at the 500-hPa level is higher than 0.5 in most regions and reaches 0.7 at some grid points (Fig. 5). Overall, the TU-Net shows better prediction skills of precipitation in June and July than in August and JJA. Although the prediction skills of the TU-Net may not be statistically significantly better than those of the dynamical climate models, the TU-Net can compete with the current dynamical models that largely rely on supercomputing power. Hence, deep learning models can indeed make up for the shortcomings of the current dynamical models to a certain extent, as discussed above.

Fig. 4. Comparison of the 4-month lead WNPSH prediction skills (i.e., February predicts June and JJA, March predicts July, and April predicts August) over 80°–165°E, 7.5°–47.5°N among the TU-Net and the five dynamical climate models in JJA season, showing RMSEw of the six predicted WNPSH variables, including geopotential height (gpm) and zonal and meridional winds (m·s−1) at the 500- and 850-hPa levels, as labeled.

Fig. 5. Correlation coefficients between the forecast and observed anomalies of WNPSH variables including (top) geopotential height (gpm) and (middle) zonal and (bottom) meridional winds (m·s−1) at the (a)–(c) 500- and (d)–(f) 850-hPa levels. Stippling indicates that the correlation exceeds the 95% confidence level.

b. Performance of the precipitation prediction model

To compare the performance of the TU-Net in predicting the MLYR summer precipitation anomalies with that of the dynamical climate models, four skill metrics are calculated, namely the ACC, RMSE, PCC, and the correlation between the time series of the precipitation index in the observations and predictions. To visualize the prediction skills spatially, the spatial distributions of the ACC and RMSEn, along with their differences between our deep learning model and the NUIST-CFS1.0 and NMME dynamical models, are displayed in Fig. 6, which shows the prediction skills of the JJA mean precipitation anomalies over the MLYR region during 1982–2020. It is evident that the TU-Net has positive ACC skills with lower-than-one RMSEn values over most parts of the MLYR region, although negative ACC skills and RMSEn values greater than one can be found in the coastal area of the Yangtze River delta (Figs. 6a,d). Similar skills are also produced by NUIST-CFS1.0 (Figs. 6b,e). By contrast, while the NMME model achieves better skills along the coast and in the south, it produces much worse skills in the central area (Figs. 6c,f). Figures 6g–j further display the differences between the TU-Net and the dynamical models (the TU-Net minus NUIST-CFS1.0 and minus the NMME model, respectively). Although there are some places where the TU-Net is less skillful than NUIST-CFS1.0, the TU-Net performs better than NUIST-CFS1.0 in the north and south of the MLYR region (Figs. 6g,i) and better than the NMME model in most parts of the region except the Yangtze River delta and the south of the MLYR region (Figs. 6h,j). Moreover, the TU-Net achieves even more prominent performance in June, July, and August, especially in August (see Fig. 7; results for June and July are not shown). These results suggest that, while the overall skill of summer precipitation prediction remains limited because of its great challenges, the TU-Net helps improve the predictions over large parts of the MLYR region relative to the dynamical models.

Fig. 6. The prediction skills of JJA mean precipitation at a 4-month lead (i.e., initiated from February) over the MLYR region based on the TU-Net, NUIST-CFS1.0, and NMME models: (a)–(c) Correlation coefficients between the forecast and observed anomalies of JJA mean precipitation (mm·day−1), and (d)–(f) as in (a)–(c), but for RMSEn values. Also shown are differences of the ACC skill between the TU-Net and dynamical models, giving the TU-Net minus (g) NUIST-CFS1.0 and minus (h) the NMME model, and (i),(j) as in (g) and (h), but for the difference of RMSEn values. Stippling indicates the correlation exceeds the 90% confidence level.

Fig. 7. As in Fig. 6, but for the August precipitation prediction skills at a 4-month lead (initiated from April).

In addition, we compare the PCC and Cor skills between the TU-Net and the dynamical models in JJA during 1982–2020 (Fig. 8). Because of space limits, the year-to-year PCC values for the four models of the NMME project are not displayed separately. The values in the upper-left corner of Fig. 8a denote the 39-yr average of the PCC skills. In Fig. 8a, the PCC skills of the TU-Net are greater than 0 in more than one-half of the years; in particular, the PCC skills tend to be relatively higher in years with strong external forcing. Despite a slightly lower PCC skill than CanCM4i, the TU-Net achieves a higher PCC skill than the other dynamical models (including the NMME model). The TU-Net also achieves relatively high PCC skills in each summer month, with the highest skills among all the models in July and August (see Fig. 9). Figure 8b displays how well the models predict the phase of the interannual variations of the precipitation index during 1982–2020, with the correlation coefficients between the observed and predicted precipitation index time series indicated in the upper-left corner. The result indicates that the TU-Net outperforms most of the dynamical models in terms of Cor skill and achieves a score of 0.39, the best among all the models. In addition, the precipitation index predicted by the TU-Net in 1989, 1997, 2007, and 2014 closely matches the observations, and the TU-Net predicts the amplitudes of the precipitation index well in these years. Especially in JJA 2020, with its extreme flooding, the precipitation anomaly predicted by the TU-Net is closer to the observations than those of the dynamical models, and the TU-Net predicts the area of strong precipitation in July more accurately than the dynamical models do (figure not shown).

Fig. 8. (a) Time series of PCC skills of forecast JJA mean precipitation anomalies at 4-month lead (mm·day−1) over the MLYR region during the period of 1982–2020 for the TU-Net, NUIST-CFS1.0, and the NMME model. The PCC skills averaged over the whole period for the TU-Net, NUIST-CFS1.0, the NMME model, and each model in the NMME project are indicated in the upper-left corner of (a). (b) As in (a), but for the time series of the MLYR precipitation index based on all of the models’ predictions and observations (mm·day−1). The Cor skills are also presented in the upper-left corner of (b).

Fig. 9. (a) RMSEw of the MLYR precipitation anomalies (mm·day−1) for the TU-Net and the dynamical models during the period 1982–2020 in JJA season. (b) As in (a), but for the 39-yr average of PCC skills. (c) As in (a), but for the Cor skills.

To further demonstrate the prediction skills of the TU-Net in JJA, Fig. 9 shows the 39-yr averaged RMSEw, PCC skills, and Cor skills in June, July, August, and JJA for all the models. The result shows that the TU-Net achieves the best performance for each month in terms of RMSEw and Cor skills, while PCC skills are slightly lower than some of the dynamical models in June and JJA. To explore whether the skills are statistically significant, we performed the Student’s t test on the PCC and Cor skills of every model in predicting the MLYR summer precipitation (Table 5). We can see that the PCC skills in JJA months and Cor skills in July, August, and JJA of the TU-Net are statistically significant. In comparison with the five dynamical models, the P values of the TU-Net skills are relatively small (and hence more significant). The PCC and Cor skills of TU-Net are statistically significantly improved relative to NMME, Canadian Seasonal to Interannual Prediction System, version 2 (CanSIPSv2), and GEM-NEMO. Relative to CanSIPS-IC3, the Cor skill of TU-Net also has a statistically significant increase. The TU-Net has the best prediction skills of August precipitation, with an RMSEw score of 2.04, a PCC score of 0.18, and a Cor score of 0.44, improved by 5.6%, 157%, and 159% over the best of the dynamical models, respectively (Fig. 10).

Table 5. As in Table 4, but for P values of TU-Net, five dynamical models, and NMME (see Figs. 9b,c). The last row denotes the P values of the skills difference between the TU-Net and each of the dynamical models.
Fig. 10. As in Fig. 8, but for the predictions of the MLYR precipitation anomalies in August.

In addition to the comparisons with NUIST-CFS1.0 and the four models participating in the NMME project, the U.S. National Weather Service’s National Centers for Environmental Prediction Climate Forecast System, version 2 (NCEP-CFSv2), and Japan Meteorological Agency (JMA)-CFSv2 are also compared using the same evaluation approach (Fig. 11). The TU-Net shows better forecast skills than NCEP and JMA in the summer months except June during 1993–2016 (the common hindcast period of the two dynamical models). The RMSEw score of the TU-Net in July decreases by 1.9% and 1.3% relative to NCEP-CFSv2 and JMA-CFSv2, respectively, and the PCC and Cor skills improve by 101% and 45% relative to JMA-CFSv2, which has higher skills than NCEP-CFSv2. Similarly, the RMSEw score of the TU-Net in August is slightly lower than that of JMA-CFSv2, but its PCC and Cor skills are 15% and 820% higher, respectively.

Fig. 11. The summer precipitation prediction skills of the TU-Net, NCEP, and JMA dynamical model forecasts at 4-month lead during the period 1993–2016: (a) RMSEw values between the forecast and observed precipitation anomalies (mm·day−1) over the MLYR region, (b) as in (a), but for the 24-yr average of PCC skills, and (c) as in (a), but for the Cor skills.

4. Conclusions and discussion

The MLYR region has suffered from large summer precipitation variations, including extreme flooding in 2020 and severe hot and dry conditions in 2022. Because of the complex mechanisms and stochastic signals influencing the summer precipitation, a skillful seasonal forecast of the MLYR summer precipitation is very challenging. Given the great social importance of summer precipitation, even a modest improvement in its predictive skill is valuable. In this study, we proposed a TU-Net deep learning approach and demonstrated its prominent skills in predicting the WNPSH and precipitation anomalies over the MLYR region at a 4-month lead.

The TU-Net features a novel design that utilizes two U-Net models. One model is used to predict the WNPSH at a 4-month lead, while the other predicts precipitation by taking the contemporaneous WNPSH predictions along with the additional input of oceanic heat content and surface temperature at a 4-month lead. For the prediction of the WNPSH, the TU-Net achieves RMSEw scores for the 500-hPa winds and 850-hPa meridional winds in JJA, the 850-hPa zonal winds in August, and the 850-hPa geopotential height in July that are close to or even better than those of the dynamical climate models. For the precipitation prediction, the TU-Net produces lower RMSEw scores and higher PCC and Cor skills in the JJA season than the dynamical models do. The PCC and Cor skills of the TU-Net are significantly improved in comparison with NMME, CanSIPSv2, and GEM-NEMO. Relative to CanSIPS-IC3, the Cor skill of the TU-Net also shows a statistically significant increase. The results also show that summer precipitation over the MLYR region has a significant correlation with the WNPSH, oceanic heat content, and surface temperature (figure not shown), which provides potential seasonal predictability of the MLYR precipitation in boreal summer.

Apart from using the WNPSH prediction model to predict precipitation anomalies, we also used it to construct a temperature prediction model with the same structure as the precipitation prediction model, except that the predictand becomes 2-m air temperature anomalies. For the prediction of air temperature, the TU-Net achieves the lowest RMSEw values in each summer month, the highest PCC scores in June, July, and August, and the highest Cor scores in June, August, and JJA (Fig. 12). In addition, the TU-Net achieves positive ACC skills of up to 0.5 and lower-than-one RMSEn values over most parts of the MLYR region in JJA, with a PCC score of 0.22 and a Cor score of 0.59 during 1982–2020. The overall prediction skills of the TU-Net are superior to those of the five dynamical models (Figs. 13 and 14).

Fig. 12. As in Fig. 9, but for the prediction skills of 2-m air temperature anomalies over the MLYR region (°C).

Fig. 13. As in Fig. 6, but for the prediction skills of JJA mean 2-m air temperature anomalies (°C) at a 4-month lead.

Fig. 14. As in Fig. 8, but for the predictions of JJA mean 2-m air temperature anomalies (°C).

In conclusion, the improved prediction of summer precipitation and 2-m air temperature anomalies over the MLYR region suggests that the two-step deep learning approach, which follows the prevailing idea of foundation models, works successfully. This TU-Net approach can be extended in the future to predict precipitation at a higher horizontal resolution and other climate variables as well; for example, varying the number of convolutional layers can accommodate a higher resolution. The average running time of the monthly WNPSH prediction model is 32 s, and that of the monthly precipitation prediction model is 10 s. Hence, the total time required to predict the precipitation in the MLYR region for a single month is 42 s, which greatly reduces the computation time and improves efficiency. Our results suggest that the cheap and fast TU-Net provides a promising and efficient way for climate forecasts in the future.

Deep learning has recently made rapid progress in forecasting various phenomena in Earth’s climate system, but there is still little research on its applications in climate prediction (e.g., Reichstein et al. 2019; Ham et al. 2019). The comparable (and even better) skills of the TU-Net relative to the dynamical climate models indicate the great potential of deep learning methods for climate forecasts in the future. However, there are some caveats in our method. For example, six circulation variables are clearly not enough to fully represent the WNPSH system; the prediction skill of the WNPSH prediction model is not significantly better than that of the dynamical models; the accuracy of the predicted precipitation and air temperature is still not high enough; and the two sub-models have to be trained separately, which is inconvenient. Hence, the results of the WNPSH prediction model may be limited in applications that involve a variety of climate variables. It would be inspiring if a large data-driven and physics-informed ensemble model, rather than a small WNPSH prediction model, could be constructed to predict climatic elements such as precipitation, temperature, and winds. This certainly warrants future studies.

Acknowledgments.

This work is supported by the National Key Research and Development Program of China (Grant 2020YFA0608000), National Natural Science Foundation of China (Grant 42030605), and Alibaba Group through Alibaba Innovative Research Program. We acknowledge the High-Performance Computing of Nanjing University of Information Science and Technology for their support of this work.

Data availability statement.

Data related to this paper can be downloaded from the following public domain resources: CMIP6 (https://esgf-node.llnl.gov/projects/cmip6/); SODA, version 2.2.4 (https://climatedataguide.ucar.edu/climate-data/soda-simple-ocean-data-assimilation); NMME phase 1 (https://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/); NOAA 20CR (https://psl.noaa.gov/data/gridded/data.20thC_ReanV3.html); NCEP Reanalysis 1 (https://psl.noaa.gov/data/gridded/data.ncep.reanalysis.html).

REFERENCES

  • Bett, P. E., G. M. Martin, N. Dunstone, A. A. Scaife, H. E. Thornton, and C. Li, 2021: Seasonal rainfall forecasts for the Yangtze River basin in the extreme summer of 2020. Adv. Atmos. Sci., 38, 2212–2220, https://doi.org/10.1007/s00376-021-1087-x.

  • Bommasani, R., and Coauthors, 2021: On the opportunities and risks of foundation models. arXiv, 2108.07258v3, https://doi.org/10.48550/arXiv.2108.07258.

  • Caruana, R., 1997: Multitask learning. Mach. Learn., 28, 41–75, https://doi.org/10.1023/A:1007379606734.

  • Chen, P., M. P. Hoerling, and R. M. Dole, 2001: The origin of the subtropical anticyclones. J. Atmos. Sci., 58, 1827–1835, https://doi.org/10.1175/1520-0469(2001)058<1827:TOOTSA>2.0.CO;2.

  • Drosdowsky, W., and L. E. Chambers, 2001: Near-global sea surface temperature anomalies as predictors of Australian seasonal rainfall. J. Climate, 14, 1677–1687, https://doi.org/10.1175/1520-0442(2001)014<1677:NACNGS>2.0.CO;2.

  • Ham, Y.-G., J.-H. Kim, and J.-J. Luo, 2019: Deep learning for multi-year ENSO forecasts. Nature, 573, 568–572, https://doi.org/10.1038/s41586-019-1559-7.

  • He, C., T. Zhou, and B. Wu, 2015: The key oceanic regions responsible for the interannual variability of the western North Pacific subtropical high and associated mechanisms. J. Meteor. Res., 29, 562–575, https://doi.org/10.1007/s13351-015-5037-3.

  • He, J.-Y., J.-Y. Wu, and J.-J. Luo, 2020: Introduction to climate forecast system version 1.0 of Nanjing University of Information Science and Technology (in Chinese). Daqi Kexue Xuebao, 43, 128–143.

  • Hoskins, B., 1996: On the existence and strength of the summer subtropical anticyclones: Bernhard Haurwitz Memorial Lecture. Bull. Amer. Meteor. Soc., 77, 1287–1292, https://doi.org/10.1175/1520-0477-77.6.1279.

  • Huang, S.-S., 1963: A study of the longitudinal movement and its forecasting of subtropical anticyclones (in Chinese). Acta Meteor. Sin., 3, 320–332, https://doi.org/10.11676/qxxb1963.030.

  • Jin, W., Y. Luo, T. Wu, X. Huang, W. Xue, and C. Yu, 2022: Deep learning for seasonal precipitation prediction over China. J. Meteor. Res., 36, 271–281, https://doi.org/10.1007/s13351-022-1174-7.

  • Karpatne, A., and V. Kumar, 2017: Big data in climate: Opportunities and challenges for machine learning. 23rd Int. Conf. on Knowledge Discovery Data Mining, Halifax, NS, Canada, Association for Computing Machinery, 21–22, https://doi.org/10.1145/3097983.3105810.

  • Kashinath, K., and Coauthors, 2021: Physics-informed machine learning: Case studies for weather and climate modelling. Philos. Trans. Roy. Soc., A379, 20200093, https://doi.org/10.1098/rsta.2020.0093.

  • Kramer, K., and J. Ware, 2020: Counting the cost 2020: A year of climate breakdown. Christian Aid Rep., 26 pp., http://www.indiaenvironmentportal.org.in/files/file/Counting%20the%20cost%202020.pdf.

  • LeCun, Y., Y. Bengio, and G. Hinton, 2015: Deep learning. Nature, 521, 436–444, https://doi.org/10.1038/nature14539.

  • Ling, F., J.-J. Luo, Y. Li, T. Tang, L. Bai, W. Ouyang, and T. Yamagata, 2022: Multi-task machine learning improves multi-seasonal prediction of the Indian Ocean Dipole. Nat. Commun., 13, 7681, https://doi.org/10.1038/s41467-022-35412-0.

  • Liu, S.-Q., 2022: Combined intensity of heat wave events has reached the strongest since 1961 according to BCC. China Meteorological New Press, accessed 21 August 2022, http://www.cma.gov.cn/en2014/news/News/202208/t20220821_5045788.html.

  • Liu, Y.-Y., W.-J. Li, W.-X. Ai, and Q.-Q. Li, 2012: Reconstruction and application of the monthly western Pacific subtropical high indices. J. Appl. Meteor. Sci., 23, 414–423.

  • Loo, Y. Y., L. Billa, and A. Singh, 2015: Effect of climate change on seasonal monsoon in Asia and its impact on the variability of monsoon rainfall in Southeast Asia. Geosci. Front., 6, 817–823, https://doi.org/10.1016/j.gsf.2014.02.009.

  • Lu, R., and B. Dong, 2001: Westward extension of North Pacific subtropical high in summer. J. Meteor. Soc. Japan, 79, 1229–1241, https://doi.org/10.2151/jmsj.79.1229.

  • Luo, J.-J., S. Masson, S. K. Behera, and T. Yamagata, 2008: Extended ENSO predictions using a fully coupled ocean–atmosphere model. J. Climate, 21, 84–93, https://doi.org/10.1175/2007JCLI1412.1.

  • Luo, J.-J., C. Yuan, W. Sasaki, S. K. Behera, Y. Masumoto, T. Yamagata, J.-Y. Lee, and S. Masson, 2016: Current status of intraseasonal-seasonal-to-interannual prediction of the Indo-Pacific climate. Indo-Pacific Climate Variability and Predictability, S. K. Behera and T. Yamagata, Eds., World Scientific, 63–107, https://doi.org/10.1142/9789814696623_0003.

  • Miyasaka, T., and H. Nakamura, 2005: Structure and formation mechanisms of the Northern Hemisphere summertime subtropical highs. J. Climate, 18, 5046–5065, https://doi.org/10.1175/JCLI3599.1.

  • Mouatadid, S., and Coauthors, 2021: Learned benchmarks for subseasonal forecasting. arXiv, 2109.10399v2, https://doi.org/10.48550/arXiv.2109.10399.

  • Radford, A., L. Metz, and S. Chintala, 2015: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv, 1511.06434v2, https://doi.org/10.48550/arXiv.1511.06434.

  • Rasp, S., P. D. Dueben, S. Scher, J. A. Weyn, S. Mouatadid, and N. Thuerey, 2020: WeatherBench: A benchmark data set for data-driven weather forecasting. J. Adv. Model. Earth Syst., 12, e2020MS002203, https://doi.org/10.1029/2020MS002203.

  • Reichstein, M., G. Camps-Valls, B. Stevens, M. Jung, J. Denzler, N. Carvalhais, and Prabhat, 2019: Deep learning and process understanding for data-driven Earth system science. Nature, 566, 195–204, https://doi.org/10.1038/s41586-019-0912-1.

  • Rodwell, M. J., and B. J. Hoskins, 2001: Subtropical anticyclones and summer monsoons. J. Climate, 14, 3192–3211, https://doi.org/10.1175/1520-0442(2001)014<3192:SAASM>2.0.CO;2.

  • Ronneberger, O., P. Fischer, and T. Brox, 2015: U-Net: Convolutional networks for biomedical image segmentation. 18th Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, N. Navab et al., Eds., Springer, 234–241, https://doi.org/10.1007/978-3-319-24574-4_28.

  • Schepen, A., Q.-J. Wang, and D. Robertson, 2012: Evidence for using lagged climate indices to forecast Australian seasonal rainfall. J. Climate, 25, 1230–1246, https://doi.org/10.1175/JCLI-D-11-00156.1.

  • Schultz, M. G., C. Betancourt, B. Gong, F. Kleiner, M. Langguth, L. H. Leufen, A. Mozaffari, and S. Stadtler, 2021: Can deep learning beat numerical weather prediction? Philos. Trans. Roy. Soc., A379, 20200097, https://doi.org/10.1098/rsta.2020.0097.

  • Seager, R., R. Murtugudde, N. Naik, A. Clement, N. Gordon, and J. Miller, 2003: Air–sea interaction and the seasonal cycle of the subtropical anticyclones. J. Climate, 16, 1948–1966, https://doi.org/10.1175/1520-0442(2003)016<1948:AIATSC>2.0.CO;2.

  • Sheshadri, A., M. Borrus, M. Yoder, and T. Robinson, 2021: Midlatitude error growth in atmospheric GCMs: The role of eddy growth rate. Geophys. Res. Lett., 48, e2021GL096126, https://doi.org/10.1029/2021GL096126.

  • Shi, X., Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-C. Woo, 2015: Convolutional LSTM network: A machine learning approach for precipitation nowcasting. arXiv, 1506.04214v2, https://doi.org/10.48550/arXiv.1506.04214.

  • Shi, X., Z. Gao, L. Lausen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-C. Woo, 2017: Deep learning for precipitation nowcasting: A benchmark and a new model. arXiv, 1706.03458v2, https://doi.org/10.48550/arXiv.1706.03458.

  • Singh, M., S. B. Vaisakh, N. Acharya, A. Grover, S. A. Rao, B. Kumar, Z.-L. Yang, and D. Niyogi, 2022: Short-range forecasts of global precipitation using deep learning-augmented numerical weather prediction. arXiv, 2206.11669v3, https://doi.org/10.48550/arXiv.2206.11669.

  • Tao, L., X. He, J. Li, and D. Yang, 2021a: A multiscale long short-term memory model with attention mechanism for improving monthly precipitation prediction. J. Hydrol., 602, 126815, https://doi.org/10.1016/j.jhydrol.2021.126815.

  • Tao, L., X. He, and J. Qin, 2021b: Multiscale teleconnection analysis of monthly total and extreme precipitations in the Yangtze River basin using ensemble empirical mode decomposition. Int. J. Climatol., 41, 348–373, https://doi.org/10.1002/joc.6624.

  • Tao, S.-Y., and S.-Y. Xu, 1962: Some aspects of the circulation during the periods of the persistent drought and flood in Yangtze and Hwai-Ho valleys in summer (in Chinese). Acta Meteor. Sin., 32, 1–10, https://doi.org/10.11676/qxxb1962.001.

  • Tao, S.-Y., and K.-F. Zhu, 1964: The 100-mb flow patterns in southern Asia in summer and its relation to the advance and retreat of the west-Pacific subtropical anticyclone over the Far East (in Chinese). Acta Meteor. Sin., 34, 387–396, https://doi.org/10.11676/qxxb1964.039.

  • Ting, M., 1994: Maintenance of northern summer stationary waves in a GCM. J. Atmos. Sci., 51, 3286–3308, https://doi.org/10.1175/1520-0469(1994)051<3286:MONSSW>2.0.CO;2.

  • Venkata Ramana, R., B. Krishna, S. R. Kumar, and N. G. Pandey, 2013: Monthly rainfall prediction using wavelet neural network analysis. Water Resour. Manage., 27, 3697–3711, https://doi.org/10.1007/s11269-013-0374-4.

  • Wang, B., and Z. Fan, 1999: Choice of South Asian summer monsoon indices. Bull. Amer. Meteor. Soc., 80, 629–638, https://doi.org/10.1175/1520-0477(1999)080<0629:COSASM>2.0.CO;2.

  • Wang, B., Z. Wu, J. Li, J. Liu, C.-P. Chang, Y. Ding, and G. Wu, 2008: How to measure the strength of the East Asian summer monsoon. J. Climate, 21, 4449–4463, https://doi.org/10.1175/2008JCLI2183.1.

  • Wang, B., B. Xiang, and J.-Y. Lee, 2013: Subtropical high predictability establishes a promising way for monsoon and tropical storm predictions. Proc. Natl. Acad. Sci. USA, 110, 2718–2722, https://doi.org/10.1073/pnas.1214626110.

    • Search Google Scholar
    • Export Citation
  • Wu, G., and Y. Liu, 2003: Summertime quadruplet heating pattern in the subtropics and the associated atmospheric circulation. Geophys. Res. Lett., 30, 1201, https://doi.org/10.1029/2002GL016209.

    • Search Google Scholar
    • Export Citation
  • Yang, J., Q. Liu, S.-P. Xie, Z. Liu, and L. Wu, 2007: Impact of the Indian Ocean SST basin mode on the Asian summer monsoon. Geophys. Res. Lett., 34, L02708, https://doi.org/10.1029/2006GL028571.

    • Search Google Scholar
    • Export Citation
  • Ye, D.-Z., and G.-X. Wu, 1998: The role of the heat source of the Tibetan Plateau in the general circulation. Meteor. Atmos. Phys., 67, 181198, https://doi.org/10.1007/BF01277509.

    • Search Google Scholar
    • Export Citation
  • Ying, W., H. Yan, and J.-J. Luo, 2022: Seasonal predictions of summer precipitation in the middle-lower reaches of the Yangtze River with global and regional models based on NUIST-CFS1.0. Adv. Atmos. Sci., 39, 15611578, https://doi.org/10.1007/s00376-022-1389-7.

    • Search Google Scholar
    • Export Citation
  • Yosinski, J., J. Clune, Y. Bengio, and H. Lipson, 2014: How transferable are features in deep neural networks? Adv. Neural Inf. Process. Syst., 27, 33203328.

    • Search Google Scholar
    • Export Citation
  • Yun, K.-S., K.-H. Seo, and K.-J. Ha, 2008: Relationship between ENSO and northward propagating intraseasonal oscillation in the East Asian summer monsoon system. J. Geophys. Res., 113, D14120, https://doi.org/10.1029/2008JD009901.

    • Search Google Scholar
    • Export Citation
  • Zhao, Y.-F., J. Zhu, and Y. Xu, 2014: Establishment and assessment of the grid precipitation datasets in China for recent 50 years (in Chinese). J. Meteor. Sci., 34, 414420.

    • Search Google Scholar
    • Export Citation
  • Zhao, Y.-S., C. Liu, and Q. Du, 2022: Yangtze River basin faces severe drought after experiencing lowest rainfall for six decades. Global Times, 17 August, https://www.globaltimes.cn/page/202208/1273248.shtml.

  • Zhou, B., and H. Wang, 2006: Relationship between the boreal spring Hadley circulation and the summer precipitation in the Yangtze River valley. J. Geophys. Res., 111, D16109, https://doi.org/10.1029/2005JD007006.

    • Search Google Scholar
    • Export Citation
  • Zhou, Z.-Q., S.-P. Xie, and R. Zhang, 2021: Historic Yangtze flooding of 2020 tied to extreme Indian Ocean conditions. Proc. Natl. Acad. Sci. USA, 118, e2022255118, https://doi.org/10.1073/pnas.2022255118.

    • Search Google Scholar
    • Export Citation
  • Zhu, J., F. Kong, L. Ran, and H. Lei, 2015: Bayesian model averaging with stratified sampling for probabilistic quantitative precipitation forecasting in northern China during summer 2010. Mon. Wea. Rev., 143, 36283641, https://doi.org/10.1175/MWR-D-14-00301.1.

  • Fig. 1.

    The architecture of the TU-Net used for the precipitation forecast: (a) flowchart of the TU-Net and (b),(c) inner structures of the two U-Net models in the TU-Net. The (x, y, z) sizes are given in parentheses at the top or bottom edge of each box. Z500, U500, V500, Z850, U850, and V850 denote geopotential height and zonal and meridional winds at the 500- and 850-hPa levels, respectively. An illustrative code sketch of this two-step wiring is given after the figure list.

  • Fig. 2.

    Comparison of 4-month lead precipitation (mm·day−1) prediction skills over the MLYR region (108°–125°E, 25.31°–34.31°N) among the TU-Net (red), the two-step U-Net without the surface temperature (ST) and oceanic heat content (HC) inputs (orange), and the one-step U-Net (yellow) in the JJA season during the test period of 1982–2020. (a) Mean latitude-weighted root-mean-square error between the predictions of the three models and the observational data. (b) As in (a), but for averaged pattern correlation coefficient skills during 1982–2020. (c) As in (a), but for the correlation coefficient between the time series of the predicted and observed precipitation index (i.e., Cor skills). An illustrative sketch of these skill metrics is given after the figure list.

  • Fig. 3.

    (a)–(d) Correlation coefficients of precipitation anomalies (mm·day−1) between the predictions and the observations in the ablation study for June, July, August, and JJA. Stippling indicates that the correlation exceeds the 95% confidence level. (e)–(h) As in (a)–(d), but for RMSE values divided by the standard deviation of observations (RMSEn).

  • Fig. 4.

    Comparison of the 4-month lead WNPSH prediction skills (i.e., February predicts June and JJA, March predicts July, and April predicts August) over 80°–165°E, 7.5°–47.5°N among the TU-Net and the five dynamical climate models in JJA season, showing RMSEw of the six predicted WNPSH variables, including geopotential height (gpm) and zonal and meridional winds (m·s−1) at the 500- and 850-hPa levels, as labeled.

  • Fig. 5.

    Correlation coefficients between the forecast and observed anomalies of WNPSH variables including (top) geopotential height (gpm) and (middle) zonal and (bottom) meridional winds (m·s−1) at the (a)–(c) 500- and (d)–(f) 850-hPa levels. Stippling indicates that the correlation exceeds the 95% confidence level.

  • Fig. 6.

    The prediction skills of JJA mean precipitation at a 4-month lead (i.e., initiated from February) over the MLYR region based on the TU-Net, NUIST-CFS1.0, and NMME models: (a)–(c) correlation coefficients between the forecast and observed anomalies of JJA mean precipitation (mm·day−1), and (d)–(f) as in (a)–(c), but for RMSEn values. Also shown are the differences in ACC skill between the TU-Net and the dynamical models, i.e., the TU-Net minus (g) NUIST-CFS1.0 and (h) the NMME model, and (i),(j) as in (g) and (h), but for the differences in RMSEn. Stippling indicates that the correlation exceeds the 90% confidence level.

  • Fig. 7.

    As in Fig. 6, but for the August precipitation prediction skills at a 4-month lead (initiated from April).

  • Fig. 8.

    (a) Time series of PCC skills of forecast JJA mean precipitation anomalies (mm·day−1) at a 4-month lead over the MLYR region during the period 1982–2020 for the TU-Net, NUIST-CFS1.0, and the NMME model. The PCC skills averaged over the whole period for the TU-Net, NUIST-CFS1.0, the NMME model, and each model in the NMME project are indicated in the upper-left corner of (a). (b) As in (a), but for the time series of the MLYR precipitation index (mm·day−1) based on all of the models’ predictions and observations. The Cor skills are also presented in the upper-left corner of (b).

  • Fig. 9.

    (a) RMSEw of the MLYR precipitation anomalies (mm·day−1) for the TU-Net and the dynamical models in the JJA season during the period 1982–2020. (b) As in (a), but for the 39-yr average of PCC skills. (c) As in (a), but for the Cor skills.

  • Fig. 10.

    As in Fig. 8, but for the predictions of the MLYR precipitation anomalies in August.

  • Fig. 11.

    The summer precipitation prediction skills of the TU-Net, NCEP, and JMA dynamical model forecasts at a 4-month lead during the period 1993–2016: (a) RMSEw values between the forecast and observed precipitation anomalies (mm·day−1) over the MLYR region, (b) as in (a), but for the 24-yr average of PCC skills, and (c) as in (a), but for the Cor skills.

  • Fig. 12.

    As in Fig. 9, but for the prediction skills of 2-m air temperature anomalies over the MLYR region (°C).

  • Fig. 13.

    As in Fig. 6, but for the prediction skills of JJA mean 2-m air temperature anomalies (°C) at a 4-month lead.

  • Fig. 14.

    As in Fig. 8, but for the predictions of JJA mean 2-m air temperature anomalies (°C).
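
The two-step wiring sketched in Fig. 1 can be illustrated with a few lines of code. The snippet below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the encoder/decoder depth, channel counts, grid size, and the names TinyUNet and TwoStepNet are placeholder assumptions. It only fixes the data flow, i.e., a first U-Net maps the predictor fields to the six WNPSH variables (Z500, U500, V500, Z850, U850, V850), and a second U-Net takes those predicted fields together with additional inputs (e.g., HC and ST) to produce the precipitation field.

```python
# Minimal illustrative sketch of the two-step design in Fig. 1 (assumed shapes
# and layer sizes; not the configuration used in the paper).
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A toy U-Net with a single down/up level and one skip connection."""
    def __init__(self, in_ch, out_ch, base=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(base, out_ch, 1))

    def forward(self, x):
        e = self.enc(x)                                # encoder features
        d = self.up(self.down(e))                      # bottleneck + upsampling
        return self.dec(torch.cat([d, e], dim=1))      # skip connection

class TwoStepNet(nn.Module):
    """Chain two U-Nets: predictors -> WNPSH fields -> precipitation."""
    def __init__(self, n_predictors, n_extra):
        super().__init__()
        self.wnpsh_net = TinyUNet(n_predictors, 6)     # 6 WNPSH output fields
        self.precip_net = TinyUNet(6 + n_extra, 1)     # precipitation output

    def forward(self, predictors, extra):
        wnpsh = self.wnpsh_net(predictors)
        precip = self.precip_net(torch.cat([wnpsh, extra], dim=1))
        return wnpsh, precip

# Example with placeholder shapes: batch of 4, 10 predictor channels,
# 2 extra channels, on a 32 x 64 grid.
model = TwoStepNet(n_predictors=10, n_extra=2)
wnpsh, precip = model(torch.randn(4, 10, 32, 64), torch.randn(4, 2, 32, 64))
print(wnpsh.shape, precip.shape)   # (4, 6, 32, 64), (4, 1, 32, 64)
```

Returning both outputs makes it possible to verify the intermediate WNPSH prediction (as in Figs. 4 and 5) as well as the final precipitation field.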
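
Several skill measures recur in the captions above: the latitude-weighted RMSE (RMSEw), the RMSE normalized by the observed standard deviation (RMSEn), the pattern correlation coefficient (PCC), and the temporal correlation of the MLYR precipitation index (Cor). The NumPy sketch below is a minimal illustration of how such metrics are commonly computed; the cos(latitude) weighting and the function and argument names are assumptions rather than a description of the authors' exact formulas.

```python
# Illustrative NumPy sketch of the skill metrics referenced in Figs. 2-12.
# Assumed shapes: single maps are (nlat, nlon); yearly stacks are (nyear, nlat, nlon).
import numpy as np

def lat_weights(lat, shape):
    """cos(latitude) weights broadcast over a (nlat, nlon) field."""
    return np.broadcast_to(np.cos(np.deg2rad(lat))[:, None], shape)

def rmse_w(pred, obs, lat):
    """Latitude-weighted RMSE between two anomaly maps (RMSEw)."""
    w = lat_weights(lat, pred.shape)
    return np.sqrt(np.average((pred - obs) ** 2, weights=w))

def rmse_n(pred, obs):
    """Grid-point RMSE over the years, divided by the observed std (RMSEn)."""
    rmse = np.sqrt(np.mean((pred - obs) ** 2, axis=0))
    return rmse / obs.std(axis=0)

def pcc(pred, obs, lat):
    """Latitude-weighted pattern correlation between two anomaly maps (PCC)."""
    w = lat_weights(lat, pred.shape)
    pa = pred - np.average(pred, weights=w)
    oa = obs - np.average(obs, weights=w)
    cov = np.average(pa * oa, weights=w)
    return cov / np.sqrt(np.average(pa ** 2, weights=w) *
                         np.average(oa ** 2, weights=w))

def cor(pred_index, obs_index):
    """Temporal correlation of an area-mean index time series (Cor)."""
    return np.corrcoef(pred_index, obs_index)[0, 1]

# Toy usage with random anomaly fields on a coarse MLYR-like grid.
rng = np.random.default_rng(0)
lat = np.linspace(25.31, 34.31, 10)                 # degrees north
pred, obs = rng.standard_normal((2, 39, 10, 18))    # 39 years, 10 x 18 grid
print(rmse_w(pred[0], obs[0], lat), pcc(pred[0], obs[0], lat))
print(rmse_n(pred, obs).shape, cor(pred.mean((1, 2)), obs.mean((1, 2))))
```

As used in the figures, PCC is evaluated on each year's anomaly map and then averaged over the verification period, whereas Cor is applied to the time series of the MLYR-mean precipitation index.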