1. Introduction
Natural hazards caused by heavy-precipitation events, e.g., floods, mudslides, and urban waterlogging, have major socioeconomic implications and can cause huge losses (Zhang et al. 2015; Surcel et al. 2017; Zhang and Zhou 2020). As demonstrated by recent studies, the potential for such events to occur is increasing globally (Myhre et al. 2019; Sun et al. 2021; Luo et al. 2022; Xu et al. 2022a). As a result, the demand for high-quality precipitation forecasts to establish early-warning systems and provide emergency services is on the rise (Liu et al. 2018). Despite the remarkable advancements in numerical weather prediction (NWP) in recent decades (Bauer et al. 2015), producing accurate deterministic forecasts of precipitation patterns and intensities remains a challenge (Ran et al. 2018). This is due to the difficulties of handling non-Gaussian data in data assimilation (Sun et al. 2016), imperfect parameterizations of clouds and precipitation (Hong et al. 2004), and the highly nonlinear interactions among the multiscale systems embedded in a precipitation event (Gebhardt et al. 2011; Xu et al. 2022b; Yang et al. 2023).
Ensemble prediction systems (EPSs) are a key tool in weather forecasting, enabling the transition from deterministic to probabilistic forecasting to help generate the full probability density function (PDF) of target variables (Majumdar and Torn 2014; Scheuerer et al. 2017). EPSs allow for objective quantification of forecast uncertainty and warning of potential extreme weather events (Barnston et al. 2003; Palmer et al. 2004; Langmack et al. 2012). However, the forecasts generated by a single EPS can be biased and underdispersed due to imperfect model configurations (Buizza et al. 2005; Feng et al. 2020). To address this, postprocessing is required to improve the accuracy of precipitation probability forecasts (Gneiting 2014; Williams et al. 2014; Zhang et al. 2021). Two commonly used parametric methods for postprocessing are Bayesian model averaging (BMA; Raftery et al. 2005; Ji et al. 2019) and ensemble model output statistics (EMOS; Gneiting et al. 2005; Peng et al. 2020). BMA generates a weighted-average PDF by mixing the prior kernel of each individual ensemble according to its prediction skill over a rolling training period (Raftery et al. 2005), while EMOS produces a single parametric PDF by linear projection of the raw ensembles, without predefining the ensemble kernels (Gneiting et al. 2005).
Both BMA and EMOS require assumptions about the statistical distribution of the target variable (Sloughter et al. 2007; Hemri et al. 2014), which is challenging for precipitation, whose distribution is nonnegative and skewed (Liu and Xie 2014). The PDF of precipitation is also discontinuous at zero, which is usually addressed by using a piecewise or affine function (Fraley et al. 2010; Scheuerer and Möller 2015). To handle these properties, a BMA method with a mixed gamma distribution was developed by Sloughter et al. (2007), and further improvements were made through censored generalized extreme value (GEV; Scheuerer and Möller 2015) and censored and shifted gamma (CSG; Scheuerer and Hamill 2015) EMOS modeling. Two commonly used scoring rules to estimate the parameters in BMA and EMOS are the continuous ranked probability score (CRPS; Hersbach 2000) and the ignorance score (IGN; Gneiting and Raftery 2007). In terms of CRPS and IGN, CSG-EMOS is considered the best-performing model for precipitation postprocessing (Baran and Nemoda 2016; Scheuerer et al. 2017).
Despite the impressive advancements in the application of BMA and EMOS for precipitation forecasting, studies have shown limited accuracy for heavy-precipitation events (Liu and Xie 2014; Scheuerer and Möller 2015). This is due to an imbalance of sample size between heavy-precipitation events and light or nonprecipitation events within the rolling training period (Ravuri et al. 2021), leading to an underestimation of potential heavy-precipitation events. To address this issue, Ji et al. (2019) proposed a conditional BMA method for precipitation probability forecasting, which split the training samples according to the predicted precipitation intensity by the raw ensemble mean and established conditional BMA models for light-, moderate-, and heavy-precipitation events. Inspired by this approach, we extended the CSG-EMOS model to a conditional one in this paper, with the goal of enhancing its accuracy in precipitation probability forecasting, especially for heavy-precipitation events.
The rest of the paper is structured as follows: section 2 covers the datasets used, data preprocessing, and provides an overview of the CSG-EMOS and Cond-CSG-EMOS methods, as well as the evaluation metrics. The main results of the postprocessing for probabilistic precipitation forecasting are presented in section 3. Finally, summaries and a further discussion are given in section 4.
2. Data and methods
a. EPS data and observations
Ensemble forecasts of 24-h accumulated precipitation produced by the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centers for Environmental Prediction (NCEP), and the Met Office (UKMO) were collected as the raw EPS data in this paper. The ensembles were obtained from the Interactive Grand Global Ensemble (TIGGE; Bougeault et al. 2010; Swinbank et al. 2016) dataset with an initial time of 1200 UTC and lead times of 24, 48, and 72 h. In each single run, ECMWF, NCEP, and UKMO produced 50, 20, and 17 perturbed members, respectively, with a spatial resolution of 1° × 1°. The whole study period covered 1 March–31 August 2018, and the study region spanned China (15°–59°N, 70°–140°E). Four subregions were delineated according to the characteristics of climate and precipitation: Northwest China (NW), the Tibetan Plateau (TP), North China (NC), and South China (SC), as shown in Fig. 1. The ensemble forecasts used in this paper were downloaded from the ECMWF portal.
Hourly precipitation products merged from rain gauge data at automatic weather stations, radar quantitative precipitation estimates, and National Oceanic and Atmospheric Administration Climate Prediction Center morphing technique (CMORPH) satellite-retrieved precipitation products (Shen et al. 2014) were used as the ground truth. The hourly precipitation data were accumulated every 24 h (1200–1200 UTC). The observations spanned 1 March–31 August 2018 and covered the same study region as the EPS forecasts, but with a higher spatial resolution of 0.1° × 0.1°. Bilinear interpolation was used to remap the observations to match the spatiotemporal resolution of the EPS ensembles. The merged precipitation products were obtained from the China Meteorological Data Service Centre.
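The remapping step can be sketched as follows. This is a minimal illustration, not the operational pipeline: the grids match those described above, but the synthetic precipitation field, random seed, and variable names are assumptions for demonstration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic stand-in for a 0.1-degree 24-h accumulated precipitation analysis
# over the study region (15-59N, 70-140E); values in mm/day.
obs_lat = np.round(np.arange(15.0, 59.01, 0.1), 1)
obs_lon = np.round(np.arange(70.0, 140.01, 0.1), 1)
rng = np.random.default_rng(0)
obs = rng.gamma(shape=0.5, scale=4.0, size=(obs_lat.size, obs_lon.size))

# Target 1-degree EPS grid.
eps_lat = np.arange(15.0, 60.0)   # 15, 16, ..., 59
eps_lon = np.arange(70.0, 141.0)  # 70, 71, ..., 140

# Bilinear ("linear" in 2D) interpolation from the fine grid onto the EPS grid.
interp = RegularGridInterpolator((obs_lat, obs_lon), obs, method="linear")
lat2d, lon2d = np.meshgrid(eps_lat, eps_lon, indexing="ij")
points = np.column_stack([lat2d.ravel(), lon2d.ravel()])
obs_on_eps = interp(points).reshape(lat2d.shape)  # shape (45, 71)
```

In practice, conservative (area-weighted) remapping is sometimes preferred for accumulated precipitation; bilinear interpolation is used here because it is the method stated in the text.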
b. CSG EMOS
As discussed in section 1, CSG-EMOS serves as an up-to-date parametric method for the statistical postprocessing of ensemble precipitation forecasts (Baran and Nemoda 2016; Scheuerer et al. 2017). In this paper, we implemented the CSG-EMOS model as the baseline following the work of Scheuerer and Hamill (2015). CSG-EMOS is a variant of the conventional EMOS method specifically designed for precipitation, whose PDF is discontinuous at zero and follows a nonnegative, skewed distribution (Raftery et al. 2005; Scheuerer and Hamill 2015). The CSG-EMOS scheme aims to generate a single PDF of the target variable that is sharp and well calibrated, based on the raw ensemble forecasts. In our case, the PDF of the precipitation rate is approximately formulated with a right-skewed gamma distribution (Ravuri et al. 2021), which is governed by its shape and scale parameters (Stacy 1962).
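Following Scheuerer and Hamill (2015), the CSG distribution shifts a gamma distribution to the left by a positive amount and censors it at zero, so that a point mass at zero represents dry cases. A minimal sketch of its CDF, with illustrative (not fitted) parameter values:

```python
import numpy as np
from scipy.stats import gamma

def csg_cdf(y, shape, scale, shift):
    """CDF of a censored, shifted gamma (CSG) distribution.

    The underlying Gamma(shape, scale) variable is shifted left by `shift`
    and censored at zero, giving a point mass P(Y = 0) = G(shift) that
    represents the probability of no precipitation."""
    y = np.asarray(y, dtype=float)
    return np.where(y < 0.0, 0.0, gamma.cdf(y + shift, a=shape, scale=scale))

# Illustrative parameters, not values estimated in the paper.
shp, scl, sft = 1.2, 8.0, 0.8
p_dry = csg_cdf(0.0, shp, scl, sft)            # probability of no precipitation
p_heavy = 1.0 - csg_cdf(25.0, shp, scl, sft)   # P(precip > 25 mm/day)
```

In the full EMOS scheme, the shape, scale, and shift are linked to the raw ensemble statistics and estimated by minimizing the mean CRPS over the training period.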
c. Conditional CSG EMOS
CSG-EMOS proves efficient in the postprocessing of ensemble precipitation forecasts. However, previous studies (Baran and Nemoda 2016; Javanshiri et al. 2021) point out that the capability of conventional CSG-EMOS to calibrate heavy-precipitation events is limited. One reason is the skewed distribution of the precipitation rate, which causes a sample-imbalance issue: the sample size of light-precipitation events far exceeds that of heavy ones. Observations show that the proportion of heavy-precipitation samples is much lower than that of nonprecipitation and light-precipitation samples (Fig. 2a). The parameter estimation in the CSG-EMOS scheme is therefore dominated by the abundant nonprecipitation training samples, which can lead to an underestimation of precipitation.
It is noted that the threshold t and the quantile f require adjustments according to the location of the target grid point due to climate variation. For a given subregion, e.g., South China (SC, in Fig. 1), the quantile f is selected as follows. First, the training samples of all the grid points within this subregion are merged and the observed precipitation rates are ranked, as shown in the histogram of Fig. 2a. Then different quantiles (10th, 20th, …, and 90th) of the corresponding ensemble forecasts are calculated to determine which quantile best represents the real observations (Figs. 2b–j). In our case, the root-mean-square error between the PDF of the observed precipitation rates (Fig. 2a) and that of the predicted ones at a given quantile (Figs. 2b–j) is used as the metric to select the best-mapped quantile f. Finally, the ensemble forecast at this best-mapped quantile f is used as the prior information to distinguish the light- and heavy-precipitation samples. Likewise, the threshold t varies with the subregions. The training samples of all the grid points over a given subregion are merged, and the 95th quantile of the observed precipitation rates is used as the default threshold. A sensitivity experiment on the choice of the threshold t is given in the following section to illustrate its influence on the model calibration. The details of the selected thresholds t and best-mapped quantiles f for the four subregions are given in Table 1. For inference, a given testing sample is first classified as a light- or heavy-precipitation event according to its ensemble forecast at the selected quantile f against the threshold t. Then the estimated parameters of the corresponding trained model are used for the calibration.
Table 1. Selected thresholds and best-mapped quantiles for different subregions.
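The selection procedure above can be sketched as follows. The 87-member ensemble, gamma-distributed synthetic samples, and helper name `best_mapped_quantile` are illustrative assumptions standing in for the pooled training samples of one subregion:

```python
import numpy as np

def best_mapped_quantile(ens, obs, quantiles=np.arange(0.1, 1.0, 0.1), bins=20):
    """Pick the ensemble quantile whose pooled forecast histogram best
    matches the observed precipitation histogram (lowest RMSE between
    the two empirical PDFs)."""
    edges = np.histogram_bin_edges(obs, bins=bins)
    h_obs, _ = np.histogram(obs, bins=edges, density=True)
    best_q, best_rmse = quantiles[0], np.inf
    for q in quantiles:
        fq = np.quantile(ens, q, axis=1)            # forecast at quantile q per sample
        h_fc, _ = np.histogram(fq, bins=edges, density=True)
        rmse = np.sqrt(np.mean((h_fc - h_obs) ** 2))
        if rmse < best_rmse:
            best_q, best_rmse = q, rmse
    return best_q

# Synthetic pooled training samples for one subregion: 5000 grid-point/day
# samples with 87 members (50 ECMWF + 20 NCEP + 17 UKMO).
rng = np.random.default_rng(1)
obs = rng.gamma(0.4, 6.0, size=5000)
ens = rng.gamma(0.4, 6.0, size=(5000, 87))

t = np.quantile(obs, 0.95)                  # default threshold: 95th quantile of obs
f = best_mapped_quantile(ens, obs)          # best-mapped quantile
is_heavy = np.quantile(ens, f, axis=1) > t  # prior split for conditional training
```

The boolean mask `is_heavy` would then route each training sample to the light- or heavy-precipitation CSG-EMOS model.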
To distinguish the original CSG-EMOS method from the conditional one we propose, the former is hereafter denoted as CSG-EMOS and the latter as Cond-CSG-EMOS. Both CSG-EMOS and Cond-CSG-EMOS were implemented with the help of the ensembleMOS package in R (Jordan et al. 2017).
d. Verification methods
A reliability diagram is also plotted to assess the reliability and sharpness of the forecasts (WMO 2002). The forecasts are grouped into bins based on the issued probability, and the observed relative frequency of each bin is plotted against the vertical axis. For a perfectly reliable forecast, the forecast probability and the observed frequency are equal, meaning the plotted points lie on the diagonal.
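The binning behind a reliability diagram can be sketched as follows; the synthetic, well-calibrated forecasts and the function name `reliability_points` are illustrative assumptions:

```python
import numpy as np

def reliability_points(prob_fc, obs_event, n_bins=10):
    """Group issued probabilities into bins and return, for each occupied
    bin, the mean forecast probability and the observed relative frequency."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(prob_fc, edges) - 1, 0, n_bins - 1)
    fc_mean, obs_freq = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            fc_mean.append(prob_fc[mask].mean())
            obs_freq.append(obs_event[mask].mean())
    return np.array(fc_mean), np.array(obs_freq)

# Synthetic, perfectly calibrated forecasts: the event occurs with exactly
# the issued probability, so the points should fall near the diagonal.
rng = np.random.default_rng(2)
p = rng.uniform(0.0, 1.0, 20000)
o = (rng.uniform(0.0, 1.0, 20000) < p).astype(float)
fc_mean, obs_freq = reliability_points(p, o)
```

Plotting `obs_freq` against `fc_mean` and comparing with the diagonal reproduces the diagnostic described above.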
3. Results
The performance of the raw single EPSs (ECMWF, NCEP, and UKMO) and the grand ensemble forecast using all the members from the different centers (ENS), as well as the calibrations by CSG-EMOS and Cond-CSG-EMOS for ensemble precipitation forecasting, are presented in this section. It is noted that the input ensemble members for CSG-EMOS and Cond-CSG-EMOS are obtained from all three EPSs: 50 from ECMWF, 20 from NCEP, and 17 from UKMO. In consideration of the climate variation across the study area, we evaluated the model performance in each subregion (NW, TP, NC, and SC), as shown in Fig. 1. The testing period spanned 1 July–31 August 2018.
a. Model performance analysis
Figure 3 presents the model performance of the raw EPSs (ECMWF, NCEP, UKMO, and ENS) and the calibrations produced by the CSG-EMOS and Cond-CSG-EMOS methods in terms of CRPS at lead times of 1–3 days. The CRPSs are averaged over each subregion, and the boxplots show the distribution of the scores for each sample in the testing period. Results show noteworthy differences in precipitation forecast skill between the subregions. The CRPS in South China is substantially higher than in the other subregions, indicating that precipitation events in SC are more difficult to predict accurately because of the heavier summertime precipitation there (Fig. 1). Among the single EPSs, ECMWF is the best-performing model for all lead times and subregions, with lower and slightly narrower CRPS interquartile ranges (boxes in Fig. 3). NCEP and UKMO are comparable, with NCEP outperforming in TP (Fig. 3b) and UKMO performing better in NC (Fig. 3c). The grand ensemble forecast ENS is superior to all the individual EPSs. Both CSG-EMOS and Cond-CSG-EMOS further improve the forecast skill of the raw ensembles (Figs. 3a–d). Though limited in NC at longer lead times, the improvements of CSG-EMOS and Cond-CSG-EMOS are substantial in the NW, TP, and SC subregions. Compared to CSG-EMOS, Cond-CSG-EMOS obtains lower CRPS medians and narrower interquartile ranges in all four subregions and at all lead times, indicating that the proposed conditional model is more skillful and reliable than the traditional one.
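The CRPS values compared above can be estimated directly from an m-member ensemble with the standard sample estimator (Hersbach 2000); a minimal sketch, with the function name and example values as assumptions:

```python
import numpy as np

def crps_ensemble(ens, y):
    """Sample-based CRPS for an m-member ensemble `ens` and a scalar
    observation `y`: E|X - y| - 0.5 * E|X - X'|, lower is better."""
    ens = np.asarray(ens, dtype=float)
    term_obs = np.mean(np.abs(ens - y))                               # accuracy term
    term_spread = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))  # spread term
    return term_obs - term_spread

# A perfect deterministic ensemble scores exactly 0.
crps_ensemble(np.full(87, 3.0), 3.0)  # -> 0.0
```

Averaging this quantity over all grid points and days of the testing period yields the subregional CRPS values summarized in the boxplots.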
A comparison of the models' performance for different precipitation thresholds in terms of BS is given in Fig. 4. The differences across the four subregions are evident, similar to those noted for the CRPSs (Fig. 3). For low-threshold precipitation (∼0.1 mm day−1; Figs. 4a–d), NCEP is superior among the raw ensembles in NW and NC, with lower BSs. However, the performance of NCEP degrades greatly at higher thresholds. ECMWF outperforms the others in TP and SC for thresholds above 10 mm day−1 (Figs. 4f,j,n,h,l,p) and is comparable with UKMO in NW and NC (Figs. 4e,i,m,g,k,o). Both CSG-EMOS and Cond-CSG-EMOS show improvements in calibrating ensemble precipitation forecasts, especially for low-threshold precipitation (Figs. 4a–d). The Cond-CSG-EMOS model further reduces the forecast errors for moderate (>10 mm day−1; Figs. 4e–h) and heavy (>25 mm day−1; Figs. 4i–l) precipitation with respect to CSG-EMOS.
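The BS for a given threshold is computed from exceedance probabilities and binary outcomes; a brief sketch, where the helper names and the two-sample arrays are illustrative assumptions:

```python
import numpy as np

def ensemble_event_prob(ens, threshold):
    """Raw-ensemble probability that precipitation exceeds `threshold`
    (the fraction of members above it)."""
    return np.mean(np.asarray(ens) > threshold, axis=-1)

def brier_score(prob_fc, obs_event):
    """Brier score: mean squared error between issued probabilities and
    binary outcomes (lower is better)."""
    prob_fc = np.asarray(prob_fc, dtype=float)
    obs_event = np.asarray(obs_event, dtype=float)
    return float(np.mean((prob_fc - obs_event) ** 2))

# Two samples against a 10 mm/day threshold: the first observed the event,
# the second did not.
ens = np.array([[0.0, 5.0, 20.0, 30.0],
                [1.0, 2.0, 3.0, 4.0]])
p = ensemble_event_prob(ens, 10.0)  # -> [0.5, 0.0]
bs = brier_score(p, [1, 0])         # -> 0.125
```

For the calibrated models, the exceedance probability is taken from the fitted CSG CDF at the threshold rather than from member counts.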
Reliability diagrams show that the slopes of the reliability curves of the raw EPS ensembles are smaller than that of the diagonal (see Fig. 5 for the 48-h lead time), indicating that the EPS forecasts are overconfident at all precipitation thresholds. The calibrations using CSG-EMOS and Cond-CSG-EMOS demonstrate improvements for light precipitation (Figs. 5a,b). The reliability of CSG-EMOS degrades with increasing precipitation threshold, though it remains much better than that of the raw ensemble forecasts. Impressively, Figs. 5c and 5d indicate that the Cond-CSG-EMOS model still produces forecast probabilities for heavy- and extreme-precipitation events that are close to the observed frequencies of occurrence.
b. Spatial patterns of models’ forecast skills
The spatial distribution of the CRPS of the raw EPS ensembles is given in Fig. 6. Similar spatial patterns of CRPS are seen among the three EPS ensembles. The CRPS values along the coast are higher than those in inland areas. The difference in thermal properties between land and sea makes it challenging for NWP models to produce accurate forecasts at the land–sea boundary, and the frequent, intense precipitation over the coastal regions (Fig. 1) adds to the challenge. Among the three EPSs, ECMWF is the best-performing model, obtaining lower CRPS values over most of the study region.
Figure 7 gives the spatial distribution of the continuous ranked probability skill score (CRPSS). The raw ECMWF ensembles are used as the reference forecasts in Figs. 7a and 7b to present the improvements of the CSG-EMOS and Cond-CSG-EMOS models, where red colors indicate improvement over the reference forecasts and blue colors indicate degradation. The results show that the performance of CSG-EMOS varies among regions. The CSG-EMOS model is superior in Central China and along the Yangtze River basin, whereas it fails to obtain further improvements in the Beijing–Tianjin–Hebei (in North China) and Pearl River delta (in South China) regions (Fig. 7a). Cond-CSG-EMOS further reduces the forecast bias and achieves improvements in Central, North, and Northeast China (Fig. 7b). Figure 7c indicates that Cond-CSG-EMOS outperforms the standard CSG-EMOS over the study region, especially in North China and Northwest China.
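Both the CRPSS and the BSS used below follow the generic skill-score form 1 − score/score_ref, here with the raw ECMWF ensembles as the reference; a one-line sketch with illustrative numbers:

```python
def skill_score(score, score_ref):
    """Generic skill score (CRPSS, BSS): 1 is perfect, 0 matches the
    reference, and negative values mean degradation against the reference."""
    return 1.0 - score / score_ref

# A grid point where calibration lowers the mean CRPS from 4.0 to 3.2
# improves on the reference by about 20% (CRPSS of roughly 0.2).
skill_score(3.2, 4.0)
```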
Comparisons of the models' performance for precipitation exceeding different thresholds in terms of the Brier skill score (BSS) are presented in Fig. 8. The spatial patterns of BSS at the thresholds of 10 and 25 mm day−1 are similar to those of the CRPSS. CSG-EMOS reduces the BS over the inland areas in Central China, whereas Cond-CSG-EMOS obtains further improvements in North and Northeast China (Figs. 8a–f). However, it is noted that the Cond-CSG-EMOS model fails to further improve the forecasts for precipitation exceeding 50 mm day−1 against the standard CSG-EMOS in South China (Figs. 8g–i). One reason for this result is the large forecast errors in SC from the raw EPS ensembles (Fig. 6). These large errors indicate that it is more challenging for the raw EPSs to accurately predict the precipitation there, so the prior information from the EPSs used to split the light- and heavy-precipitation samples could be wrong. Therefore, a further adjustment of the selected mapping quantile f and precipitation threshold t is required. Nevertheless, Cond-CSG-EMOS produces well-calibrated precipitation probability forecasts and shows the greatest potential among all the models to capture heavy-precipitation events over the study region.
c. Sensitivity experiments on the precipitation threshold in SC
The results in Fig. 8i suggest that the performance of the Cond-CSG-EMOS model is related to the selection of the precipitation threshold t. Here, a sensitivity test on the precipitation threshold in SC is presented. As mentioned in section 2c, the 95th quantile of the observed precipitation rate (∼5 mm day−1) over SC during the training period is used as the default precipitation threshold. Experiments using the 90th quantile (∼2 mm day−1) and the 99th quantile (∼10 mm day−1) are performed to show the influence of the precipitation threshold on the model calibration.
A noteworthy difference is seen in the results (Fig. 9). The model calibration with the 99th-quantile precipitation threshold shows great improvement over the default one (95th quantile) for most of the SC region in terms of CRPS (Fig. 9b). This indicates that a higher precipitation threshold is more suitable for SC, which experiences more high-intensity precipitation events. One possible reason for the improvement is that the conditional model is more likely to correctly discriminate the light- and heavy-precipitation samples when a higher percentile is used, leading to a more adequate parameter estimation.
The BSS results (Fig. 10) present more details of the influence of different precipitation thresholds. The model calibrations for low-threshold precipitation are similar across the different precipitation thresholds, whereas a great improvement in reducing the forecast errors for heavy precipitation is seen with a higher threshold (Fig. 10f).
4. Conclusions and discussion
Despite great improvements in numerical weather prediction in recent decades, ensemble precipitation forecasts from a single EPS are likely to suffer from underdispersion, as well as large uncertainty and errors (Raftery et al. 2005; Bauer et al. 2015). Statistical postprocessing serves as a cheap but efficient way to reduce forecast errors and improve reliability (Wilks and Hamill 2007; Scheuerer and Hamill 2015). However, because of the imbalance between light- and heavy-precipitation samples, conventional postprocessing methods often underestimate heavy precipitation. In our study, a comparison of the original EPSs' ensemble precipitation forecasts over China with lead times of 1–3 days is first performed. Then a commonly used model, CSG-EMOS, is applied for the calibration. A conditional CSG-EMOS model is further proposed to improve model performance, especially for heavy-precipitation events.
Our results demonstrate that ECMWF is the best-performing EPS in terms of CRPS for all lead times and subregions. NCEP and UKMO are comparable, with NCEP slightly superior in TP and UKMO performing better in NC. Both CSG-EMOS and Cond-CSG-EMOS obtain improvements over the individual EPSs, and Cond-CSG-EMOS further outperforms the conventional model. The skill scores demonstrate that CSG-EMOS performs well in Central China but fails to obtain further improvements in NC and SC. Cond-CSG-EMOS is practicable for most of the regions and outperforms the standard CSG-EMOS, especially in NC and NW. Furthermore, the BS and reliability diagrams indicate that the Cond-CSG-EMOS model is dominant in the calibration of moderate (>10 mm day−1) and heavy (>25 mm day−1) precipitation.
The sensitivity experiments on the precipitation threshold show that a higher threshold is more suitable for regions with heavier-precipitation events, e.g., SC. This finding indicates that choosing a proper threshold to distinguish between light- and heavy-precipitation events, which represents the prior information, is key.
Our results demonstrate that the proposed methods for selecting the threshold and quantile are skillful for global EPSs with coarse resolutions. However, it would be a further challenge to make them applicable to higher-resolution regional EPSs that can explicitly represent the location and intensity of convection and storms. Hence, more careful hyperparameter tuning is required for these models. Instead of adopting a single value for a subregion, the threshold and quantile could be selected for each grid point. Furthermore, more sophisticated approaches to hyperparameter selection could be tested for better calibration.
Acknowledgments.
The authors acknowledge funding from the National Natural Science Foundation General Program of China (Grant 42275164), the Science and Technology Program of China Southern Power Grid Co., Ltd. (Grant YNKJXM202222172), and the Provincial and Municipal Joint Fund Project of Guizhou Province Meteorological Bureau “Research on AI-based intelligent weather monitoring and dispatching technology.” We thank Yang Lv for preparing the datasets, as well as Ye Tian and Bing Gong for the helpful scientific discussions.
Data availability statement.
The multicenter ensemble forecasts are obtained from the ECMWF portal (https://apps.ecmwf.int/datasets/data/tigge/levtype=sfc/type=pf). The hourly precipitation products are available at the China Meteorological Data Service Centre (http://data.cma.cn).
REFERENCES
Baran, S., and D. Nemoda, 2016: Censored and shifted gamma distribution based EMOS model for probabilistic quantitative precipitation forecasting. Environmetrics, 27, 280–292, https://doi.org/10.1002/env.2391.
Barnston, A. G., S. J. Mason, L. Goddard, D. G. DeWitt, and S. E. Zebiak, 2003: Multimodel ensembling in seasonal climate forecasting at IRI. Bull. Amer. Meteor. Soc., 84, 1783–1796, https://doi.org/10.1175/BAMS-84-12-1783.
Bauer, P., A. Thorpe, and G. Brunet, 2015: The quiet revolution of numerical weather prediction. Nature, 525, 47–55, https://doi.org/10.1038/nature14956.
Bougeault, P., and Coauthors, 2010: The THORPEX Interactive Grand Global Ensemble. Bull. Amer. Meteor. Soc., 91, 1059–1072, https://doi.org/10.1175/2010BAMS2853.1.
Buizza, R., P. Houtekamer, G. Pellerin, Z. Toth, Y. Zhu, and M. Wei, 2005: A comparison of the ECMWF, MSC, and NCEP global ensemble prediction systems. Mon. Wea. Rev., 133, 1076–1097, https://doi.org/10.1175/MWR2905.1.
Feng, J., J. Zhang, Z. Toth, M. Peña, and S. Ravela, 2020: A new measure of ensemble central tendency. Wea. Forecasting, 35, 879–889, https://doi.org/10.1175/WAF-D-19-0213.1.
Fraley, C., A. E. Raftery, and T. Gneiting, 2010: Calibrating multimodel forecast ensembles with exchangeable and missing members using Bayesian model averaging. Mon. Wea. Rev., 138, 190–202, https://doi.org/10.1175/2009MWR3046.1.
Gebhardt, C., S. E. Theis, M. Paulat, and Z. B. Bouallègue, 2011: Uncertainties in COSMO-DE precipitation forecasts introduced by model perturbations and variation of lateral boundaries. Atmos. Res., 100, 168–177, https://doi.org/10.1016/j.atmosres.2010.12.008.
Gneiting, T., 2014: Calibration of medium-range weather forecasts. ECMWF Tech. Memo. 719, 30 pp., https://www.ecmwf.int/sites/default/files/elibrary/2014/9607-calibration-medium-range-weather-forecasts.pdf.
Gneiting, T., and A. E. Raftery, 2007: Strictly proper scoring rules, prediction, and estimation. J. Amer. Stat. Assoc., 102, 359–378, https://doi.org/10.1198/016214506000001437.
Gneiting, T., A. E. Raftery, A. H. Westveld III, and T. Goldman, 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Wea. Rev., 133, 1098–1118, https://doi.org/10.1175/MWR2904.1.
Hemri, S., M. Scheuerer, F. Pappenberger, K. Bogner, and T. Haiden, 2014: Trends in the predictive performance of raw ensemble weather forecasts. Geophys. Res. Lett., 41, 9197–9205, https://doi.org/10.1002/2014GL062472.
Hersbach, H., 2000: Decomposition of the continuous ranked probability score for ensemble prediction systems. Wea. Forecasting, 15, 559–570, https://doi.org/10.1175/1520-0434(2000)015%3C0559:DOTCRP%3E2.0.CO;2.
Hong, S.-Y., J. Dudhia, and S.-H. Chen, 2004: A revised approach to ice microphysical processes for the bulk parameterization of clouds and precipitation. Mon. Wea. Rev., 132, 103–120, https://doi.org/10.1175/1520-0493(2004)132%3C0103:ARATIM%3E2.0.CO;2.
Javanshiri, Z., M. Fathi, and S. A. Mohammadi, 2021: Comparison of the BMA and EMOS statistical methods for probabilistic quantitative precipitation forecasting. Meteor. Appl., 28, e1974, https://doi.org/10.1002/met.1974.
Ji, L., X. Zhi, S. Zhu, and K. Fraedrich, 2019: Probabilistic precipitation forecasting over East Asia using Bayesian model averaging. Wea. Forecasting, 34, 377–392, https://doi.org/10.1175/WAF-D-18-0093.1.
Jordan, A., F. Krüger, and S. Lerch, 2017: Evaluating probabilistic forecasts with scoringRules. arXiv, 1709.04743v2, https://doi.org/10.48550/arXiv.1709.04743.
Langmack, H., K. Fraedrich, and F. Sielmann, 2012: Tropical cyclone track analog ensemble forecasting in the extended Australian basin: NWP combinations. Quart. J. Roy. Meteor. Soc., 138, 1828–1838, https://doi.org/10.1002/qj.1915.
Liu, C., L. Guo, L. Ye, S. Zhang, Y. Zhao, and T. Song, 2018: A review of advances in China’s flash flood early-warning system. Nat. Hazards, 92, 619–634, https://doi.org/10.1007/s11069-018-3173-7.
Liu, J., and Z. Xie, 2014: BMA probabilistic quantitative precipitation forecasting over the Huaihe basin using TIGGE multimodel ensemble forecasts. Mon. Wea. Rev., 142, 1542–1555, https://doi.org/10.1175/MWR-D-13-00031.1.
Luo, N., Y. Guo, J. Chou, and Z. Gao, 2022: Added value of CMIP6 models over CMIP5 models in simulating the climatological precipitation extremes in China. Int. J. Climatol., 42, 1148–1164, https://doi.org/10.1002/joc.7294.
Majumdar, S. J., and R. D. Torn, 2014: Probabilistic verification of global and mesoscale ensemble forecasts of tropical cyclogenesis. Wea. Forecasting, 29, 1181–1198, https://doi.org/10.1175/WAF-D-14-00028.1.
Myhre, G., and Coauthors, 2019: Frequency of extreme precipitation increases extensively with event rareness under global warming. Sci. Rep., 9, 16063, https://doi.org/10.1038/s41598-019-52277-4.
Palmer, T. N., and Coauthors, 2004: Development of a European Multimodel Ensemble System for Seasonal-to-Interannual Prediction (DEMETER). Bull. Amer. Meteor. Soc., 85, 853–872, https://doi.org/10.1175/BAMS-85-6-853.
Peng, T., X. Zhi, Y. Ji, L. Ji, and Y. Tian, 2020: Prediction skill of extended range 2-m maximum air temperature probabilistic forecasts using machine learning post-processing methods. Atmosphere, 11, 823, https://doi.org/10.3390/atmos11080823.
Raftery, A. E., T. Gneiting, F. Balabdaoui, and M. Polakowski, 2005: Using Bayesian model averaging to calibrate forecast ensembles. Mon. Wea. Rev., 133, 1155–1174, https://doi.org/10.1175/MWR2906.1.
Ran, Q., W. Fu, Y. Liu, T. Li, K. Shi, and B. Sivakumar, 2018: Evaluation of quantitative precipitation predictions by ECMWF, CMA, and UKMO for flood forecasting: Application to two basins in China. Nat. Hazards Rev., 19, 05018003, https://doi.org/10.1061/(ASCE)NH.1527-6996.0000282.
Ravuri, S., and Coauthors, 2021: Skillful precipitation nowcasting using deep generative models of radar. arXiv, 2104.00954v1, https://doi.org/10.48550/arXiv.2104.00954.
Scheuerer, M., and T. M. Hamill, 2015: Statistical postprocessing of ensemble precipitation forecasts by fitting censored, shifted gamma distributions. Mon. Wea. Rev., 143, 4578–4596, https://doi.org/10.1175/MWR-D-15-0061.1.
Scheuerer, M., and D. Möller, 2015: Probabilistic wind speed forecasting on a grid based on ensemble model output statistics. Ann. Appl. Stat., 9, 1328–1349, https://doi.org/10.1214/15-AOAS843.
Scheuerer, M., S. Gregory, T. M. Hamill, and P. E. Shafer, 2017: Probabilistic precipitation-type forecasting based on GEFS ensemble forecasts of vertical temperature profiles. Mon. Wea. Rev., 145, 1401–1412, https://doi.org/10.1175/MWR-D-16-0321.1.
Shen, Y., P. Zhao, Y. Pan, and J. Yu, 2014: A high spatiotemporal gauge-satellite merged precipitation analysis over China. J. Geophys. Res. Atmos., 119, 3063–3075, https://doi.org/10.1002/2013JD020686.
Sloughter, J. M. L., A. E. Raftery, T. Gneiting, and C. Fraley, 2007: Probabilistic quantitative precipitation forecasting using Bayesian model averaging. Mon. Wea. Rev., 135, 3209–3220, https://doi.org/10.1175/MWR3441.1.
Stacy, E. W., 1962: A generalization of the gamma distribution. Ann. Math. Stat., 33, 1187–1192, https://doi.org/10.1214/aoms/1177704481.
Sun, J., H. Wang, W. Tong, Y. Zhang, C.-Y. Lin, and D. Xu, 2016: Comparison of the impacts of momentum control variables on high-resolution variational data assimilation and precipitation forecasting. Mon. Wea. Rev., 144, 149–169, https://doi.org/10.1175/MWR-D-14-00205.1.
Sun, Q., X. Zhang, F. Zwiers, S. Westra, and L. V. Alexander, 2021: A global, continental, and regional analysis of changes in extreme precipitation. J. Climate, 34, 243–258, https://doi.org/10.1175/JCLI-D-19-0892.1.
Surcel, M., I. Zawadzki, M. K. Yau, M. Xue, and F. Kong, 2017: More on the scale dependence of the predictability of precipitation patterns: Extension to the 2009–13 CAPS Spring Experiment ensemble forecasts. Mon. Wea. Rev., 145, 3625–3646, https://doi.org/10.1175/MWR-D-16-0362.1.
Swinbank, R., and Coauthors, 2016: The TIGGE project and its achievements. Bull. Amer. Meteor. Soc., 97, 49–67, https://doi.org/10.1175/BAMS-D-13-00191.1.
Wilks, D. S., and T. M. Hamill, 2007: Comparison of ensemble-MOS methods using GFS reforecasts. Mon. Wea. Rev., 135, 2379–2390, https://doi.org/10.1175/MWR3402.1.
Williams, R. A., C. A. T. Ferro, and F. Kwasniok, 2014: A comparison of ensemble post-processing methods for extreme events. Quart. J. Roy. Meteor. Soc., 140, 1112–1120, https://doi.org/10.1002/qj.2198.
WMO, 2002: Standardized verification system (SVS) for long-range forecasts (LRF). Met Office manual on the global data-processing and forecasting system, WMO-485, WMO, 17 pp., https://www.metoffice.gov.uk/binaries/content/assets/metofficegovuk/pdf/research/climate-science/climate-observations-projections-and-impacts/svslrf.pdf.
Xu, H., H. Chen, and H. Wang, 2022a: Future changes in precipitation extremes across China based on CMIP6 models. Int. J. Climatol., 42, 635–651, https://doi.org/10.1002/joc.7264.
Xu, Z., J. Chen, M. Mu, L. Tao, G. Dai, J. Wang, and Y. Ma, 2022b: A stochastic and nonlinear representation of model uncertainty in a convective-scale ensemble prediction system. Quart. J. Roy. Meteor. Soc., 148, 2507–2531, https://doi.org/10.1002/qj.4322.
Yang, Y., H. Yuan, and W. Chen, 2023: Convection-permitting ensemble forecasts of a double-rainbelt event in South China during the pre-summer rainy season. Atmos. Res., 284, 106599, https://doi.org/10.1016/j.atmosres.2022.106599.
Zhang, J., J. Feng, H. Li, Y. Zhu, X. Zhi, and F. Zhang, 2021: Unified ensemble mean forecasting of tropical cyclones based on the feature-oriented mean method. Wea. Forecasting, 36, 1945–1959, https://doi.org/10.1175/WAF-D-21-0062.1.
Zhang, L., F. Sielmann, K. Fraedrich, X. Zhu, and X. Zhi, 2015: Variability of winter extreme precipitation in Southeast China: Contributions of SST anomalies. Climate Dyn., 45, 2557–2570, https://doi.org/10.1007/s00382-015-2492-6.
Zhang, W., and T. Zhou, 2020: Increasing impacts from extreme precipitation on population over China with global warming. Sci. Bull., 65, 243–252, https://doi.org/10.1016/j.scib.2019.12.002.