The Roles of Chaos Seeding and Multiple Perturbations in Convection-Permitting Ensemble Forecasting over Southern China

Jingzhuo Wang,a,b Jing Chen,a,b Hongqi Li,a,b Haile Xue,a,b and Zhizhen Xuc,d,e

a CMA Earth System Modeling and Prediction Centre, China Meteorological Administration, Beijing, China
b State Key Laboratory of Severe Weather, China Meteorological Administration, Beijing, China
c Department of Atmospheric and Oceanic Sciences, Fudan University, Shanghai, China
d Institute of Atmospheric Sciences, Fudan University, Shanghai, China
e Chinese Academy of Meteorological Sciences, China Meteorological Administration, Beijing, China

Abstract

The roles of chaos seeding and of multiple perturbations, including model perturbations and topographic perturbations, in convection-permitting ensemble forecasting are assessed. Six comparison experiments were conducted for 14 heavy rainfall events over southern China, with chaos seeding run as a benchmark against which the effects of the intended perturbations are compared. The results first reveal the chaos seeding phenomenon: tiny, local perturbations of the skin soil moisture propagate throughout the analysis domain within an hour, spread to every prognostic variable, and grow wherever moist convection is active. Second, chaos seeding differs from the intended perturbations in a statistically significant way in the ensemble spread magnitudes of precipitation, and in the spread–skill relationships and probabilistic forecast skills of dynamical variables. Additionally, for probabilistic forecasts of precipitation, initial and lateral boundary perturbations and model perturbations yield statistically larger FSS and AROC scores than chaos seeding, whereas topographic perturbations improve the FSS and AROC scores only slightly. These differing performances may be related to the differing degrees of real dynamical influence of the intended perturbations. Finally, model perturbations increase the ensemble spreads of precipitation and improve the FSS and AROC scores of precipitation and the consistency of mid- and low-level dynamical variables, whereas the effects of topographic perturbations are small.

© 2023 American Meteorological Society. This published article is licensed under the terms of the default AMS reuse license. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Jingzhuo Wang, 15510166003@163.com


1. Introduction

Due to initial errors, model errors, boundary errors, and the chaotic nature of the atmosphere, NWP models inevitably contain uncertainties (e.g., Lorenz 1963; Zhang et al. 2006). Ensemble forecasting is a useful stochastic dynamic forecasting method that addresses the forecast uncertainties of single deterministic forecasts and can provide useful probabilistic distributions (e.g., Tracton and Kalnay 1993; Hopson 2014). As computing power has increased, convection-permitting ensemble prediction systems (CPEPS), with horizontal grid spacings of 2–4 km, have developed rapidly over the last decade. Many meteorological centers have developed operational CPEPSs, such as COSMO-DE-EPS with a horizontal grid spacing of 2.8 km (e.g., Harnisch and Keil 2015), MOGREPS-UK with a horizontal grid spacing of 2.2 km (e.g., Golding et al. 2014; Hagelin et al. 2017), and AROME-EPS with a horizontal grid spacing of 2.5 km (e.g., Bouttier et al. 2012).

Successful CPEPSs should sample errors of different types (e.g., Romine et al. 2014), including initial errors derived from imperfect data assimilation systems and observations, and model errors caused by the uncertainties in physical processes (e.g., Berner et al. 2011). Besides these two error sources, boundary errors must be considered due to the limited-area nature of most CPEPSs (e.g., Saito et al. 2012; Harnisch and Keil 2015), and topographic errors should also be included because the transformation of real topography to model topography is connected with the model resolution, topographic interpolation, and smoothing schemes in numerical models (e.g., Li et al. 2017, 2021). Therefore, how to construct reasonable perturbation methods to describe these error sources in CPEPSs remains a crucial but open issue (e.g., Vié et al. 2011; Johnson and Wang 2016).

Recent studies have used various perturbation methods to construct CPEPSs (e.g., Frogner et al. 2019). Initial perturbations are commonly produced by either dynamical downscaling from global or regional EPSs with large–scale initial perturbations (e.g., Peralta et al. 2012; Kühnlein et al. 2014; Mori et al. 2021), or applying multiscale initial perturbation approaches such as ensemble transform methods (e.g., Fujita et al. 2007; Bishop et al. 2009), ensemble data assimilation methods (e.g., Yussouf et al. 2013; Harnisch and Keil 2015), shifting initialization methods (e.g., Walser et al. 2004), random perturbation methods from the background error covariance (e.g., Torn et al. 2006), and ensemble Jk blending method (e.g., Keresturi et al. 2019). There are many model perturbation methods that sample model errors by employing stochastic perturbations (e.g., Bouttier et al. 2012; Wastl et al. 2019a; Lupo et al. 2020), multiphysics (e.g., Gebhardt et al. 2011), perturbed parameters (e.g., Xu et al. 2020; Fleury et al. 2022), multimodel (e.g., Clark et al. 2011), and hybrid stochastic perturbations (e.g., Wastl et al. 2019b) methods. Moreover, the lateral boundary perturbations are commonly generated from the coarse-resolved global/regional EPSs (e.g., Kunii 2014).

The effects of perturbations of different types on ensemble forecasting are distinct in many respects, such as forecast lead times, cases under different synoptic forcings, and atmospheric variables. Regarding forecast lead times, many studies have shown that initial perturbations dominate perturbation development over the first 12 h or longer, with the duration of this dominance depending on the domain size (e.g., Schwartz et al. 2020), whereas lateral boundary perturbations play more important roles at longer ranges, their effects being a function of domain size and error propagation speed (e.g., Peralta et al. 2012). In addition, model perturbations have the most pronounced effects in periods of convective activity (e.g., Kühnlein et al. 2014; Frogner et al. 2019; Wastl et al. 2019a, 2021). Regarding cases under different synoptic forcings, Keil et al. (2014) and Zhang (2021a) suggested that model perturbations improve forecast skill for heavy precipitation under weak forcing conditions relative to initial perturbations. Conversely, Surcel et al. (2017) found no close relationship between the impacts of model perturbations and the synoptic forcing of the cases. Regarding atmospheric variables, Fujita et al. (2007) indicated that the effects of model perturbations on low-level dewpoint and temperature are larger than those on wind. Nonetheless, previous studies have largely ignored the forecast uncertainties arising from the orography and have rarely considered topographic perturbations in CPEPSs. What changes when topographic perturbations are added to the initial and lateral boundary perturbations and model perturbations? These questions require further examination across multiple cases.

Many EPSs apply a combination of multiple perturbations, and some studies suggest that the ensemble spread obtained from multisource perturbations is larger than that generated from only one or two perturbation methods (e.g., Romine et al. 2014; Surcel et al. 2017), and that an ensemble system with initial and lateral boundary perturbations and model perturbations performs best (e.g., Yang et al. 2023). Conversely, Baker et al. (2014) and Zhang (2021b) suggested that combining model perturbations with initial or lateral boundary perturbations can reduce the ensemble spread of precipitation, and that such a reduction may improve forecast skill when initial perturbations lead to overdispersion. Li et al. (2017) concluded that combined perturbations make little contribution to the spatial distribution of ensemble spread. Despite these previous studies, it remains inconclusive whether combining several kinds of perturbation methods leads to the best forecast skill; the optimal combination of perturbation methods also needs to be determined.

Additionally, one property of perturbation experiments like those performed here is chaos seeding, an unrealistic phenomenon in which any perturbation made to a model prognostic variable may rapidly seed the entire modeling domain. These tiny perturbations, which are unavoidable, arise from the model's spatial discretization schemes and can grow very rapidly, and upscale, through nonlinear processes wherever moist physics are active. Some studies have noticed the phenomenon of chaos seeding and found that areas far from the source of the perturbations are also contaminated (e.g., Hohenegger and Schär 2007; Leoncini et al. 2010). However, they attributed the fast spread of the perturbations to known physical processes, such as gravity, acoustic, and Lamb waves, and ignored the effects of chaos seeding. To our knowledge, the only two studies to focus on the impacts of chaos seeding are Hodyss and Majumdar (2007) and Ancell et al. (2018). Chaos seeding has thus barely been documented in previous studies of the effects of perturbations of multiple types on ensemble forecasting, nor brought to the attention of those guided by results from such experiments. Since chaos seeding is unavoidable (particularly as our experiments focus on precipitation) and may cause the effects of the prescribed perturbations to be misinterpreted, it is necessary to run a benchmark experiment against which the effects of the intended perturbations can be compared. We hope that this study helps operational and academic communities account for chaos seeding when conducting perturbation experiments and interpret perturbation results correctly.

In the above context, we compare the effects of chaos seeding to those of the other perturbation sources. This reveals whether the perturbation techniques under assessment can explain the differences in results or whether chaos seeding precludes that conclusion. This paper also investigates the effects of multiple perturbations, including model perturbations and topographic perturbations, on both dynamical variables and precipitation for 14 heavy rainfall cases using the experimental China Meteorological Administration–Convection Permitting Ensemble Prediction System (CMA–CPEPS). Special attention is paid to southern China, where heavy rainfall occurs frequently and the predictability of precipitation is very low because of the influence of complex multiscale weather systems (e.g., Zheng et al. 2016; Luo et al. 2017). It is hoped that this study will be informative for constructing CPEPSs and thereby improving the skill of convective weather forecasting.

This paper is divided into six sections. Section 2 describes the configuration of CMA–CPEPS, experimental design, and analysis methods. The roles of chaos seeding in precipitation forecasts, including their spatial structure and magnitude of ensemble spreads, along with probabilistic forecasts, are given in section 3. Section 4 describes the roles of chaos seeding in dynamical variables. Section 5 gives the roles of multiple perturbations, including model and topographic perturbations, in precipitation and dynamical variables. Section 6 provides a conclusion and some further discussion.

2. Model and methods

a. CMA–CPEPS configuration

Currently in its experimental/nonoperational phase of development, CMA–CPEPS relies on the CMA’s Global and Regional Assimilation and Prediction Enhanced System (GRAPES) (e.g., Chen et al. 2008, 2020). The GRAPES system adopts a semi-implicit, semi-Lagrangian scheme for time integration, a fully compressible nonhydrostatic dynamical core, and a terrain-following coordinate (e.g., Wang et al. 2021a). Table 1 gives the model physics schemes. CMA–CPEPS has a horizontal grid spacing of 3 km, 51 vertical model levels, and 15 ensemble members. The control forecast of CMA–CPEPS is derived from dynamical downscaling of the control forecast of the T639 global ensemble prediction system (T639-GEPS) (e.g., Guan and Chen 2008), with no initial and lateral boundary perturbations, model perturbations, or topographic perturbations. The initial and lateral boundary fields of the perturbed ensemble members also originate from dynamical downscaling of perturbed members of T639-GEPS. The model uncertainty is represented by the stochastically perturbed parameterization tendencies (SPPT) scheme (e.g., Bouttier et al. 2012). The formulation of SPPT is the same as in Li et al. (2008) and Xu et al. (2020), and further details, including the settings of specific tuning parameters, follow previous work (e.g., Xu et al. 2020, 2022) and are displayed in Table 2. Furthermore, the data assimilation implements a cloud analysis scheme, which uses radar, satellite, and sounding data to diagnose cloud water and precipitation particles and then applies a nudging method to initialize the cloud information and obtain more precise initial conditions (e.g., Zhu et al. 2017). Note that each ensemble member applies the cloud analysis scheme, which keeps the initial condition spread of clouds small because all members are pushed toward the same values. The forecast is run for 36 h with a model integration step of 30 s.
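The SPPT idea referenced above can be sketched in a few lines: each physics tendency is multiplied by (1 + r), with r a bounded random pattern. This is a minimal illustration only, not the operational implementation: the real scheme uses spatially and temporally correlated random fields with the tuning parameters in Table 2, whereas `sigma` and `clip` here are hypothetical values and the pattern is uncorrelated white noise.

```python
import numpy as np

def sppt_perturb(tendency, rng, sigma=0.27, clip=0.8):
    """Multiply a physics tendency field by (1 + r), where r is a
    zero-mean Gaussian pattern clipped so the factor stays positive.
    sigma and clip are illustrative, not the paper's tuning values."""
    r = np.clip(rng.normal(0.0, sigma, size=tendency.shape), -clip, clip)
    return tendency * (1.0 + r)

# Example: perturb a uniform tendency field for one ensemble member.
rng = np.random.default_rng(0)
perturbed = sppt_perturb(np.ones((4, 4)), rng)
```

With `clip=0.8`, every multiplicative factor stays within [0.2, 1.8], so a perturbed tendency never reverses sign.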

Table 1. The model physics schemes.

Table 2. Stochastic perturbation parameter settings for the SPPT scheme.

Considering the effects of small-scale topography on convective weather forecasts (e.g., Oizumi et al. 2018; Fu et al. 2019; Nishizawa et al. 2021) and the uncertainties involved in the handling of topography in NWP models (e.g., Li et al. 2017), we introduce the standard deviations of the topographic heights and then establish the small-scale topographic perturbation scheme, which is used to assess the roles of topography perturbations in ensemble forecasting. The construction of the topographic perturbations is illustrated in Wang et al. (2022, hereafter W2022). First, the filter recommended by Beljaars et al. (2004) was applied to the 3″ GMTED2010 data to filter out the orography with a half-wavelength smaller than 2 km. Second, the same filter was applied to filter out the orography with a half-wavelength smaller than 10 km. Third, the difference of the two filtered fields, which contains the orography with half-wavelengths between 2 and 10 km, was used to calculate the standard deviations of the topographic heights over circular areas with a radius of 5 km. Finally, we perturbed the standard deviations of the topographic heights [zs_std(λ, ϕ) in Eq. (1)] by random fields that follow a Gaussian distribution with a mean value of 0 and a magnitude within the range [−1, 1] [ψ(λ, ϕ, t, mem) in Eq. (1)] (e.g., Li et al. 2008) and added them to the topographic heights of the control forecast [zs(λ, ϕ) in Eq. (1)] to form the topographic heights of the perturbed members [zs_perb(λ, ϕ, t, mem) in Eq. (1)]. In Eq. (1), λ, ϕ, t, and mem represent longitude, latitude, time, and perturbed ensemble member, respectively. We should point out that the standard deviations of the topographic heights are identical across all cases; however, the random fields, produced with different random seeds, differ between ensemble members and weather cases:
zs_perb(λ, ϕ, t, mem) = zs(λ, ϕ) + zs_std(λ, ϕ) × ψ(λ, ϕ, t, mem). (1)
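Eq. (1) can be sketched in code as below, assuming the standard-deviation field zs_std has already been computed from the filtered GMTED2010 orography; the Gaussian width of 1/3 is a hypothetical choice that keeps almost all draws of ψ inside [−1, 1] before clipping.

```python
import numpy as np

def perturb_topography(zs, zs_std, rng):
    """Eq. (1): zs_perb = zs + zs_std * psi, where psi is a zero-mean
    Gaussian random field clipped to [-1, 1], drawn independently for
    each ensemble member and case via the supplied generator."""
    psi = np.clip(rng.normal(0.0, 1.0 / 3.0, size=zs.shape), -1.0, 1.0)
    return zs + zs_std * psi

# Example: perturb a flat 100-m terrain with a uniform 50-m std field.
rng = np.random.default_rng(1)
zs = np.full((3, 3), 100.0)
zs_std = np.full((3, 3), 50.0)
zs_perb = perturb_topography(zs, zs_std, rng)
```

Because ψ is clipped to [−1, 1], the perturbed heights are bounded by zs ± zs_std everywhere.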
Figure 1a shows the simulation domain, ranging from 101.1° to 125.7°E and from 13.9° to 30.7°N, and the analysis domain covering southern China (105°–121°E, 20°–28°N), with complex topography including the Miaoling, Nanling, Wuyi, and Yunkai Mountains. Figure 1b shows the standard deviations calculated from the topographic heights of the perturbed members, with a maximum magnitude of approximately 163.7 m. The geographic features in Fig. 1b resemble the raw topography, with small values in fairly flat or gently sloping terrain and large values in most high-altitude areas. It is also worth noting that as grid spacing becomes progressively finer, the topographic uncertainties should decrease.
Fig. 1.

(a) The full model domains and analysis domains outlined by the purple dashed boxes with topographic heights (m). (b) Horizontal distributions of the standard deviations calculated from topographic heights of the perturbed member (m) at 0000 UTC 23 Mar 2019.

Citation: Weather and Forecasting 38, 9; 10.1175/WAF-D-22-0177.1

b. Experimental design

To evaluate the roles of chaos seeding and perturbations of different types in convection-permitting ensemble forecasting, we designed six comparison experiments: four fundamental experiments and two combined experiments. One of the four fundamental experiments is the chaos seeding experiment, which is regarded as a benchmark (called “chaos” for simplicity). The other three fundamental experiments are denoted Mp, Gp, and IBp, representing model perturbations, topographic perturbations, and the combination of initial and lateral boundary perturbations, respectively. In the Mp and Gp experiments, all ensemble members use identical initial and lateral boundary conditions, which come from dynamical downscaling of the control forecast of T639-GEPS. In the chaos seeding experiment, we run an ensemble in which small, local Gaussian-distributed initial condition perturbations with a maximum magnitude of 0.01 m3 m−3 are made to the skin soil moisture in a region we are nearly certain should not dynamically affect our results [in the far downstream corner of the domain (23°N, 120.5°E), denoted by the red dot in Fig. 2]; the ensemble members have no lateral boundary, model, or topographic perturbations. Figure 2 shows the perturbations of the skin soil moisture between ensemble member 1 and the control forecast. These small, local perturbations propagate into the whole analysis domain within an hour. As illustrated in Ancell et al. (2018), such small, fast-moving perturbations result from the numerical solution of the partial differential equations and are unavoidable. For the fifth-order finite-difference solutions of the CMA–CPEPS model, the seeding speed is about 5Δx (15 km) per time step (30 s), equal to 1800 km h−1, which is faster than any realistic process. Additionally, the locations of large perturbations coincide with the large precipitation centers, indicating that the perturbations grow very rapidly wherever moist physics are active. Overall, the chaos seeding experiments provide a baseline, no matter what perturbations are introduced, and test whether the findings from our intended perturbations are robust.
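The seeding perturbation itself can be sketched as follows: a single grid point of the skin soil moisture field receives a small Gaussian increment bounded by 0.01 m3 m−3, and everything else is left untouched. The function name and the clipped-Gaussian draw are illustrative assumptions; the paper only specifies the maximum magnitude and the seeding location.

```python
import numpy as np

def seed_soil_moisture(field, i, j, rng, max_amp=0.01):
    """Add a tiny, local Gaussian perturbation (|delta| <= max_amp,
    in m3 m-3) to a single grid point (i, j) of the skin soil
    moisture field, leaving all other points unchanged."""
    out = field.copy()
    delta = np.clip(rng.normal(0.0, max_amp / 3.0), -max_amp, max_amp)
    out[i, j] += delta
    return out

# Example: seed the far downstream corner point of a toy domain.
rng = np.random.default_rng(2)
soil = np.full((5, 5), 0.3)
seeded = seed_soil_moisture(soil, 4, 4, rng)
```

Everything downstream of this one-point change is then due to the model's own dynamics and numerics, which is what makes the experiment a clean benchmark.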

Fig. 2.

The perturbations of the skin soil moisture (shading; m3 m−3) between ensemble member 1 and control forecast at (a) 0-, (b) 1-, (c) 2-, (d) 3-, (e) 4-, and (f) 5-h forecast lead time initialized at 0000 UTC 11 Apr 2019 for the chaos seeding experiment, and the 5-h accumulated precipitation exceeding 15 mm of control forecast (contours; mm). The red dot denotes the location where we added noise for the chaos seeding experiment and the corresponding perturbation differences between ensemble member 1 and the control forecast.


Based on the IBp experiment, two additional combined experiments were designed: IBp+Mp and IBp+Mp+Gp, which add model perturbations, and model plus topographic perturbations, respectively, to the initial and lateral boundary perturbations. We should point out that we did not separate the impacts of initial perturbations from those of lateral boundary perturbations; rather, we combined them. This is mainly because the topographic perturbations require the initial and lateral boundary conditions to be interpolated to different topographic heights, so combining the topographic perturbations with initial perturbations or lateral boundary perturbations alone could produce dynamical inconsistencies between the initial and lateral boundary fields of the perturbed members. Furthermore, owing to the indispensable role of initial and lateral boundary perturbations in CPEPS design (e.g., Schwartz et al. 2015; Zhang 2019), all combined experiments include initial and boundary condition perturbations. Adding model perturbations to the initial and lateral boundary perturbations reveals the roles of the model perturbations, and further adding topographic perturbations displays the associated effects of the topographic perturbations. The setup of the six comparison experiments is summarized in Table 3.

Table 3. The setup of the six comparison experiments.

A total of 14 heavy rainfall events spanning March–June 2019 over southern China were chosen: 23 March; 3, 11, 14, 18, 20, and 26 April; 5, 20, 22, 24, 26, and 28 May; and 6 June. All events were initialized at 0000 UTC and run out to a 36-h lead time. All of the comparison experiments and the following verifications are based on these 14 events.

c. Analysis methods

To illustrate the roles of chaos seeding and perturbations of different types in ensemble forecasting, we conducted verifications of precipitation and dynamical variables, including ensemble spreads and probabilistic forecasts.

1) Ensemble spreads for precipitation

For precipitation, the absolute correlation coefficient (e.g., Hohenegger and Schär 2007) of the horizontal distribution of the ensemble spread is used to illustrate the spatial structure of the ensemble spread. The domain-averaged ensemble spread (the standard deviation of the ensemble members about the ensemble mean) (e.g., Raynaud and Bouttier 2016), the correspondence ratio (CR; Stensrud and Wandishin 2000), and the normalized variance difference (NVD; Peralta et al. 2012) are used to describe the ensemble spread magnitude. The NVD is the ratio of the difference between the ensemble variances of experiment A (σ_A^2) and experiment B (σ_B^2) to the sum of the two variances, as shown in Eq. (2); a positive NVD indicates that the ensemble variance of experiment A is larger than that of experiment B. The CR verifies the ensemble spread magnitude in space and is defined as the ratio of the number of grid points at which the event is forecast by all ensemble members (N_all) to the number of grid points at which the event is forecast by at least one ensemble member (N_one) [Eq. (3)]; lower CR values indicate a larger ensemble spread in space:
NVD = (σ_A^2 − σ_B^2) / (σ_A^2 + σ_B^2), (2)
CR = N_all / N_one. (3)
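Eqs. (2) and (3) are straightforward to compute from ensemble output; the sketch below assumes a forecast array of shape (member, y, x) and a scalar precipitation threshold.

```python
import numpy as np

def nvd(var_a, var_b):
    """Eq. (2): normalized variance difference; positive values mean
    experiment A has the larger ensemble variance."""
    return (var_a - var_b) / (var_a + var_b)

def correspondence_ratio(ens, threshold):
    """Eq. (3): grid points where all members forecast the event,
    divided by grid points where at least one member does."""
    event = ens >= threshold
    n_all = np.logical_and.reduce(event, axis=0).sum()
    n_one = np.logical_or.reduce(event, axis=0).sum()
    return n_all / n_one if n_one > 0 else np.nan
```

A lower CR means the members disagree more about where the event occurs, i.e., larger spread in space.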

2) Probabilistic forecasts for precipitation

We apply neighborhood-based techniques to the probabilistic forecast scores (e.g., Mittermaier 2014; Schwartz and Sobash 2017) given our 3-km grid spacing, including the fractions skill score (FSS; Roberts and Lean 2008; Wang et al. 2021b), relative operating characteristic (ROC) diagrams (e.g., Swets 1986), and reliability diagrams (e.g., Bröcker and Smith 2007). It is worth stressing that there are two common ways of applying neighborhood approaches, and they have fairly different interpretations in terms of the spatial scales of the defined events (e.g., Schwartz and Sobash 2017), so the application of the neighborhood methods must first be described clearly. In this paper, we generate probabilities interpreted as the ensemble mean probability of the event occurring at the grid scale. We only briefly give the formulations here for a better understanding of the method; the full details can be found in Schwartz and Sobash (2017). First, we convert the precipitation forecasts f(i, j) into a binary probability BP(q)_f(i, j) according to the precipitation threshold q in Eq. (4), where i denotes the grid point and j the ensemble member. Second, by selecting a neighborhood length scale (NLS) (e.g., Schwartz et al. 2009), defined as the side of a square, BP(q)_f(i, j) is converted into a neighborhood probability NP(q)_f(i, j) at point i, as displayed in Eq. (5), where N_b is the total number of grid points within the neighborhood of the ith grid point. Finally, the neighborhood ensemble probability (NEP) is derived by averaging NP over all N ensemble members, as shown in Eq. (6). Furthermore, the observed probabilities at the grid points are mostly binary, except for some metrics like the FSS, for which we calculate the fractional observed field at each grid point using Eqs. (7) and (8), where o(i) is the observed precipitation, and BP(q)_o(i) and NP(q)_o(i) are the binary probability and neighborhood probability of the observations, respectively:
BP(q)_f(i, j) = 1 if f(i, j) ≥ q, and 0 otherwise, (4)
NP(q)_f(i, j) = (1/N_b) Σ_{k=1}^{N_b} BP(q)_f(k, j), (5)
NEP(q)_f(i) = (1/N) Σ_{j=1}^{N} NP(q)_f(i, j), (6)
BP(q)_o(i) = 1 if o(i) ≥ q, and 0 otherwise, (7)
NP(q)_o(i) = (1/N_b) Σ_{k=1}^{N_b} BP(q)_o(k). (8)
Based on the neighborhood methods, we can further calculate the mean square error (MSE) between observations and forecasts [Eq. (9)], the worst-case MSE [Eq. (10)], and the FSS [Eq. (11)], where N_x and N_y denote the meridional and zonal grid point numbers. The FSS is positively oriented, with a perfect value of 1 indicating spatially identical forecast and observed fractions:
MSE = [1/(N_x × N_y)] Σ_{i=1}^{N_x × N_y} [NEP(q)_f(i) − NP(q)_o(i)]^2, (9)
MSE_worst = [1/(N_x × N_y)] [Σ_{i=1}^{N_x × N_y} NEP(q)_f(i)^2 + Σ_{i=1}^{N_x × N_y} NP(q)_o(i)^2], (10)
FSS = 1 − MSE/MSE_worst. (11)
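Eqs. (4)–(11) chain together as follows. This is a minimal sketch assuming square neighborhoods (edge points use the valid part of the box) and arrays of shape (member, y, x) for the ensemble and (y, x) for the observations.

```python
import numpy as np

def neighborhood_prob(binary, nls):
    """Eqs. (5)/(8): fraction of event points in an nls x nls square
    around each grid point; edges use the valid part of the box."""
    ny, nx = binary.shape
    r = nls // 2
    np_field = np.empty((ny, nx))
    for i in range(ny):
        for j in range(nx):
            box = binary[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            np_field[i, j] = box.mean()
    return np_field

def fss(fcst_ens, obs, q, nls):
    """Eqs. (4)-(11): FSS from the neighborhood ensemble probability
    (NEP) versus the neighborhood observed fractions."""
    nep = np.mean([neighborhood_prob((f >= q).astype(float), nls)
                   for f in fcst_ens], axis=0)         # Eq. (6)
    npo = neighborhood_prob((obs >= q).astype(float), nls)
    mse = np.mean((nep - npo) ** 2)                    # Eq. (9)
    mse_worst = np.mean(nep ** 2) + np.mean(npo ** 2)  # Eq. (10)
    return 1.0 - mse / mse_worst if mse_worst > 0 else np.nan  # Eq. (11)
```

A spatially perfect ensemble (every member's binary field matching the observed binary field) recovers FSS = 1, as the definition requires.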
The ROC measures resolution, i.e., the ability to discriminate between events and nonevents. To calculate the ROC, we first choose a precipitation threshold q and then fill the 2 × 2 contingency table (Table 4) according to the criteria in Table 5. From Table 4, the probability of false detection [POFD = b/(b + d)] and the probability of detection [POD = a/(a + c)] can be calculated, and the ROC is obtained by plotting POFD against POD for each probabilistic threshold p. Here, the probabilistic thresholds range from 1/N to (N − 1)/N, where N is the number of ensemble members.
Table 4. The binary contingency table.

Table 5. Elements of Table 4. Variables q and p denote the precipitation and probabilistic forecast thresholds, and O(i) denotes the observed value at grid point i.
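Given the NEP field and binary observations, each probabilistic threshold p yields one (POFD, POD) point in the sense of Tables 4 and 5. The sketch below assumes the warning criterion is NEP ≥ p.

```python
import numpy as np

def roc_points(nep, obs_binary, n_members):
    """(POFD, POD) pairs for thresholds p = 1/N, ..., (N-1)/N,
    filling the contingency table of Tables 4 and 5 at each p."""
    points = []
    for k in range(1, n_members):
        p = k / n_members
        warn = nep >= p
        a = np.sum(warn & obs_binary)       # hits
        b = np.sum(warn & ~obs_binary)      # false alarms
        c = np.sum(~warn & obs_binary)      # misses
        d = np.sum(~warn & ~obs_binary)     # correct negatives
        pod = a / (a + c) if (a + c) else np.nan
        pofd = b / (b + d) if (b + d) else np.nan
        points.append((pofd, pod))
    return points
```

The area under the curve traced by these points (AROC) then summarizes discrimination across all thresholds.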

The reliability diagrams are calculated by first placing the NEP values into bins spanning 0%–6.6667% (6.6667% = 1/N, where N is the number of ensemble members), 6.6667%–13.3333%, …, 93.3333%–100%. Then, for each bin, we compute the ratio of the number of grid points at which the observed event occurs to the number of grid points whose NEP values fall in that bin.
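The binning step can be sketched as below, with bins of width 1/N (6.6667% for N = 15); treating the final bin as closed at 100% is an implementation assumption.

```python
import numpy as np

def reliability_curve(nep, obs_binary, n_members):
    """Observed event frequency in each NEP bin of width 1/N, as used
    for the reliability diagrams; empty bins return NaN."""
    edges = np.linspace(0.0, 1.0, n_members + 1)
    freq = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if hi < 1.0:
            in_bin = (nep >= lo) & (nep < hi)
        else:                       # final bin is closed at 100%
            in_bin = (nep >= lo) & (nep <= hi)
        freq.append(obs_binary[in_bin].mean() if in_bin.any() else np.nan)
    return edges, np.array(freq)
```

Plotting the bin-center probabilities against these observed frequencies gives the reliability diagram; a perfectly reliable system falls on the diagonal.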

3) Ensemble spreads and probabilistic forecasts for dynamical variables

The consistency, defined as the ratio of the ensemble spread to the RMSE of the ensemble mean (e.g., Wang et al. 2018), and the continuous ranked probability score (CRPS; Hersbach 2000) of the probabilistic forecasts are used to verify how the EPSs perform for dynamical variables. The consistency measures the spread–skill relationship, with a perfect value of 1 (e.g., Wang et al. 2018). The CRPS measures the error between the forecast probability distribution and the observed frequency; a smaller CRPS indicates better probabilistic forecast skill.

The CMA Multi-source Merged Precipitation Analysis System (CMPAS-V2.1), with a resolution of 5 km (https://data.cma.cn/) (e.g., Pan et al. 2015), is applied to evaluate the precipitation forecasts, and the observations are interpolated to the model grid by bilinear interpolation. The CMA gridded analysis, derived from interpolation of the control forecast of T639-GEPS to the 3-km grid, is applied to assess the dynamical variables. We should mention that, as interpolation from a coarse-resolution model does not add detail, the effective resolution of this analysis is much coarser than that of the 3-km forecasts.

In addition, we apply a bootstrap resampling method with 1000 replicates to the verification metrics to test whether the forecast differences between two experiments are statistically significant (e.g., Hamill 1999; Wolff et al. 2014). The significance level is determined as the percentile at which the bootstrapped distribution of differences crosses zero (e.g., Griffin et al. 2020). The precipitation scores aggregated over all 3-h forecasts during the 0–12-, 12–24-, and 24–36-h forecast periods for each case are regarded as one sample, giving 14 samples (cases) in total. Based on these 14 samples, the significance level of the differences in precipitation scores between experiments can be given.
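A minimal sketch of the paired bootstrap on case-wise score differences, assuming one aggregated score per case and experiment; the fraction of resampled mean differences below zero is then compared against the chosen significance level.

```python
import numpy as np

def bootstrap_diff(scores_a, scores_b, n_boot=1000, seed=0):
    """Resample paired case-wise score differences with replacement
    and return the fraction of bootstrap means below zero; values
    near 0 or 1 indicate a significant difference between A and B."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(scores_a) - np.asarray(scores_b)
    n = diff.size
    means = np.array([diff[rng.integers(0, n, n)].mean()
                      for _ in range(n_boot)])
    return np.mean(means < 0.0)

# Example: 14 paired case scores, experiment A uniformly better.
frac_below_zero = bootstrap_diff([0.9] * 14, [0.4] * 14, n_boot=200)
```

Resampling cases (rather than individual grid points) respects the spatial correlation within each event, which is why the paper treats each case as one sample.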

3. The roles of chaos seeding in precipitation

a. Ensemble spreads

1) Spatial structure of ensemble spreads

Figure 3 gives the spatiotemporal evolution of the ensemble spreads of precipitation in the four fundamental experiments. In the chaos seeding experiment, the small initial perturbations of the skin soil moisture propagate into every prognostic variable, producing some ensemble spread of precipitation. The perturbations develop during the 0–24-h forecast period in all four fundamental experiments, with the maximum ensemble spreads located in the same areas. However, the ensemble spread magnitudes of IBp are clearly larger than those of the chaos seeding, Mp, and Gp experiments throughout the forecast. Figure 4 further shows the horizontal distribution of ensemble spreads at the meso-α (greater than 200 km) and meso-β (20–200 km) scales to reveal the scale characteristics of the four fundamental experiments. The results clearly show that the initial and lateral boundary perturbations (Figs. 4b1,b2), model perturbations (Figs. 4c1,c2), and topographic perturbations (Figs. 4d1,d2) grow in the presence of moist convection, and that the perturbation magnitudes at the meso-β scale (Figs. 4b2–d2) are larger than those at the meso-α scale (Figs. 4b1–d1), in agreement with Zhang (2019). The perturbations derived from chaos seeding also develop when moist convection is active (Figs. 4a1,a2) and thereby form ensemble spread structures similar to those of the other perturbation sources.

Fig. 3.

Horizontal distribution of ensemble spreads of (a1),(b1),(c1),(d1) 0–3-h; (a2),(b2),(c2),(d2) 3–6-; (a3),(b3),(c3),(d3) 9–12-; and (a4),(b4),(c4),(d4) 21–24-h accumulated precipitation (mm) in the chaos seeding experiment in (a1)–(a4), the IBp experiment in (b1)–(b4), the Mp experiment in (c1)–(c4), and the Gp experiment in (d1)–(d4). The results are the average of 14 precipitation events.

Citation: Weather and Forecasting 38, 9; 10.1175/WAF-D-22-0177.1

Fig. 4.

Horizontal distribution of ensemble spreads of 21–24-h accumulated precipitation (mm) at (left) meso-α and (right) meso-β scales in the (a1),(a2) chaos seeding; (b1),(b2) IBp; (c1),(c2) Mp; and (d1),(d2) Gp experiments, respectively. The occurrence frequency of 3-h accumulated precipitation exceeding 25 mm during the 36-h control forecast is shown (purple contour shows the values of 3%). The results are the average of 14 precipitation events.


Based on the horizontal distributions of ensemble spread, Fig. 5 gives the absolute correlation coefficients of the ensemble spreads between the chaos seeding experiment and the IBp, Mp, and Gp experiments. The absolute correlation coefficients reach a maximum of 0.88 between the chaos seeding and Gp experiments at the 21-h forecast lead time and a minimum of 0.455 between the chaos seeding and IBp experiments at the 36-h lead time. That is, the chaos seeding experiment yields a spatial structure of ensemble spread similar to that of the other experiments. These results indicate the nature of the problem: the ensemble spreads of precipitation may partly originate from the chaos seeding of numerical noise. That is not to say our intended perturbations, including the initial and lateral boundary perturbations, model perturbations, and topographic perturbations, have no realistic effects. The crucial question is: are the perturbations derived from chaos seeding large enough to contaminate the effects of our intended perturbations?
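The pattern agreement in Fig. 5 amounts to a spatial correlation between two spread fields, which can be sketched as follows (our own minimal illustration):

```python
import numpy as np

def spread_pattern_correlation(spread_a, spread_b):
    """Absolute spatial correlation between two ensemble-spread fields,
    computed over the flattened 2D grid (as plotted in Fig. 5)."""
    a = np.ravel(spread_a).astype(float)
    b = np.ravel(spread_b).astype(float)
    return float(abs(np.corrcoef(a, b)[0, 1]))
```

The absolute value is taken so that fields with the same spatial pattern but opposite anomalies still score high agreement.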

Fig. 5.

Evolution of the absolute correlation coefficients of the horizontal distribution of ensemble spreads for 3-h accumulated precipitation with forecast lead time for the average of 14 precipitation events. Different lines represent the correlation coefficients between different experiments: green represents chaos seeding and Gp, red represents chaos seeding and Mp, and black represents chaos seeding and IBp.


2) Magnitude of ensemble spreads

To reveal the perturbation magnitudes, Fig. 6a shows the domain-averaged ensemble spread for the four fundamental experiments. Comparing chaos seeding with the other three fundamental experiments, the ensemble spreads of chaos seeding are the smallest (red line in Fig. 6a), followed by the topographic perturbations (green line), the model perturbations (black line), and finally the initial and lateral boundary perturbations (blue line). The significance level of the differences between chaos seeding and the other three fundamental experiments exceeds 90% at all forecast lengths, demonstrating that the spread induced by chaos seeding alone accounts for only a fraction of that produced by the other perturbation types. Therefore, although the spatial structures of the chaos seeding spreads appear deceptively similar to those of the other three fundamental experiments, chaos seeding produces less spread than the other perturbation methods, indicating that chaos seeding by itself cannot explain the spread in the perturbation experiments. That is, the initial and lateral boundary perturbations, model perturbations, and topographic perturbations have differing degrees of real dynamical influence. Without awareness of the chaos seeding phenomenon, however, one could not judge whether the perturbation growth is realistic. We hope that this study encourages researchers to pay more attention to the chaos seeding phenomenon and helps avoid misinterpretation of intended perturbations.

Fig. 6.

Evolution of (a) domain-averaged ensemble spread magnitude (mm) and (b) the correspondence ratio (CR) for 3-h accumulated precipitation with forecast lead time for the average of 14 precipitation events. A threshold of 0.1 mm (3 h)−1 is applied to preclude dry events for calculating CR. The different lines represent different experiments: the red line represents chaos seeding experiments, the blue line represents IBp, the black line represents Mp, and the green line represents Gp. Marks on the upper axis show the forecast lengths when the significance level of the ensemble spread and CR differences between different experiments is larger than 90%, with red circles denoting the comparison between Gp and chaos seeding, red crisscrosses denoting Mp and chaos seeding, and red triangles denoting IBp and chaos seeding.


The correspondence ratio (CR) is also useful for measuring ensemble spread in space. Figure 6b shows the averaged CR values for 3-h accumulated precipitation; a threshold of 0.1 mm (3 h)−1 is applied to exclude dry events. Among the four fundamental experiments, the chaos seeding experiment (red line in Fig. 6b) has the largest CR values, consistent with the smallest spatial spread, while the topographic perturbations (green line), model perturbations (black line), and initial and lateral boundary perturbations (blue line) have the second largest, third largest, and smallest CR values, respectively. The chaos seeding experiment also has statistically significant CR differences from the other perturbation sources, indicating that although the chaos seeding phenomenon produces some spatial spread, chaos seeding by itself cannot explain the CR values in the perturbation experiments. The CR results agree with those of the ensemble spread magnitudes.
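As a sketch, assuming the common intersection-over-union form of the correspondence ratio (the paper's exact formulation may differ), CR can be computed as:

```python
import numpy as np

def correspondence_ratio(ens_precip, thresh=0.1):
    """Correspondence ratio: intersection over union of member event areas.

    ens_precip: (n_members, ny, nx) 3-h accumulations; thresh: event
    threshold in mm. CR = 1 when every member places the event at exactly
    the same grid points (no spatial spread); CR decreases toward 0 as the
    member footprints diverge.
    """
    events = ens_precip > thresh                     # per-member event masks
    intersection = np.logical_and.reduce(events).sum()
    union = np.logical_or.reduce(events).sum()
    return float(intersection / union) if union > 0 else float("nan")
```

Under this definition a large CR indicates small spatial spread, which is why the chaos seeding curve sits above the others in Fig. 6b.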

b. Probabilistic forecasts

As noted above, the neighborhood-based probabilities are interpreted as ensemble-mean probabilities of event occurrence at the grid scale given a neighborhood length scale. We examined the effects of multiple neighborhood length scales on the probabilistic forecasts and found that FSS increases with the neighborhood length scale at all forecast lead times and precipitation thresholds, whereas the area under the ROC curve (AROC; e.g., Mason 1982) and the reliability diagrams behave in a more complex manner. Typically, reliability improves as the neighborhood length scale increases up to a certain point and then saturates. Similarly, AROC improves up to a certain point and then saturates or even degrades because of the loss of sharpness (figures not shown). For brevity, we show results for only one neighborhood length scale. There is no perfect answer as to which length scale to use; however, Roberts and Lean (2008) showed that the minimum useful scale is the neighborhood length scale at which FSS ≈ 0.5, and an overly large neighborhood would smooth away relevant storm-scale features and cause reliability and AROC to saturate or degrade. For these reasons, we used a neighborhood length scale of 35 grid spacings, which makes the FSS larger than 0.5 at most forecast lead times and thresholds.
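The FSS computation for one neighborhood length scale can be sketched as follows (a plain-NumPy illustration with neighborhood boxes clipped at the domain boundary; the operational code may treat edges differently):

```python
import numpy as np

def box_fraction(mask, n):
    """Fraction of event points in an n x n neighborhood around each grid
    point, via an integral image; boxes are clipped at the domain edges."""
    h = n // 2
    padded = np.pad(mask.astype(float), ((1, 0), (1, 0)))
    s = np.cumsum(np.cumsum(padded, axis=0), axis=1)  # s[i, j] = sum of mask[:i, :j]
    ny, nx = mask.shape
    out = np.empty((ny, nx))
    for i in range(ny):
        i0, i1 = max(i - h, 0), min(i + h + 1, ny)
        for j in range(nx):
            j0, j1 = max(j - h, 0), min(j + h + 1, nx)
            box = s[i1, j1] - s[i0, j1] - s[i1, j0] + s[i0, j0]
            out[i, j] = box / ((i1 - i0) * (j1 - j0))
    return out

def fss(fcst, obs, thresh, n):
    """Fractions skill score (Roberts and Lean 2008) for one field pair at
    one threshold and neighborhood length n (in grid points); 1 = perfect."""
    pf = box_fraction(fcst > thresh, n)
    po = box_fraction(obs > thresh, n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return float(1.0 - mse / mse_ref) if mse_ref > 0 else float("nan")
```

In this study n = 35 (grid points), and the binary forecast field is replaced by the ensemble neighborhood probability.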

Figure 7 gives the FSS and AROC scores of 3-h accumulated precipitation at the 0.1-, 3-, and 10-mm thresholds. First, the comparisons of IBp (blue columns in Fig. 7) and Mp (black columns) with chaos seeding (red columns) reveal that the initial and lateral boundary perturbations and the model perturbations yield larger FSS and AROC scores than chaos seeding, with the differences statistically significant at the 90% level for most precipitation thresholds and forecast lead times (Tables 6 and 7). Second, the FSS differences between the chaos seeding and Gp experiments (green columns in Fig. 7) are statistically significant at the 90% level only at the 3- and 10-mm thresholds at the 24–36-h lead time (Table 8), and the AROC differences are never significant at the 90% level, illustrating that the topographic perturbations improve the FSS and AROC scores only slightly relative to chaos seeding. The reason the topographic perturbations are usually not significantly different from chaos seeding may be that chaos seeding does not reflect the real dynamics/physics of the atmosphere and the topographic perturbations have only a small degree of real dynamical influence. In contrast, the initial and lateral boundary perturbations and the model perturbations are weather dependent and have a large degree of real dynamical influence.
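AROC can likewise be sketched by stepping through probability thresholds and integrating the resulting ROC curve trapezoidally (our own minimal illustration):

```python
import numpy as np

def aroc(probs, event, thresholds=None):
    """Area under the ROC curve for probabilistic forecasts of a binary
    event; 0.5 indicates no skill and 1 perfect discrimination.

    probs: forecast probabilities; event: observed occurrence (bool).
    """
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 11)
    event = np.asarray(event, dtype=bool)
    hr, far = [1.0], [1.0]                 # start at the (1, 1) corner
    for t in thresholds:
        yes = np.asarray(probs) >= t
        hits = np.sum(yes & event)
        misses = np.sum(~yes & event)
        fas = np.sum(yes & ~event)
        cns = np.sum(~yes & ~event)
        hr.append(hits / max(hits + misses, 1))
        far.append(fas / max(fas + cns, 1))
    hr.append(0.0)
    far.append(0.0)                        # end at the (0, 0) corner
    area = 0.0
    for k in range(len(hr) - 1):           # trapezoid rule; far decreases
        area += 0.5 * (hr[k] + hr[k + 1]) * (far[k] - far[k + 1])
    return float(area)
```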

Fig. 7.

(top) The fractions skill score (FSS) and (bottom) the area under ROC diagrams (AROC) for 3-h accumulated precipitation with the neighborhood length scales of 35 times grid spacings at the different precipitation thresholds: (a1),(b1) 0.1, (a2),(b2) 3, and (a3),(b3) 10 mm, aggregated over all 3-h forecasts during 0–12-, 12–24-, and 24–36-h forecast lead time for each case and averaged over 14 precipitation cases. The different colors represent different experiments: red represents chaos seeding experiments, blue represents IBp, black represents Mp, and green represents Gp.


Table 6.

The significance level of the FSS and AROC differences, and of the differences in the absolute differences between the area under the diagonal line and the area under the reliability diagrams across all bins, between the IBp and chaos seeding experiments at the 0.1-, 3-, and 10-mm precipitation thresholds, aggregated over all 3-h forecasts during the 0–12-, 12–24-, and 24–36-h forecast lead times for each case. Tables 6–8 show instances where the significance levels are at least 75%, at least 80%, at least 85%, and at least 90%. The FSS, AROC, and reliability diagrams use a neighborhood length scale (NLS) of 35 grid spacings. The "×" indicates that the significance level is smaller than 75%, and bold italic font indicates that the significance level is at least 90%. The statistical significance of reliability was calculated only when both comparison experiments had more than 500 grid points within the particular forecast probability bin.

Table 7.

As in Table 6, but for the significance level of the metric differences between Mp and chaos seeding experiments.

Table 8.

As in Table 6, but for the significance level of the metric differences between Gp and chaos seeding experiments.


Figure 8 shows the reliability diagrams of 3-h accumulated precipitation with a neighborhood length scale of 35 grid spacings. The reliability worsens and becomes more overconfident with forecast lead time, especially at high precipitation thresholds (Fig. 8c3). The comparisons among the four fundamental experiments reveal that the initial and lateral boundary perturbations (blue line in Fig. 8), model perturbations (black line), and topographic perturbations (green line) have slightly better reliability than the chaos seeding experiment (red line), although significant differences occur rarely (Tables 6–8). Overall, the differences in the reliability diagrams between chaos seeding and our intended perturbations are smaller than those in FSS and AROC. That is, unlike most metrics, the reliability results may be partly induced by the chaos seeding phenomenon.
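The reliability curves in Fig. 8 bin the neighborhood probabilities and compare each bin's observed event frequency with its forecast probability; a sketch follows (the bin masking with 500 points matches the figure caption, while other details are our own):

```python
import numpy as np

def reliability_points(probs, event, n_bins=10, min_count=500):
    """Observed event frequency per forecast-probability bin.

    Bins with fewer than min_count points are masked with NaN, as in
    Fig. 8. Returns (bin_centers, observed_frequency); a perfectly
    reliable forecast lies on the diagonal frequency = probability.
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    which = np.clip(np.digitize(probs, bins) - 1, 0, n_bins - 1)
    centers = 0.5 * (bins[:-1] + bins[1:])
    freq = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = which == b
        if sel.sum() >= min_count:
            freq[b] = np.asarray(event, dtype=float)[sel].mean()
    return centers, freq
```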

Fig. 8.

The reliability diagrams for 3-h accumulated precipitation with the neighborhood length scales of 35 times grid spacings at the different precipitation thresholds: (left) 0.1, (center) 3, and (right) 10 mm, aggregated over all 3-h forecasts during (a1)–(a3) 0–12-, (b1)–(b3) 12–24-, (c1)–(c3) 24–36-h forecast lead time for each case and averaged over 14 precipitation cases. The different colors represent different experiments: the red line represents chaos seeding experiments, the blue line represents IBp, the black line represents Mp, the green line represents Gp, and the black dashed line represents the perfect reliability. Note that values are not drawn for a particular bin if there were smaller than 500 grid points with forecast probabilities in that bin over all 14 precipitation cases and verification domains.


4. The roles of chaos seeding in dynamical variables

Figure 9 gives the evolution of the domain-averaged consistency and CRPS with forecast hour for zonal wind. First, the consistency is smaller than 1 at most forecast lengths, indicating that the ensembles are underdispersive. Among the four fundamental experiments, the initial and lateral boundary perturbations (blue line in Fig. 9) have the best consistency scores with improved dispersion, the model perturbations (black line) and topographic perturbations (green line) follow, and the chaos seeding experiment (red line) is the most underdispersive. Furthermore, the consistency scores of the chaos seeding experiment differ significantly from those of the IBp, Mp, and Gp experiments (red marks in Fig. 9). Second, the CRPS scores of the chaos seeding experiment are higher (i.e., larger probabilistic forecast errors and worse skill) than those of the IBp, Mp, and Gp experiments, with the CRPS differences statistically significant at the 90% level at almost all forecast lengths. Other dynamical variables behave similarly to zonal wind (not shown). In summary, chaos seeding differs statistically significantly from our intended perturbations in the ensemble spreads and probabilistic forecast skill of the dynamical variables; chaos seeding by itself cannot explain the consistency and CRPS scores obtained with our intended perturbations.

Fig. 9.

Evolution of the (top) domain-averaged consistency between the ensemble spread and ensemble mean RMSE and (bottom) the continuous ranked probability score (CRPS) with forecast lead time for (a1),(b1) 200-, (a2),(b2) 500-, and (a3),(b3) 850-hPa zonal wind. The different lines represent different experiments: the red line represents chaos seeding experiments, the blue line represents IBp, the black line represents Mp, and the green line represents Gp. Marks on the upper axis show the forecast hours when the significance level of the consistency and CRPS differences between different experiments is larger than 90%, with red circles denoting the comparison between chaos seeding and Gp, red crisscrosses denoting chaos seeding and Mp, and red triangles denoting chaos seeding and IBp. The results are the average of 14 precipitation events.


5. The roles of model and topographic perturbations in precipitation and dynamical variables

a. The roles of model and topographic perturbations in precipitation

1) Ensemble spreads

To reveal the effects of the model and topographic perturbations on the spatial structure of the precipitation perturbations, the absolute correlation coefficients of the spatial distributions of ensemble spread between experiments are presented in Fig. 10a. The correlation coefficients between the IBp+Mp and IBp experiments (red line in Fig. 10a) are similar to those between the IBp+Mp+Gp and IBp+Mp experiments (blue line), revealing that the effects of the model and topographic perturbations on the perturbation structure of precipitation are comparable, and that their influence increases with forecast hour.

Fig. 10.

(a) As in Fig. 5, but for the correlation coefficients between different experiments. (b) Evolution of the normalized variance difference for 3-h accumulated precipitation with forecast hour between different experiments. The different lines represent the metrics calculated from different experiments: the red line represents IBp+Mp and IBp and the blue line represents IBp+Mp+Gp and IBp+Mp. The results are the average of 14 precipitation events.


Having examined the perturbation spatial structure, we further analyze the effects of the model and topographic perturbations on the ensemble spread magnitudes using the NVD metric, as displayed in Fig. 10b. The NVD values calculated from IBp+Mp and IBp (red line in Fig. 10b) are larger than 0 at all forecast hours, demonstrating that model perturbations increase the ensemble spread. Because current operational ensemble prediction systems worldwide are underdispersive (i.e., the ensemble-mean forecast error is significantly larger than the ensemble spread) (e.g., McCollor and Stull 2009; García-Moya et al. 2011; Wang et al. 2018), more spread is desirable. In contrast, the impact of the topographic perturbations on the spread magnitudes is small overall (blue line in Fig. 10b), with NVD values near 0, and is mainly concentrated in the first 0–6 h.
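As a sketch of this comparison, assuming NVD is the normalized difference of the domain-averaged ensemble variances of the two experiments (the paper's exact definition is given in its methods section and may differ):

```python
import numpy as np

def normalized_variance_difference(ens_a, ens_b):
    """Normalized variance difference between two experiments (a sketch,
    assuming a variance-ratio form of NVD).

    ens_*: (n_members, n_points) precipitation forecasts. Positive values
    mean experiment a (e.g., IBp+Mp) has more domain-averaged ensemble
    variance than experiment b (e.g., IBp); values near 0 mean the added
    perturbation source contributes little extra spread.
    """
    va = np.mean(np.var(ens_a, axis=0, ddof=1))
    vb = np.mean(np.var(ens_b, axis=0, ddof=1))
    return float((va - vb) / (va + vb))
```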

2) Probabilistic forecasts

Figure 11 shows the FSS, AROC, and reliability diagrams for the IBp, IBp+Mp, and IBp+Mp+Gp experiments. First, the FSS and AROC scores of IBp+Mp (blue columns in Fig. 11) are larger than those of IBp (red columns) at most forecast lengths, with the FSS differences statistically significant at the 90% level during the first 24 forecast hours for all precipitation thresholds and the AROC differences significant at the 90% level at most forecast lengths and thresholds (red marks in Fig. 11), implying that model perturbations improve the FSS and AROC scores. Second, the comparison between IBp+Mp (blue columns) and IBp+Mp+Gp (black columns) shows that the topographic perturbations have little effect on the FSS and AROC scores, with the differences failing the significance test at the 90% level. Finally, both the model and topographic perturbations have little impact on the reliability diagrams.

Fig. 11.

(top) The fractions skill score (FSS), (middle) the area under ROC diagrams (AROC), and (bottom) the reliability diagrams with the neighborhood length scale of 35 times grid spacings at the different precipitation thresholds: (a1)–(c1) 0.1, (a2)–(c2) 3, (a3)–(c3) 10 mm, aggregated over all 3-h forecasts during 0–12-, 12–24-, and 24–36-h forecast lead time for each case and averaged over 14 precipitation events. The reliability diagrams only display the results aggregated over 3-h forecasts during 0–12-h for each case and averaged over 14 precipitation events, and the red and blue lines are obscured by the black lines in (c1) and (c3). The different colors represent different experiments: red represents IBp experiments, blue represents IBp+Mp, black represents IBp+Mp+Gp, and black dashed lines in (c1)–(c3) represent the perfect reliability. Marks show whether the differences of FSS, AROC scores, and reliability diagrams between different experiments are statistically significant at the 90% level, with red circles denoting the comparison between IBp+Mp and IBp, and red crisscrosses denoting IBp+Mp+Gp and IBp+Mp.


b. The roles of model and topographic perturbations in dynamical variables

Figure 12 shows the consistency and CRPS of the IBp, IBp+Mp, and IBp+Mp+Gp experiments. Including model perturbations improves the consistency of the mid- and low-level variables, especially at later forecast hours (Figs. 12a2,a3), with the consistency differences statistically significant at the 90% level (red circles in Figs. 12a2,a3). IBp+Mp also yields smaller CRPS scores than IBp for the mid- and low-level variables at some forecast lengths (red circles in Figs. 12b2,b3), demonstrating that model perturbations improve probabilistic forecast skill. In contrast, the topographic perturbations improve the spread–skill relationship and CRPS only a little, with significant differences confined to a few forecast hours and variables.

Fig. 12.

As in Fig. 9, but for (a1),(b1) 200-, (a2),(b2) 500-, and (a3),(b3) 850-hPa zonal wind. The different lines represent different experiments: the red line represents IBp, the blue line represents IBp+Mp, and the black line represents IBp+Mp+Gp. Black lines are obscured by the blue lines at most forecast lead times. Marks on the upper axis show the forecast hours when the significance level of the consistency and CRPS differences between different experiments is larger than 90%, with red circles denoting the comparison between IBp+Mp and IBp, and red crisscrosses denoting IBp+Mp+Gp and IBp+Mp. The results are the average of 14 precipitation events.


6. Summary and discussion

To construct reasonable perturbation methods that account for the error sources in CPEPSs and improve the skill of convective weather forecasts, it is vital to gain insight into the effects of multiple perturbations. With this motivation, the present study investigated the roles of chaos seeding and of perturbations of multiple types, including model perturbations and topographic perturbations, in convection-permitting ensemble forecasting using an experimental system (viz., CMA–CPEPS). The chaos seeding experiment served as a benchmark against which to compare the effects of the intended perturbations to which we ascribe the results. Six comparison experiments were conducted for 14 heavy rainfall events over southern China. The main results are summarized as follows:

  1. In the chaos seeding experiment, the tiny, localized perturbations of the skin soil moisture propagate into the whole analysis domain within an hour, faster than any realistic process, and spread to every prognostic variable, producing some ensemble spread of precipitation. The perturbations derived from chaos seeding develop rapidly when moist convection is active and thereby yield perturbation structures similar to those of our intended perturbations. These results reveal the chaos seeding phenomenon of numerical noise.

  2. The comparisons between chaos seeding and IBp, Mp, and Gp reveal whether the perturbations derived from chaos seeding can contaminate our intended perturbations. First, for the ensemble spreads of precipitation, although the chaos seeding experiment exhibits a spatial structure of ensemble spread similar to that of the IBp, Mp, and Gp experiments, its spread magnitudes differ statistically from those of the other three fundamental experiments, implying that chaos seeding by itself cannot explain the ensemble spreads produced by our intended perturbations. Second, for the probabilistic forecasts of precipitation, the initial and lateral boundary perturbations and the model perturbations have significantly larger FSS and AROC scores and slightly better reliability than chaos seeding, whereas the topographic perturbations improve FSS, AROC, and reliability only a little. These differing performances relative to chaos seeding may arise because chaos seeding does not reflect the real dynamics of the atmosphere, while the initial and lateral boundary perturbations, model perturbations, and topographic perturbations have differing degrees of real dynamical influence. Third, the spread–skill relationships and probabilistic forecast skills of the dynamical variables in the chaos seeding experiment differ significantly from those of our intended perturbations.

  3. Adding model perturbations to the initial and lateral boundary perturbations shows that the model perturbations strongly affect the ensemble spread magnitudes and structures, improve the FSS and AROC scores of precipitation and the consistency of the mid- and low-level dynamical variables, and improve the probabilistic forecast skill of the dynamical variables. Adding topographic perturbations to the combination of initial and lateral boundary perturbations and model perturbations shows that the topographic perturbations have large impacts on the perturbation structure of precipitation but small effects on the spread magnitudes. Furthermore, the topographic perturbations have little effect on the FSS, AROC, and reliability diagrams of precipitation or on the spread–skill relationships and CRPS of the dynamical variables. Overall, the topographic perturbations have only small impacts. Given the advantages of the IBp+Mp experiment over the other experiments, it is recommended that the design of CMA–CPEPS combine the initial and lateral boundary perturbations and the model perturbations.

However, owing to the large computational expense of ensemble experiments, only 14 cases were selected for this study. In the future, with improved computing resources, these studies should be expanded to more cases to separate the effects in strong-forcing cases from those in weak-forcing cases. Second, the impacts of topographic uncertainties are likely confined to certain geographic areas and model configurations, and our findings concerning topographic uncertainty apply only to southern China; the roles of topographic perturbations in ensemble forecasting over other regions should be studied in the future. Third, how different parameter settings of SPPT and different Mp runs affect the reported results remains an open question. Finally, the chaos seeding phenomenon may lead to misinterpretation of perturbation experiments. Ancell et al. (2018) proposed three methods to mitigate such misinterpretation, including ensemble sensitivity analysis, empirical orthogonal analysis, and the use of double precision. In this study, we focused first on the chaos seeding phenomenon and compared it with our intended perturbations to determine whether chaos seeding influences our results; methods to mitigate the misinterpretation induced by chaos seeding and to discriminate realistic impacts from chaos seeding remain to be studied in our ongoing work. Systematically studying all of these aspects should help in designing the operational CMA–CPEPS and improving the skill of convective weather forecasts.

Overall, we hope that this study helps researchers focus on the chaos seeding phenomenon when studying the effects of different perturbations on ensemble forecasting, and that it provides a complementary understanding of the effects of model perturbations and topographic perturbations on the quality of CPEPSs relative to previous studies.

Acknowledgments.

We express our great appreciation to the three anonymous reviewers for their constructive and valuable comments, which have been very helpful in revising and improving our manuscript, and we are grateful to the editor for patiently handling our manuscript and for the warm encouragement. This work is sponsored by the National Natural Science Foundation of China (Grant 42105154) and the National Science and Technology Major Project of the Ministry of Science and Technology of China (Grant 2018YFC1507405).

Data availability statement.

The CMA Multi–source Merged Precipitation Analysis System (CMPAS-V2.1) data are available online in archives hosted by the National Meteorological Information Center of China Meteorological Administration.

REFERENCES

  • Ancell, B. C., A. Bogusz, M. J. Lauridsen, and C. J. Nauert, 2018: Seeding chaos: The dire consequences of numerical noise in NWP perturbation experiments. Bull. Amer. Meteor. Soc., 99, 615–628, https://doi.org/10.1175/BAMS-D-17-0129.1.

  • Baker, L. H., A. C. Rudd, S. Migliorini, and R. N. Bannister, 2014: Representation of model error in a convective-scale ensemble prediction system. Nonlinear Processes Geophys., 21, 19–39, https://doi.org/10.5194/npg-21-19-2014.

  • Beljaars, A. C. M., A. R. Brown, and N. Wood, 2004: A new parameterization of turbulent orographic form drag. Quart. J. Roy. Meteor. Soc., 130, 1327–1347, https://doi.org/10.1256/qj.03.73.

  • Berner, J., S.-Y. Ha, J. P. Hacker, A. Fournier, and C. Snyder, 2011: Model uncertainty in a mesoscale ensemble prediction system: Stochastic versus multiphysics representations. Mon. Wea. Rev., 139, 1972–1995, https://doi.org/10.1175/2010MWR3595.1.

  • Bishop, C. H., T. R. Holt, J. Nachamkin, S. Chen, J. G. McLay, J. D. Doyle, and W. T. Thompson, 2009: Regional ensemble forecasts using the ensemble transform technique. Mon. Wea. Rev., 137, 288–298, https://doi.org/10.1175/2008MWR2559.1.

  • Bouttier, F., B. Vié, O. Nuissier, and L. Raynaud, 2012: Impact of stochastic physics in a convection-permitting ensemble. Mon. Wea. Rev., 140, 3706–3721, https://doi.org/10.1175/MWR-D-12-00031.1.

  • Bröcker, J., and L. A. Smith, 2007: Increasing the reliability of reliability diagrams. Wea. Forecasting, 22, 651–661, https://doi.org/10.1175/WAF993.1.

  • Chen, D., and Coauthors, 2008: New generation of multi-scale NWP system (GRAPES): General scientific design. Chin. Sci. Bull., 53, 3433–3445, https://doi.org/10.1007/s11434-008-0494-z.

    • Search Google Scholar
    • Export Citation
  • Chen, J., J. Wang, J. Du, Y. Xia, F. Chen, and H. Li, 2020: Forecast bias correction through model integration: A dynamical wholesale approach. Quart. J. Roy. Meteor. Soc., 146, 1149–1168, https://doi.org/10.1002/qj.3730.
  • Clark, A. J., and Coauthors, 2011: Probabilistic precipitation forecast skill as a function of ensemble size and spatial scale in a convection-allowing ensemble. Mon. Wea. Rev., 139, 1410–1418, https://doi.org/10.1175/2010MWR3624.1.

  • Dudhia, J., 1989: Numerical study of convection observed during the Winter Monsoon Experiment using a mesoscale two-dimensional model. J. Atmos. Sci., 46, 3077–3107, https://doi.org/10.1175/1520-0469(1989)046<3077:NSOCOD>2.0.CO;2.

  • Fleury, A., F. Bouttier, and F. Couvreux, 2022: Process-oriented stochastic perturbations applied to the parameterization of turbulence and shallow convection for ensemble prediction. Quart. J. Roy. Meteor. Soc., 148, 981–1000, https://doi.org/10.1002/qj.4242.

  • Frogner, I.-L., and Coauthors, 2019: HarmonEPS—The HARMONIE ensemble prediction system. Wea. Forecasting, 34, 1909–1937, https://doi.org/10.1175/WAF-D-19-0030.1.

  • Fu, P., K. Zhu, K. Zhao, B. Zhou, and M. Xue, 2019: Role of the nocturnal low-level jet in the formation of the morning precipitation peak over the Dabie Mountains. Adv. Atmos. Sci., 36, 15–28, https://doi.org/10.1007/s00376-018-8095-5.

  • Fujita, T., D. J. Stensrud, and D. C. Dowell, 2007: Surface data assimilation using an ensemble Kalman filter approach with initial condition and model physics uncertainties. Mon. Wea. Rev., 135, 1846–1868, https://doi.org/10.1175/MWR3391.1.

  • García-Moya, J.-A., A. Callado, P. Escribá, C. Santos, D. Santos-Muñoz, and J. Simarro, 2011: Predictability of short-range forecasting: A multimodel approach. Tellus, 63A, 550–563, https://doi.org/10.1111/j.1600-0870.2010.00506.x.

  • Gebhardt, C., S. E. Theis, M. Paulat, and Z. B. Bouallègue, 2011: Uncertainties in COSMO-DE precipitation forecasts introduced by model perturbations and variation of lateral boundaries. Atmos. Res., 100, 168–177, https://doi.org/10.1016/j.atmosres.2010.12.008.

  • Golding, B. W., and Coauthors, 2014: Forecasting capabilities for the London 2012 Olympics. Bull. Amer. Meteor. Soc., 95, 883–896, https://doi.org/10.1175/BAMS-D-13-00102.1.

  • Griffin, S. M., J. A. Otkin, G. Thompson, M. Frediani, J. Berner, and F. Kong, 2020: Assessing the impact of stochastic perturbations in cloud microphysics using GOES-16 infrared brightness temperature. Mon. Wea. Rev., 148, 3111–3137, https://doi.org/10.1175/MWR-D-20-0078.1.

  • Guan, C., and Q. Chen, 2008: Experiments and evaluations of global medium range forecast system of T639L60 (in Chinese). Meteor. Monogr., 34, 11–16.

  • Hagelin, S., J. Son, R. Swinbank, A. McCabe, N. Roberts, and W. Tennant, 2017: The Met Office convective-scale ensemble, MOGREPS-UK. Quart. J. Roy. Meteor. Soc., 143, 2846–2861, https://doi.org/10.1002/qj.3135.

  • Hamill, T. M., 1999: Hypothesis tests for evaluating numerical precipitation forecasts. Wea. Forecasting, 14, 155–167, https://doi.org/10.1175/1520-0434(1999)014<0155:HTFENP>2.0.CO;2.

  • Harnisch, F., and C. Keil, 2015: Initial conditions for convective-scale ensemble forecasting provided by ensemble data assimilation. Mon. Wea. Rev., 143, 1583–1600, https://doi.org/10.1175/MWR-D-14-00209.1.

  • Hersbach, H., 2000: Decomposition of the continuous ranked probability score for ensemble prediction systems. Wea. Forecasting, 15, 559–570, https://doi.org/10.1175/1520-0434(2000)015<0559:DOTCRP>2.0.CO;2.

  • Hodyss, D., and S. J. Majumdar, 2007: The contamination of ‘data impact’ in global models by rapidly growing mesoscale instabilities. Quart. J. Roy. Meteor. Soc., 133, 1865–1875, https://doi.org/10.1002/qj.157.

  • Hohenegger, C., and C. Schär, 2007: Predictability and error growth dynamics in cloud-resolving models. J. Atmos. Sci., 64, 4467–4478, https://doi.org/10.1175/2007JAS2143.1.

  • Hong, S.-Y., and H.-L. Pan, 1996: Nonlocal boundary layer vertical diffusion in a medium-range forecast model. Mon. Wea. Rev., 124, 2322–2339, https://doi.org/10.1175/1520-0493(1996)124<2322:NBLVDI>2.0.CO;2.

  • Hong, S.-Y., and J. O. Lim, 2006: The WRF single-moment 6-class microphysics scheme (WSM6). J. Korean Meteor. Soc., 42, 129–151.

  • Hopson, T. M., 2014: Assessing the ensemble spread–error relationship. Mon. Wea. Rev., 142, 1125–1142, https://doi.org/10.1175/MWR-D-12-00111.1.

  • Johnson, A., and X. Wang, 2016: A study of multiscale initial condition perturbation methods for convection-permitting ensemble forecasts. Mon. Wea. Rev., 144, 2579–2604, https://doi.org/10.1175/MWR-D-16-0056.1.

  • Keil, C., F. Heinlein, and G. C. Craig, 2014: The convective adjustment time-scale as indicator of predictability of convective precipitation. Quart. J. Roy. Meteor. Soc., 140, 480–490, https://doi.org/10.1002/qj.2143.

  • Keresturi, E., Y. Wang, F. Meier, F. Weidle, C. Wittmann, and A. Atencia, 2019: Improving initial condition perturbations in a convection-permitting ensemble prediction system. Quart. J. Roy. Meteor. Soc., 145, 993–1012, https://doi.org/10.1002/qj.3473.

  • Kühnlein, C., C. Keil, G. C. Craig, and C. Gebhardt, 2014: The impact of downscaled initial condition perturbations on convective-scale ensemble forecasts of precipitation. Quart. J. Roy. Meteor. Soc., 140, 1552–1562, https://doi.org/10.1002/qj.2238.

  • Kunii, M., 2014: Mesoscale data assimilation for a local severe rainfall event with the NHM-LETKF system. Wea. Forecasting, 29, 1093–1105, https://doi.org/10.1175/WAF-D-13-00032.1.

  • Leoncini, G., R. S. Plant, S. L. Gray, and P. A. Clark, 2010: Perturbation growth at the convective scale for CSIP IOP18. Quart. J. Roy. Meteor. Soc., 136, 653–670, https://doi.org/10.1002/qj.587.

  • Li, J., J. Du, Y. Liu, and J. Y. Xu, 2017: Similarities and differences in the evolution of ensemble spread using various ensemble perturbation methods including topography perturbation. Acta Meteor. Sin., 75, 123–146, https://doi.org/10.11676/qxxb2017.011.

  • Li, J., J. Du, J. Xiong, and M. Wang, 2021: Perturbing topography in a convection-allowing ensemble prediction system for heavy rain forecasts. J. Geophys. Res. Atmos., 126, e2020JD033898, https://doi.org/10.1029/2020JD033898.

  • Li, X., M. Charron, L. Spacek, and G. Candille, 2008: A regional ensemble prediction system based on moist targeted singular vectors and stochastic parameter perturbations. Mon. Wea. Rev., 136, 443–462, https://doi.org/10.1175/2007MWR2109.1.

  • Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130–141, https://doi.org/10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2.

  • Luo, Y., and Coauthors, 2017: The Southern China Monsoon Rainfall Experiment (SCMREX). Bull. Amer. Meteor. Soc., 98, 999–1013, https://doi.org/10.1175/BAMS-D-15-00235.1.

  • Lupo, K. M., R. D. Torn, and S. C. Yang, 2020: Evaluation of stochastic perturbed parameterization tendencies on convective-permitting ensemble forecasts of heavy rainfall events in New York and Taiwan. Wea. Forecasting, 35, 5–24, https://doi.org/10.1175/WAF-D-19-0064.1.

  • Mahrt, L., and M. Ek, 1984: The influence of atmospheric stability on potential evaporation. J. Climate Appl. Meteor., 23, 222–234, https://doi.org/10.1175/1520-0450(1984)023<0222:TIOASO>2.0.CO;2.

  • Mason, I., 1982: A model for assessment of weather forecasts. Aust. Meteor. Mag., 30, 291–303.

  • McCollor, D., and R. Stull, 2009: Evaluation of probabilistic medium-range temperature forecasts from the North American ensemble forecast system. Wea. Forecasting, 24, 3–17, https://doi.org/10.1175/2008WAF2222130.1.

  • Mittermaier, M. P., 2014: A strategy for verifying near-convection-resolving model forecasts at observing sites. Wea. Forecasting, 29, 185–204, https://doi.org/10.1175/WAF-D-12-00075.1.

  • Mlawer, E. J., S. J. Taubman, P. D. Brown, M. J. Iacono, and S. A. Clough, 1997: Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res., 102, 16 663–16 682, https://doi.org/10.1029/97JD00237.

  • Mori, P., T. Schwitalla, M. B. Ware, K. Warrach-Sagi, and V. Wulfmeyer, 2021: Downscaling of seasonal ensemble forecasts to the convection-permitting scale over the Horn of Africa using the WRF model. Int. J. Climatol., 41, E1791–E1811, https://doi.org/10.1002/joc.6809.

  • Nishizawa, S., T. Yamaura, and Y. Kajikawa, 2021: Influence of submesoscale topography on daytime precipitation associated with thermally driven local circulations over a mountainous region. J. Atmos. Sci., 78, 2511–2532, https://doi.org/10.1175/JAS-D-20-0332.1.

  • Noilhan, J., and S. Planton, 1989: A simple parameterization of land surface processes for meteorological models. Mon. Wea. Rev., 117, 536–549, https://doi.org/10.1175/1520-0493(1989)117<0536:ASPOLS>2.0.CO;2.

  • Oizumi, T., K. Saito, J. Ito, T. Kuroda, and L. Duc, 2018: Ultrahigh-resolution numerical weather prediction with a large domain using the K computer: A case study of the Izu Oshima heavy rainfall events on October 15–16, 2013. J. Meteor. Soc. Japan, 96, 25–54, https://doi.org/10.2151/jmsj.2018-006.
  • Pan, Y., Y. Shen, J. Yu, and A. Xiong, 2015: An experiment of high-resolution gauge-radar-satellite combined precipitation retrieval based on the Bayesian merging method. Acta Meteor. Sin., 73, 177–186, https://doi.org/10.11676/qxxb2015.010.
  • Peralta, C., Z. Ben Bouallègue, S. Theis, C. Gebhardt, and M. Buchhold, 2012: Accounting for initial condition uncertainties in COSMO-DE-EPS. J. Geophys. Res., 117, D07108, https://doi.org/10.1029/2011JD016581.

  • Raynaud, L., and F. Bouttier, 2016: Comparison of initial perturbation methods for ensemble prediction at convective scale. Quart. J. Roy. Meteor. Soc., 142, 854–866, https://doi.org/10.1002/qj.2686.

  • Roberts, N. M., and H. W. Lean, 2008: Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Wea. Rev., 136, 78–97, https://doi.org/10.1175/2007MWR2123.1.

  • Romine, G. S., C. S. Schwartz, J. Berner, K. R. Fossell, C. Snyder, J. L. Anderson, and M. L. Weisman, 2014: Representing forecast error in a convective-permitting ensemble system. Mon. Wea. Rev., 142, 4519–4541, https://doi.org/10.1175/MWR-D-14-00100.1.

  • Saito, K., H. Seko, M. Kunii, and T. Miyoshi, 2012: Effect of lateral boundary perturbations on the breeding method and the local ensemble transform Kalman filter for mesoscale ensemble prediction. Tellus, 64A, 11594, https://doi.org/10.3402/tellusa.v64i0.11594.

  • Schwartz, C. S., and R. A. Sobash, 2017: Generating probabilistic forecasts from convection-allowing ensembles using neighborhood approaches: A review and recommendations. Mon. Wea. Rev., 145, 3397–3418, https://doi.org/10.1175/MWR-D-16-0400.1.

  • Schwartz, C. S., and Coauthors, 2009: Next-day convection-allowing WRF model guidance: A second look at 2-km versus 4-km grid spacing. Mon. Wea. Rev., 137, 3351–3372, https://doi.org/10.1175/2009MWR2924.1.

  • Schwartz, C. S., G. S. Romine, R. A. Sobash, K. R. Fossell, and M. L. Weisman, 2015: NCAR’s experimental real-time convection-allowing ensemble prediction system. Wea. Forecasting, 30, 1645–1654, https://doi.org/10.1175/WAF-D-15-0103.1.

  • Schwartz, C. S., M. Wong, G. S. Romine, R. A. Sobash, and K. R. Fossell, 2020: Initial conditions for convection-allowing ensembles over the conterminous United States. Mon. Wea. Rev., 148, 2645–2669, https://doi.org/10.1175/MWR-D-19-0401.1.

  • Stensrud, D. J., and M. S. Wandishin, 2000: The correspondence ratio in forecast evaluation. Wea. Forecasting, 15, 593–602, https://doi.org/10.1175/1520-0434(2000)015<0593:TCRIFE>2.0.CO;2.

  • Surcel, M., I. Zawadzki, M. K. Yau, M. Xue, and F. Kong, 2017: More on the scale dependence of the predictability of precipitation patterns: Extension to the 2009–13 CAPS spring experiment ensemble forecasts. Mon. Wea. Rev., 145, 3625–3646, https://doi.org/10.1175/MWR-D-16-0362.1.
  • Swets, J. A., 1986: Indices of discrimination or diagnostic accuracy: Their ROCs and implied models. Psychol. Bull., 99, 100–117, https://doi.org/10.1037/0033-2909.99.1.100.
  • Torn, R. D., G. J. Hakim, and C. Snyder, 2006: Boundary conditions for limited-area ensemble Kalman filters. Mon. Wea. Rev., 134, 2490–2502, https://doi.org/10.1175/MWR3187.1.

  • Tracton, M. S., and E. Kalnay, 1993: Operational ensemble prediction at the National Meteorological Center: Practical aspects. Wea. Forecasting, 8, 379–398, https://doi.org/10.1175/1520-0434(1993)008<0379:OEPATN>2.0.CO;2.

  • Vié, B., O. Nuissier, and V. Ducrocq, 2011: Cloud-resolving ensemble simulations of Mediterranean heavy precipitating events: Uncertainty on initial conditions and lateral boundary conditions. Mon. Wea. Rev., 139, 403–423, https://doi.org/10.1175/2010MWR3487.1.

  • Walser, A., D. Lüthi, and C. Schär, 2004: Predictability of precipitation in a cloud-resolving model. Mon. Wea. Rev., 132, 560–577, https://doi.org/10.1175/1520-0493(2004)132<0560:POPIAC>2.0.CO;2.

  • Wang, J., J. Chen, J. Du, Y. Zhang, Y. Xia, and G. Deng, 2018: Sensitivity of ensemble forecast verification to model bias. Mon. Wea. Rev., 146, 781–796, https://doi.org/10.1175/MWR-D-17-0223.1.

  • Wang, J., J. Chen, H. Zhang, H. Tian, and Y. Shi, 2021a: Initial perturbations based on ensemble transform Kalman filter with rescaling method for ensemble forecasting. Wea. Forecasting, 36, 823–842, https://doi.org/10.1175/WAF-D-20-0176.1.

  • Wang, J., and Coauthors, 2021b: Verification of GRAPES-REPS model precipitation forecasts over China during 2019 flood season. Chin. J. Atmos. Sci., 45, 664–682, https://doi.org/10.3878/j.issn.1006-9895.2008.20146.

  • Wang, J., J. Chen, H. Xue, H. Li, and H. Zhang, 2022: The roles of small-scale topographic perturbations in precipitation forecasting using a convection-permitting ensemble prediction system over southern China. Quart. J. Roy. Meteor. Soc., 148, 2468–2489, https://doi.org/10.1002/qj.4312.

  • Wastl, C., Y. Wang, A. Atencia, and C. Wittmann, 2019a: Independent perturbations for physics parameterization tendencies in a convection-permitting ensemble (pSPPT). Geosci. Model Dev., 12, 261–273, https://doi.org/10.5194/gmd-12-261-2019.

  • Wastl, C., Y. Wang, A. Atencia, and C. Wittmann, 2019b: A hybrid stochastically perturbed parameterization scheme in a convection-permitting ensemble. Mon. Wea. Rev., 147, 2217–2230, https://doi.org/10.1175/MWR-D-18-0415.1.

  • Wastl, C., Y. Wang, A. Atencia, F. Weidle, C. Wittmann, C. Zingerle, and E. Keresturi, 2021: C-LAEF: Convection-permitting limited area-ensemble forecasting system. Quart. J. Roy. Meteor. Soc., 147, 1431–1451, https://doi.org/10.1002/qj.3986.
  • Wolff, J. K., M. Harrold, T. Fowler, J. H. Gotway, L. Nance, and B. G. Brown, 2014: Beyond the basics: Evaluating model-based precipitation forecasts using traditional, spatial, and object-based methods. Wea. Forecasting, 29, 1451–1472, https://doi.org/10.1175/WAF-D-13-00135.1.
  • Xu, Z., J. Chen, Z. Jin, H. Li, and F. Chen, 2020: Assessment of the forecast skill of multiphysics and multistochastic methods within the GRAPES regional ensemble prediction system in the East Asian monsoon region. Wea. Forecasting, 35, 1145–1171, https://doi.org/10.1175/WAF-D-19-0021.1.

  • Xu, Z., J. Chen, M. Mu, L. Tao, G. Dai, J. Wang, and Y. Ma, 2022: A stochastic and non-linear representation of model uncertainty in a convective-scale ensemble prediction system. Quart. J. Roy. Meteor. Soc., 148, 2507–2531, https://doi.org/10.1002/qj.4322.

  • Yang, Y., H. Yuan, and W. Chen, 2023: Convection-permitting ensemble forecasts of a double-rainbelt event in South China during the pre-summer rainy season. Atmos. Res., 284, 106599, https://doi.org/10.1016/j.atmosres.2022.106599.

  • Yussouf, N., E. R. Mansell, L. J. Wicker, D. M. Wheatley, and D. J. Stensrud, 2013: The ensemble Kalman filter analyses and forecasts of the 8 May 2003 Oklahoma City tornadic supercell storm using single- and double-moment microphysics schemes. Mon. Wea. Rev., 141, 3388–3412, https://doi.org/10.1175/MWR-D-12-00237.1.

  • Zhang, F., A. M. Odins, and J. W. Nielsen-Gammon, 2006: Mesoscale predictability of an extreme warm-season precipitation event. Wea. Forecasting, 21, 149–166, https://doi.org/10.1175/WAF909.1.