• Baldauf, M., A. Seifert, J. Förstner, D. Majewski, M. Rauschendorfer, and T. Reinhardt, 2011: Operational convective-scale numerical weather prediction with the COSMO model: Description and sensitivities. Mon. Wea. Rev., 139, 3887–3905, doi:10.1175/MWR-D-10-05013.1.
• Bierdel, L., P. Friederichs, and S. Bentzien, 2012: Spatial kinetic energy spectra in the convection-permitting limited-area NWP model COSMO-DE. Meteor. Z., 21, 245–258, doi:10.1127/0941-2948/2012/0319.
• Bright, D. R., and S. L. Mullen, 2002: Short-range ensemble forecasts of precipitation during the southwest monsoon. Wea. Forecasting, 17, 1080–1100, doi:10.1175/1520-0434(2002)017<1080:SREFOP>2.0.CO;2.
• Buizza, R., M. Miller, and T. Palmer, 1999: Stochastic representation of model uncertainties in the ECMWF ensemble prediction system. Quart. J. Roy. Meteor. Soc., 125, 2887–2908, doi:10.1002/qj.49712556006.
• Cohen, B. G., and G. C. Craig, 2006: Fluctuations in an equilibrium convective ensemble. Part II: Numerical experiments. J. Atmos. Sci., 63, 2005–2015, doi:10.1175/JAS3710.1.
• Craig, G. C., and B. G. Cohen, 2006: Fluctuations in an equilibrium convective ensemble. Part I: Theoretical formulation. J. Atmos. Sci., 63, 1996–2004, doi:10.1175/JAS3709.1.
• Dierer, S., and Coauthors, 2009: Deficiencies in quantitative precipitation forecasts: Sensitivity studies using the COSMO model. Meteor. Z., 18, 631–645, doi:10.1127/0941-2948/2009/0420.
• Done, J. M., G. C. Craig, S. L. Gray, P. A. Clark, and M. E. B. Gray, 2006: Mesoscale simulations of organized convection: Importance of convective equilibrium. Quart. J. Roy. Meteor. Soc., 132, 737–756, doi:10.1256/qj.04.84.
• Groenemeijer, P., and G. Craig, 2012: Ensemble forecasting with a stochastic convective parametrization based on equilibrium statistics. Atmos. Chem. Phys., 12, 4555–4565, doi:10.5194/acp-12-4555-2012.
• Hamill, T. M., C. Snyder, and R. E. Morss, 2000: A comparison of probabilistic forecasts from bred, singular-vector, and perturbed observation ensembles. Mon. Wea. Rev., 128, 1835–1851, doi:10.1175/1520-0493(2000)128<1835:ACOPFF>2.0.CO;2.
• Kain, J. S., and J. M. Fritsch, 1990: A one-dimensional entraining/detraining plume model and its application in convective parameterization. J. Atmos. Sci., 47, 2784–2802, doi:10.1175/1520-0469(1990)047<2784:AODEPM>2.0.CO;2.
• Keane, R., and R. Plant, 2012: Large-scale length and time-scales for use with stochastic convective parametrization. Quart. J. Roy. Meteor. Soc., 138, 1150–1164, doi:10.1002/qj.992.
• Keane, R., G. C. Craig, C. Keil, and G. Zängl, 2014: The Plant–Craig stochastic convection scheme in ICON and its scale adaptivity. J. Atmos. Sci., 71, 3404–3415, doi:10.1175/JAS-D-13-0331.1.
• Keil, C., F. Heinlein, and G. C. Craig, 2014: The convective adjustment time-scale as indicator of predictability of convective precipitation. Quart. J. Roy. Meteor. Soc., 140, 480–490, doi:10.1002/qj.2143.
• Kober, K., C. Keil, G. C. Craig, and A. Dörnbrack, 2012: Blending a probabilistic nowcasting method with a high-resolution numerical weather prediction ensemble for convective precipitation forecasts. Quart. J. Roy. Meteor. Soc., 138, 755–768, doi:10.1002/qj.939.
• Kühnlein, C., C. Keil, G. C. Craig, and C. Gebhardt, 2014: The impact of downscaled initial condition perturbations on convective-scale ensemble forecasts of precipitation. Quart. J. Roy. Meteor. Soc., 140, 1552–1562, doi:10.1002/qj.2238.
• Lewis, J. M., 2005: Roots of ensemble forecasting. Mon. Wea. Rev., 133, 1865–1885, doi:10.1175/MWR2949.1.
• Lin, J., and J. Neelin, 2003: Toward stochastic deep convective parameterization in general circulation models. Geophys. Res. Lett., 30, 1162, doi:10.1029/2002GL016203.
• Molteni, F., R. Buizza, C. Marsigli, A. Montani, F. Nerozzi, and T. Paccagnella, 2001: A strategy for high-resolution ensemble prediction. Part I: Definition of representative members and global model experiments. Quart. J. Roy. Meteor. Soc., 127, 2069–2094, doi:10.1002/qj.49712757612.
• Plant, R., and G. Craig, 2008: A stochastic parameterization for deep convection based on equilibrium statistics. J. Atmos. Sci., 65, 87–105, doi:10.1175/2007JAS2263.1.
• Roberts, N., and H. Lean, 2008: Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Wea. Rev., 136, 78–97, doi:10.1175/2007MWR2123.1.
• Scheuerer, M., 2014: Probabilistic quantitative precipitation forecasting using ensemble model output statistics. Quart. J. Roy. Meteor. Soc., 140, 1086–1096, doi:10.1002/qj.2183.
• Selz, T., and G. Craig, 2014: Upscale error growth in a high-resolution simulation of a summertime weather event over Europe. Mon. Wea. Rev., 143, 813–827, doi:10.1175/MWR-D-14-00140.1.
• Shutts, G., 2005: A kinetic energy backscatter algorithm for use in ensemble prediction systems. Quart. J. Roy. Meteor. Soc., 131, 3079–3102, doi:10.1256/qj.04.106.
• Stephan, K., S. Klink, and C. Schraff, 2008: Assimilation of radar derived rain rates into the convective scale model COSMO-DE at DWD. Quart. J. Roy. Meteor. Soc., 134, 1315–1326, doi:10.1002/qj.269.
• Teixeira, J., and C. A. Reynolds, 2008: Stochastic nature of physical parameterizations in ensemble prediction: A stochastic convection approach. Mon. Wea. Rev., 136, 483–496, doi:10.1175/2007MWR1870.1.
• Theis, S., A. Hense, and U. Damrath, 2005: Probabilistic precipitation forecasts from a deterministic model: A pragmatic approach. Meteor. Appl., 12, 257–268, doi:10.1017/S1350482705001763.
• Tiedtke, M., 1989: A comprehensive mass flux scheme for cumulus parameterization in large-scale models. Mon. Wea. Rev., 117, 1779–1800, doi:10.1175/1520-0493(1989)117<1779:ACMFSF>2.0.CO;2.
• Wei, M., Z. Toth, R. Wobus, Y. Zhu, C. H. Bishop, and X. Wang, 2006: Ensemble transform Kalman filter-based ensemble perturbations in an operational global prediction system at NCEP. Tellus, 58A, 28–44, doi:10.1111/j.1600-0870.2006.00159.x.
• Weusthoff, T., F. Ament, M. Arpagaus, and M. Rotach, 2010: Assessing the benefits of convection-permitting models by neighborhood verification: Examples from MAP D-PHASE. Mon. Wea. Rev., 138, 3418–3433, doi:10.1175/2010MWR3380.1.
• Wilks, D., 2006: Probability forecasts of discrete predictands. Statistical Methods in the Atmospheric Sciences, 2nd ed. Academic Press, 282–298.
• Zepeda-Arce, J., E. Foufoula-Georgiou, and K. Droegemeier, 2000: Space-time rainfall organization and its role in validating quantitative precipitation forecasts. J. Geophys. Res., 105, 10 129–10 146, doi:10.1029/1999JD901087.
• Zhang, F., N. Bei, R. Rotunno, C. Snyder, and C. C. Epifanio, 2007: Mesoscale predictability of moist baroclinic waves: Convection-permitting experiments and multistage error growth dynamics. J. Atmos. Sci., 64, 3579–3594, doi:10.1175/JAS4028.1.

Examination of a Stochastic and Deterministic Convection Parameterization in the COSMO Model

Kirstin Kober, Annette M. Foerster, and George C. Craig

Ludwig-Maximilians-Universität, Munich, Germany

Abstract

Stochastic parameterizations allow the representation of the small-scale variability of parameterized physical processes. This study investigates whether additional variability introduced by a stochastic convection parameterization leads to improvements in the precipitation forecasts. Forecasts are calculated with two different ensembles: one considering large-scale and convective variability with the stochastic Plant–Craig convection parameterization and one considering only large-scale variability with the standard Tiedtke convection parameterization. The forecast quality of both ensembles is evaluated in comparison with radar observations for two case studies with weak and strong synoptic forcing of convection and measured with neighborhood and probabilistic verification methods. The skill of the ensemble based on the Plant–Craig convection parameterization relative to the ensemble with the Tiedtke parameterization strongly depends on the synoptic situation in which convection occurs. In the weak forcing case, where the convective precipitation is highly intermittent, the ensemble based on the stochastic parameterization is superior, but the scheme produces too much small-scale variability in the strong forcing case. In the future, the degree of stochastic variability could be tuned, and these results show that parameters should be chosen in a regime-dependent manner.

Current affiliation: University of Hawai‘i at Mānoa, Honolulu, Hawaii.

Corresponding author address: Kirstin Kober, Ludwig-Maximilians-Universität München, Theresienstr. 37, Munich 80333, Germany. E-mail: kirstin.kober@lmu.de

This article is included in the Predictability and Dynamics of Weather Systems in the Atlantic-European Sector (PANDOWAE) Special Collection.


1. Introduction

The skill of numerical forecasts of convective precipitation is limited by several sources of uncertainty that can be minimized, but never completely removed. The initial and boundary conditions for the model integration have limited accuracy, and physical processes have to be approximated and truncated to the model's grid. Furthermore, the atmosphere is chaotic and the physical nature of convection is stochastic. Ensembles of different model integrations and their variability allow the resulting uncertainty to be quantified.

Several approaches to designing ensemble systems exist, including methods that account for initial condition uncertainty [e.g., singular or bred vectors discussed in Hamill et al. (2000), or the ensemble transform Kalman filter discussed in Wei et al. (2006)], boundary condition uncertainty (e.g., multimodel approaches), and physics uncertainty (e.g., multiparameterization approaches or stochastic parameterizations) (see the review by Lewis 2005). Recent studies show a dependence of the sources of forecast uncertainty on the predominant weather regime (Done et al. 2006; Keil et al. 2014; Kühnlein et al. 2014). Hence, an effective ensemble design should consider this dependence and create the members on the basis of the dominant source of uncertainty.

Stochastic physical parameterizations allow the representation of subgrid-scale variability of parameterized processes that can grow to larger scales. Several approaches of different complexity exist to include perturbations into parameterizations. They include perturbing the input fields before they enter the parameterization (Lin and Neelin 2003; Bright and Mullen 2002), perturbing tunable parameters within the parameterization (Bright and Mullen 2002), perturbing the parameterized tendencies (Buizza et al. 1999; Teixeira and Reynolds 2008), adding additional terms to equations to consider upscale energy transport (Shutts 2005), or an entirely stochastic formulation of a parameterization based on theory (Plant and Craig 2008, hereafter PC08). The application of stochastic parameterizations has shown improved skill (Lin and Neelin 2003) and increased spread in the ensemble forecasts (Buizza et al. 1999).

The stochastic convection parameterization by PC08 is based on the Craig and Cohen (2006) theory and high-resolution simulations of radiative convective equilibrium. The scheme was successfully tested in single-column mode (PC08) and in an idealized setup (Keane and Plant 2012). Groenemeijer and Craig (2012) implemented it in a limited-area model to show that the scheme adds a significant amount of variability to an ensemble. They found as well that this effect depends on the strength of the synoptic forcing. Keane et al. (2014) showed in global aquaplanet simulations that the variability introduced by the scheme adapts correctly to changes in model resolution.

In this study, we investigate whether the additional variability introduced by the Plant–Craig parameterization is realistic by systematically comparing the modeled precipitation fields with radar data. Additionally, we test whether this variability improves the forecasts relative to a standard deterministic convection parameterization. This is done for two case studies representing strong and weak large-scale forcing of convection, since different behavior is expected depending on the nature of the convection. The specific questions addressed in this study are as follows:

  1. Is the spatial distribution of precipitation with the stochastic scheme more realistic than for a conventional parameterization (as measured by neighborhood methods like the fractions skill score)?

  2. Are there differences in the probabilistic forecasts (in terms of the Brier score and reliability diagrams)?

As an additional objective, we examine how the contributions of different sources of uncertainty to the overall ensemble variability depend on the synoptic environment, since the design of the Plant–Craig ensemble (in the following PC08) allows large-scale variability (in terms of different boundary conditions) to be distinguished from convective variability (in terms of the stochastic convection parameterization).

This article is structured as follows. Section 2 introduces the underlying data and methods: the weather forecast model of the Consortium for Small-Scale Modeling (COSMO), the ensemble setups, the verifying radar observations, and the applied quality measures. The ensembles with conventional and stochastic convection parameterization are evaluated in section 3 for two case studies representing strong and weak large-scale forcing, in terms of general properties and deterministic and probabilistic forecast quality measures. The results are discussed and final conclusions drawn in section 4.

2. Data and methods

In this study, ensembles of precipitation forecasts calculated with the COSMO model using the stochastic Plant–Craig convection parameterization are compared to forecasts calculated with the same model, but using the deterministic Tiedtke convection parameterization. Two case studies representing weather regimes with different large-scale forcing, and hence a different nature of convection, are chosen to compare how the consideration of stochasticity affects forecast quality. This section describes the underlying forecast model COSMO and the setup of the two different ensembles, the verifying radar dataset, and the verification methods applied to the forecasts to assess their quality.

a. Weather forecast model COSMO

This study is based on the COSMO model (Baldauf et al. 2011), which is run with two different convection parameterizations. COSMO solves the fully compressible equations on an Arakawa C grid that is configured to have a horizontal resolution of 0.0625° (approximately 7 km) and 50 vertical levels on a terrain-following coordinate system (Lorenz grid staggering). Since convection is a subgrid-scale process at this horizontal resolution, its heating effects have to be represented with a parameterization. Forecasts are calculated using the default deterministic Tiedtke parameterization (Tiedtke 1989) and the stochastic Plant–Craig convection parameterization (PC08). The PC08 parameterization was implemented in the COSMO model by Groenemeijer and Craig (2012), and technical details on the implementation can be found in their study. The PC08 parameterization is based on the closure assumption and the cloud model of the Kain–Fritsch convection parameterization (Kain and Fritsch 1990), and it maintains quasi equilibrium on large scales while creating small-scale variability by drawing plumes from a probability density function computed from the closure mass flux in the convection scheme. As a result, different realizations for the same large-scale conditions provide a representation of the intrinsic stochasticity of convection.
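The statistical idea behind this stochastic plume selection can be sketched as follows. The sketch illustrates the Craig and Cohen (2006) equilibrium statistics rather than the COSMO implementation, and all parameter values are hypothetical: individual plume mass fluxes are exponentially distributed about a mean mass flux per cloud, and the number of plumes in a grid box fluctuates according to a Poisson distribution whose mean is set by the closure mass flux.

    import numpy as np

    rng = np.random.default_rng(42)

    def draw_plumes(closure_mass_flux, mean_flux_per_cloud, rng):
        """Draw one stochastic realization of convective plumes for a grid box.

        closure_mass_flux   : grid-box mean mass flux <M> from the (CAPE) closure
        mean_flux_per_cloud : tunable parameter <m>, the mean mass flux per cloud
        Returns individual plume mass fluxes whose expected sum equals <M>.
        """
        expected_number = closure_mass_flux / mean_flux_per_cloud   # <N> = <M> / <m>
        n_plumes = rng.poisson(expected_number)                     # Poisson number fluctuations
        # Exponential distribution of mass flux per plume: p(m) = exp(-m/<m>) / <m>
        return rng.exponential(mean_flux_per_cloud, size=n_plumes)

    # Two realizations for identical large-scale forcing (illustrative values only)
    for realization in range(2):
        plumes = draw_plumes(closure_mass_flux=0.05, mean_flux_per_cloud=0.002, rng=rng)
        print(realization, len(plumes), plumes.sum())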

The two convection parameterizations differ not only in the fact that one is deterministic and one is stochastic. Both are mass-flux-based schemes, but they differ in two major ways: the closure assumption (the Tiedtke parameterization is based on moisture convergence, PC08 on CAPE) and the trigger (Tiedtke: temperature threshold; PC08: vertical velocity) (Dierer et al. 2009).

Groenemeijer and Craig (2012) implemented the PC08 scheme in COSMO, version 4.8, and performed ensemble forecasts. An ensemble of 100 members is generated that consists of 10 groups, each containing 10 members. The 10 groups represent different initial and boundary conditions and are defined by the selection of 10 representative members out of the 51-member global European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble with a clustering algorithm (Molteni et al. 2001). These representative members correspond to different large-scale forcing conditions that translate into different initial and boundary conditions for the COSMO model simulations. In every group the COSMO model is run 10 times with the stochastic PC08 convection parameterization, resulting in 10 members. Hence, an ensemble of 100 members is created.

In the following, the entire ensemble is evaluated, as well as subensembles containing either only the stochastic variability (the members in one group) or only the large-scale variability from the initial and boundary conditions (one stochastic realization from each group). Additionally, within every group COSMO is run with the standard Tiedtke convection scheme (Tiedtke 1989), resulting in a 10-member Tiedtke ensemble. In the following, the two ensembles are referred to as the PC08 and Tiedtke ensembles. Note that the two ensembles differ in their number of members. The Tiedtke members could also be interpreted as one member within each group, since the initial and boundary conditions are the same as for the subensembles of the PC08 ensemble.
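To make the ensemble structure explicit, the sketch below indexes the 100 PC08 members by a boundary-condition group and a stochastic realization, so that the subensembles described above become simple selections; the member labels are hypothetical and the actual naming in the experiments may differ.

    # Hypothetical labeling: 10 boundary-condition groups (from the clustered ECMWF
    # members) times 10 stochastic realizations of the PC08 scheme = 100 members.
    groups = [f"G{g:02d}" for g in range(1, 11)]        # large-scale (IC/BC) variability
    realizations = [f"R{r}" for r in range(1, 11)]      # stochastic convection variability

    members = [(g, r) for g in groups for r in realizations]

    # Subensemble with fixed boundary conditions: stochastic variability only
    stochastic_subensemble = [m for m in members if m[0] == "G01"]

    # Subensemble with one stochastic realization per group: large-scale variability only
    largescale_subensemble = [m for m in members if m[1] == "R1"]

    assert len(members) == 100
    assert len(stochastic_subensemble) == len(largescale_subensemble) == 10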

Groenemeijer and Craig (2012) calculated seven case studies with the PC08 ensemble, representing different meteorological situations with varying forcing of convection over central Europe. In this study, the 25 June 2008 case representing strong and the 1 July 2009 case representing weak large-scale forcing are evaluated to analyze the sensitivity of the forecast quality of the ensembles. Here the evaluation domain is limited to Germany, since the radar data over their entire model domain [301 × 301 grid points; see Fig. 2 in Groenemeijer and Craig (2012)] are nonhomogeneous. The resulting evaluation domain used in this study covers 104 × 68 grid points.

b. Radar data

The forecast quality of the PC08 and the Tiedtke ensemble forecasts of hourly precipitation is evaluated in comparison with precipitation fields derived from radar observations. The German radar composite provided by the Deutscher Wetterdienst (DWD) is combined from quality-controlled measurements of radar reflectivities obtained from 16 Doppler radars. The reflectivities are available every 5 min with 1-km horizontal resolution and converted to precipitation rates with an empirical Z–R relationship. Hourly precipitation is accumulated from the 5-min observations and projected onto the COSMO model grid (Stephan et al. 2008).

Since in this study the small-scale structure of precipitation fields is evaluated, the projection method and the remaining variability in the fields are crucial. DWD provides versions of the radar composite projected linearly onto the model grids of COSMO-DE (operational COSMO version with 2.8-km resolution; in the following Radar-DE) and COSMO-EU (operational COSMO version with 7-km resolution; in the following Radar-EU). Since this procedure introduces significant smoothing, we also consider an alternative projection of the higher-resolution product Radar-DE onto the 7-km COSMO grid (in the following Radar-DE to EU) by coarse graining: for every COSMO-EU grid point an average value is calculated from the Radar-DE observations located within a square of 7 km × 7 km around the respective COSMO-EU grid point. Since the model grid is rectangular, the number of Radar-DE grid points being averaged varies from 2 to 6. Table 1 gives an overview of the projections, their resolution, and the projection method.
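A minimal sketch of this coarse-graining step is given below, assuming for simplicity that a fixed block of high-resolution pixels falls into each target box (in the real projection the rotated COSMO grids make the number of contributing Radar-DE pixels vary, as noted above); the field names, sizes, and block size are illustrative only.

    import numpy as np

    def coarse_grain(field_hires, block):
        """Average a high-resolution field over non-overlapping block x block squares.

        Sketch of the Radar-DE -> Radar-DE to EU projection: every coarse grid point
        receives the mean of the high-resolution rain rates in the square around it.
        """
        ny, nx = field_hires.shape
        ny_c, nx_c = ny // block, nx // block
        trimmed = field_hires[:ny_c * block, :nx_c * block]
        return trimmed.reshape(ny_c, block, nx_c, block).mean(axis=(1, 3))

    # Synthetic hourly rain rates (mm/h) on a pseudo high-resolution grid
    rain_hires = np.random.default_rng(0).gamma(0.2, 2.0, size=(300, 300))
    rain_coarse = coarse_grain(rain_hires, block=3)   # block size is illustrative only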

Table 1. Available projections of the German radar composite.

Figure 1 compares the distribution of the number of grid points in different precipitation threshold categories for the three projection methods, using all available observations on 25 June 2008 and 1 July 2009 separately. On both days, Radar-EU has the largest number of precipitating grid points (smallest number in the 0.0 mm h−1 category), with more values at low and fewer at high thresholds than Radar-DE and Radar-DE to EU. In contrast, Radar-DE has the smallest number of precipitating grid points, more values in the high-threshold categories (more than 5 mm h−1), and fewer at low thresholds (0.1–2.0 mm h−1), indicating that the small-scale structure of the precipitation fields is represented better. The distribution of the alternative projection (Radar-DE to EU) lies between the two DWD products, Radar-DE and Radar-EU. It contains less smoothing and is thus closer to the higher-resolution product, and hence should provide a better representation of the observed variability in intensity. Therefore, it will be used in the quality evaluation.
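These distributions amount to counting grid points per intensity class. A possible way to compute them is sketched below; the exact binning of the first (dry) class and of the intervals between thresholds is an assumption about how the figure was constructed.

    import numpy as np

    def threshold_distribution(rain, thresholds=(0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0)):
        """Count grid points per precipitation class (mm/h): a dry class below the first
        threshold, one class per interval between thresholds, and one above the largest."""
        edges = np.concatenate(([0.0], thresholds, [np.inf]))
        counts, _ = np.histogram(rain.ravel(), bins=edges)
        return counts

    # Synthetic hourly rain field on a domain of 104 x 68 grid points
    rain = np.random.default_rng(1).gamma(0.15, 3.0, size=(104, 68))
    print(threshold_distribution(rain))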

Fig. 1. Distribution of grid points over precipitation thresholds for different projections of the radar product (Radar-EU, Radar-DE to EU, Radar-DE) on (a) 25 Jun 2008 and (b) 1 Jul 2009. Values in the first bin have to be multiplied by 100, as indicated.

c. Verification methods

This study assesses several aspects of the quality of forecasts calculated with the two different ensemble setups. The deterministic skill of the precipitation forecasts is evaluated with two neighborhood verification methods: upscaling (UP) (Zepeda-Arce et al. 2000) and the fractions skill score (FSS) (Roberts and Lean 2008). These methods employ the areal mean and the frequency of exceeding a threshold to analyze important properties of precipitation fields. Neighborhood methods avoid a pointwise comparison, which suffers from the double-penalty problem, by calculating forecast quality within spatial windows (or neighborhoods) of different sizes around a point of interest (Weusthoff et al. 2010). Following Weusthoff et al. (2010), we use spatial windows, that is, squares with side lengths of 1, 3, 5, 9, 15, 27, and 45 grid points, corresponding to 7, 21, 35, 63, 105, 189, and 315 km at the model resolution of this study. The precipitation thresholds are 0.1, 0.2, 0.5, 1, 2, 5, and 10 mm h−1.

The UP method determines the mean within each spatial window and calculates categorical scores from contingency tables (Wilks 2006). In this study, the frequency bias (FBI), defined as

\mathrm{FBI} = \frac{a + b}{a + c}, \quad (1)

is calculated to measure how the forecasted frequency of yes events corresponds to the observed frequency of yes events. The variable a stands for hits, b is for false alarms, and c is for misses. Values range between 0 and infinity, and a perfect forecast would have a FBI of 1.
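A minimal sketch of the upscaling/FBI calculation for one spatial window and threshold is shown below. It assumes that the yes event is defined as the window-mean precipitation exceeding the threshold and that windows are non-overlapping; both choices are assumptions about the implementation.

    import numpy as np

    def window_means(field, n):
        """Mean precipitation in non-overlapping n x n windows (upscaling)."""
        ny, nx = field.shape
        f = field[:ny // n * n, :nx // n * n]
        return f.reshape(ny // n, n, nx // n, n).mean(axis=(1, 3))

    def frequency_bias(forecast, observation, threshold, n):
        """FBI = (a + b) / (a + c) from the upscaled fields, Eq. (1)."""
        fc = window_means(forecast, n) >= threshold
        ob = window_means(observation, n) >= threshold
        a = np.sum(fc & ob)      # hits
        b = np.sum(fc & ~ob)     # false alarms
        c = np.sum(~fc & ob)     # misses
        return (a + b) / (a + c) if (a + c) > 0 else np.nan   # undefined if no observed events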
The FSS method determines fractions of observed and forecasted grid points [O_{(n)i,j} and M_{(n)i,j}, respectively; i and j are the grid points in the x and y direction, respectively] exceeding a specific precipitation threshold and calculates a skill score based on the mean squared error (MSE):

\mathrm{FSS}_{(n)} = 1 - \frac{\mathrm{MSE}_{(n)}}{\mathrm{MSE}_{(n)\mathrm{ref}}}, \quad (2)

with n denoting the different spatial windows. \mathrm{MSE}_{(n)} is defined as

\mathrm{MSE}_{(n)} = \frac{1}{N_x N_y} \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} \left[ O_{(n)i,j} - M_{(n)i,j} \right]^2 \quad (3)

for each spatial window n and a reference \mathrm{MSE}_{(n)\mathrm{ref}}:

\mathrm{MSE}_{(n)\mathrm{ref}} = \frac{1}{N_x N_y} \left[ \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} O_{(n)i,j}^2 + \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} M_{(n)i,j}^2 \right], \quad (4)

which is the largest possible MSE given observed and forecasted frequencies O_{(n)i,j} and M_{(n)i,j} (Roberts and Lean 2008). A perfect forecast would result in a FSS of 1, in contrast to bad forecasts with values close to 0.
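A possible implementation of Eqs. (2)–(4) is sketched below, computing the fractions with a sliding square window over the binary exceedance fields; the sliding-window choice and the boundary treatment are assumptions consistent with Roberts and Lean (2008).

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fss(forecast, observation, threshold, window):
        """Fractions skill score for one threshold and one square window (in grid points)."""
        fc_frac = uniform_filter((forecast >= threshold).astype(float), size=window, mode="constant")
        ob_frac = uniform_filter((observation >= threshold).astype(float), size=window, mode="constant")
        mse = np.mean((ob_frac - fc_frac) ** 2)                   # Eq. (3)
        mse_ref = np.mean(ob_frac ** 2) + np.mean(fc_frac ** 2)   # Eq. (4)
        return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan     # Eq. (2)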

To avoid masking the information by including times with little precipitation, the average skill is evaluated as the mean over only the precipitating hours. Precipitating hours are defined as times when the observed domain-averaged precipitation exceeds 0.02 mm h−1 (indicated in Fig. 4).

To quantify the influence of the different sources of uncertainty considered in the PC08 ensemble, the variability of the different subensembles is compared in terms of the standard deviations of the average FSS over the precipitating times, similar to Groenemeijer and Craig (2012). The total standard deviation of the entire 100-member PC08 ensemble is distinguished from the internal standard deviation resulting from the 10 stochastic realizations of the PC08 parameterization within each group. In this way the contributions of the stochastic parameterization and of the large scales to the precipitation variability can be compared for each spatial window and precipitation threshold.
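The decomposition can be sketched as follows, assuming the per-member mean FSS values are arranged as a 10 × 10 array (boundary-condition group × stochastic realization); the placeholder values and the exact averaging of the within-group standard deviations are assumptions.

    import numpy as np

    # fss_members[g, r]: precipitating-hours mean FSS of the member in group g, realization r
    fss_members = np.random.default_rng(2).normal(0.5, 0.05, size=(10, 10))  # placeholder values

    total_std = fss_members.std()                    # variability of the full 100-member PC08 ensemble
    internal_std = fss_members.std(axis=1).mean()    # mean stochastic variability within the 10 groups
    contribution = 100.0 * internal_std / total_std  # percentage as shown in Fig. 8

    print(f"total {total_std:.3f}, stochastic {internal_std:.3f}, contribution {contribution:.1f}%")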

The probabilistic information of the ensembles is evaluated with reliability diagrams and the Brier score (BS), which quantifies the mean squared error between the probabilistic forecasts y_k and the binary observations o_k over all forecast and event pairs k:

\mathrm{BS} = \frac{1}{n} \sum_{k=1}^{n} \left( y_k - o_k \right)^2, \quad (5)

and its decomposition into reliability, resolution, and uncertainty. These terms are defined in Wilks (2006) with the categorized probability forecasts y_i, the number of forecasts N_i and the observed frequency \bar{o}_i within each category i, and the mean observed frequency \bar{o}:

\mathrm{BS} = \underbrace{\frac{1}{n} \sum_{i=1}^{I} N_i \left( y_i - \bar{o}_i \right)^2}_{\text{reliability}} - \underbrace{\frac{1}{n} \sum_{i=1}^{I} N_i \left( \bar{o}_i - \bar{o} \right)^2}_{\text{resolution}} + \underbrace{\bar{o} \left( 1 - \bar{o} \right)}_{\text{uncertainty}}. \quad (6)

A perfect forecast would result in a Brier score of 0, a reliability of 0, and a resolution of 1.
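A compact sketch of Eq. (5) and the decomposition in Eq. (6) is given below; the number of probability categories used for the binning is an assumption.

    import numpy as np

    def brier_decomposition(prob, obs, n_bins=10):
        """Brier score and its reliability, resolution, and uncertainty components.

        prob : forecast probabilities in [0, 1] for all forecast/event pairs k
        obs  : binary observations (0 or 1)
        """
        prob, obs = np.asarray(prob, float), np.asarray(obs, float)
        bs = np.mean((prob - obs) ** 2)                              # Eq. (5)

        bins = np.clip((prob * n_bins).astype(int), 0, n_bins - 1)   # probability categories i
        n, obar = prob.size, obs.mean()
        reliability = resolution = 0.0
        for i in range(n_bins):
            sel = bins == i
            n_i = sel.sum()
            if n_i == 0:
                continue
            y_i, obar_i = prob[sel].mean(), obs[sel].mean()
            reliability += n_i * (y_i - obar_i) ** 2
            resolution += n_i * (obar_i - obar) ** 2
        reliability /= n
        resolution /= n
        uncertainty = obar * (1.0 - obar)
        return bs, reliability, resolution, uncertainty   # BS ≈ reliability - resolution + uncertainty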

For this evaluation it is important to consider how the probabilistic information is extracted from the ensembles. The fraction method that derives probabilities by calculating at every grid point the fraction of members exceeding a threshold (e.g., Kober et al. 2012) is not suitable for two reasons. First, the number of ensemble members is different in the PC08 and the Tiedtke ensembles. Second, the stochastic PC08 ensemble with its variability on small scales is not expected to show skill in gridpointwise evaluations. Hence, the neighborhood method (Theis et al. 2005) is applied to derive exceedance probabilities for every ensemble member. Every forecasted hourly precipitation field of each individual ensemble member is converted to probabilities by calculating the fraction of grid points above a threshold (thresholds: 0.1, 0.2, 0.5, 1, 2, 5, and 10 mm h−1) in a predefined neighborhood around the point of interest. The neighborhood is taken to be a square with a 63-km side length that corresponds to a spatial window of 9 grid points in the neighborhood verification. This value is chosen following Kober et al. (2012), and it is large enough that the results are not sensitive to the exact value.
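A sketch of the neighborhood method of Theis et al. (2005) as applied here: each member's hourly precipitation field is turned into a probability field by taking, at every grid point, the fraction of grid points exceeding the threshold within the surrounding square of 9 × 9 grid points (63 km). The edge treatment in the sketch is an assumption.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def neighborhood_probability(member_rain, threshold, window=9):
        """Exceedance probability field derived from a single ensemble member."""
        exceed = (member_rain >= threshold).astype(float)
        # Fraction of exceeding grid points in the window x window square around each point
        return uniform_filter(exceed, size=window, mode="nearest")

    # Example: probabilities for the 1 mm/h threshold from a synthetic member field
    member_rain = np.random.default_rng(3).gamma(0.15, 3.0, size=(104, 68))
    prob_field = neighborhood_probability(member_rain, threshold=1.0)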

3. Evaluation of PC08 and Tiedtke ensemble precipitation forecasts

a. General properties

Figure 2 displays radar observations (the projection Radar-DE to EU) and precipitation forecasts of selected members of the Tiedtke and the PC08 ensembles for the two forcing situations. Since the general properties of the forecasted fields are similar within both ensembles, for clarity only one (randomly chosen) member of each ensemble is displayed. As discussed by Groenemeijer and Craig (2012) the CAPE closure used by PC08 implies that precipitation will occur within an envelope determined by the large-scale forcing, similar to conventional parameterization schemes. However, the rainfall will be broken up into small patchy regions by the stochastic selection of plumes within each grid box. The degree of “patchiness” is determined by a parameter in the scheme that sets the mean mass flux per cloud. This was originally set based on high-resolution simulations of tropical convection, and further modified by Groenemeijer and Craig (2012). It is not certain that a single value is appropriate for differing types of midlatitude convection.

Fig. 2. 1300 UTC 25 Jun 2008 hourly accumulated rainfall: (a) radar observation, (b) one member of the Tiedtke ensemble, and (c) one member of the PC08 ensemble. 1200 UTC 1 Jul 2009: (d) radar observation, (e) one member of the Tiedtke ensemble, and (f) one member of the PC08 ensemble.

Groenemeijer and Craig (2012) noted that the 25 June 2008 case with a strong squall line passing central Europe featured significant large-scale forcing for the convection. The observed hourly precipitation accumulation at 1300 UTC shows organized precipitation within a frontal band followed by postfrontal precipitation (Fig. 2a). Both convection parameterizations (Figs. 2b,c) capture the overall structure with precipitation over central Germany and the absence of rain over northern and southeastern Germany although the exact structure (location and intensity) of the frontal band is missed. They differ in general properties: the forecast with the Tiedtke parameterization is more widespread and lacks the intense maxima found in the observations (Fig. 2b). In contrast, the PC08 parameterization produces many intense and isolated cells rather than the larger, more coherent structures that are observed. A few smoother patches of low-intensity precipitation resulting from the grid-scale scheme are also present in the north and east of the domain. The size of these areas is limited by the effective resolution of the model dynamics (Bierdel et al. 2012), which cannot produce the intense local precipitation maxima that are generated by parameterized updrafts within the stochastic scheme.

The 1 July 2009 case is characterized by intense and short-lived small-scale precipitation cells resulting from diurnally driven convection in weak large-scale forcing (Fig. 2d). The Tiedtke parameterization (Fig. 2e) forecasts a smooth and widespread precipitation field. Hence, the intensity of the isolated cells is missed, and the spatial coverage is overestimated. In contrast, the small, intense precipitation features produced by the PC08 parameterization (Fig. 2f) resemble the observations. Since the location of the individual cells is determined randomly in PC08, their positions are not correctly forecasted.

To investigate whether the findings from the snapshots (Fig. 2) are representative of all times over both days, the distribution of the number of grid points with rainfall amounts in several intervals is calculated for the radar observation (Radar-DE to EU) and the average values of the Tiedtke and the PC08 ensembles (Fig. 3). The behavior on both days is qualitatively similar, although the quantitative values differ. The category of nonprecipitating grid points (threshold 0.0 mm h−1) shows that the Tiedtke forecasts overestimate and the PC08 forecasts underestimate the areal coverage of precipitation. Within the precipitating grid points the deviation of the ensembles from the radar observation varies with threshold. For low thresholds (0.1–1.0 mm h−1) the Tiedtke forecasts overestimate the frequency of occurrence whereas the PC08 forecasts underestimate it. The number of values for the thresholds 2.0 and 5.0 mm h−1 is underestimated by both parameterizations. For the highest threshold (10 mm h−1), PC08 overestimates whereas Tiedtke produces almost no precipitation of this intensity. For all thresholds except the lowest precipitating threshold (0.1–0.2 mm h−1) on 25 June 2008, differences are larger between Tiedtke and Radar-DE to EU than between PC08 and Radar-DE to EU.

Fig. 3. Distribution of accumulation over 24 h of radar observations, average Tiedtke ensemble, and average PC08 ensemble over several precipitation thresholds on (a) 25 Jun 2008 and (b) 1 Jul 2009. Values in the first bin have to be multiplied by 100, as indicated.

Having noted differences in the spatial distribution of precipitation and the distribution of rainfall amounts, it is interesting to compare the domain-averaged precipitation produced by the two parameterizations to see differences in overall biases and timings. Time series are displayed in Fig. 4 for the radar observation (Radar), all 10 Tiedtke ensemble members, and two subensembles of the PC08 ensembles [10 members, respectively, the same stochastic realizations (R1) and the same group of initial and boundary conditions (IFS11)] for the two investigated cases. On 25 June 2008 (Fig. 4a) the observed average is over 0.02 mm h−1 from 0400 to 2000 UTC with a maximum value of 0.5 mm h−1 around 1400 UTC. The Tiedtke ensemble members reach smaller maxima earlier in the day and fail to forecast the fast decrease observed between 1500 and 1700 UTC as the precipitation exits the domain. In contrast, some of the PC08-R1 subensemble members forecast the onset of precipitation correctly and reach maxima in the range of the observation but slightly later. The fast decrease is captured by some of the selected members, but with delay. The variability within the PC08-R1 subensembles is larger than in the Tiedtke ensemble, but in the afternoon all members overestimate in comparison to the observation. The spread within the subensemble PC08-R1 representing variability of initial and boundary conditions is larger than that produced by the subensemble of stochastic realizations in PC08-IFS11.

Fig. 4. Temporal development of domain-averaged precipitation of radar observations (black), 10 Tiedtke ensemble members (yellow dashed line), 10 PC08 ensemble members for the same stochastic realization (R1, blue thin line), and 10 PC08 ensemble members for the same group (IFS11, red dotted line) on (a) 25 Jun 2008 and (b) 1 Jul 2009 with vertical lines indicating the averaging period for Figs. 6 and 7.

On 1 July 2009 (Fig. 4b) the observed domain-averaged precipitation exceeds 0.02 mm h−1 between 0900 and 1900 UTC with a maximum of 0.3 mm h−1 at 1300 UTC. The Tiedtke forecasts have a lower maximum that is reached 3 hours earlier whereas the PC08 solutions have maxima comparable to the observations but a faster decrease resulting in underestimation. Among the displayed PC08-R1 members more variability is found and, hence, their behavior is less smooth than the Tiedtke ensemble members’ behavior.

In general the total precipitation produced by the two schemes is comparable, with maximum amounts close to the observations, with the Tiedtke scheme tending to rain earlier. The Tiedtke forecasts underestimate the domain-averaged precipitation, and the comparison with Fig. 3 indicates that this is due to the underrepresentation of high-intensity values. Except for the early part of the day on 25 June, the rainfall curves cluster into two groups associated with each parameterization. The different closure assumptions (moisture convergence for Tiedtke versus CAPE for PC08) may contribute significantly to this difference. The comparison between the two days shows that both ensembles have more variability on 25 June 2008, but in the weak forcing case the two subensembles from the 100-member PC08 ensemble indicate that the contribution from the stochastic subensemble to the overall variability is more important.

b. Deterministic quality measures

The most important distinction between the Tiedtke and PC08 forecasts seen in section 3a is the extent to which the precipitation is concentrated into local intense maxima or spread more uniformly. Here the impact of these structures on deterministic forecast skill will be assessed. Since the precise location of the precipitation features is not expected to be well predicted with PC08, the neighborhood verification methods FSS (Figs. 5 and 6) and upscaling with the FBI (Fig. 7) are used.

Fig. 5. Temporal development of FSS for threshold (a),(b) 0.1 and (c),(d) 2.0 mm h−1 and spatial window of 63 km of 10 Tiedtke ensemble members (yellow dashed line), 10 PC08 ensemble members for the same stochastic realization (R1, blue thin line), and 10 PC08 ensemble members for the same group (IFS11, red dotted line) on (a),(c) 25 Jun 2008 and (b),(d) 1 Jul 2009 with vertical lines indicating the averaging period for Figs. 6 and 7.

Fig. 6. FSS of hourly precipitation averaged (left) from 0400 to 2000 UTC 25 Jun 2008 and (right) from 0900 to 1900 UTC 1 Jul 2009 for (a),(b) PC08 ensemble (100 members); (c),(d) Tiedtke ensemble (10 members); and (e),(f) the difference PC−TD for several precipitation thresholds and several spatial windows.

Fig. 7. FBI with upscaling method of hourly precipitation averaged (left) from 0400 to 2000 UTC 25 Jun 2008, and (right) from 0900 to 1900 UTC 1 Jul 2009 for (a),(b) PC08 ensemble (100 members); (c),(d) Tiedtke ensemble (10 members); and (e),(f) the difference for several precipitation thresholds and several spatial windows.

Figure 5 assesses the temporal development of the FSS and its variability among the Tiedtke ensemble and two PC08 subensembles representing the variability due to stochasticity (IFS11) and due to boundary conditions (R1) for two precipitation thresholds in a fixed spatial window of 63 km. On 25 June 2008 for a low threshold of 0.1 mm h−1 (Fig. 5a), the variability within all ensembles decreases over the day and is largest in the morning before the front is in the evaluation domain. The largest variability is found in the FSS of the PC08-R1 ensemble members and the smallest in PC08-IFS11. This is generally also found with a higher threshold of 2.0 mm h−1, but here the PC08 members have higher skill after 1500 UTC (Fig. 5c). On 1 July 2009, there is less variability within each ensemble itself but larger variability between the ensembles. Both PC08 subensembles have more skill in FSS at the low threshold during the onset of convection in the first half of the day, whereas the Tiedtke ensemble is superior during the time when most low-threshold events occur (Fig. 5b). Skill at the high threshold is seen with the PC08 subensembles during the precipitating hours (cf. Fig. 4), with small differences between the two subensembles. Tiedtke is considerably poorer and only comparable to PC08 over a few hours in the morning convection initiation period; later on, the FSS drops to zero (Fig. 5d).

To summarize the results, tables are shown for FSS and FBI, averaged over all members within each ensemble and over the precipitating times (indicated by vertical lines in Figs. 4 and 5), for the two case studies separately. The forecast skill is evaluated for several thresholds from 0.1 to 10 mm h−1 and spatial windows of sizes from 7 km (1 grid point) to 315 km (45 grid points). The difference between the ensembles (calculation differs with score) highlights the more skillful ensemble.

On 25 June 2008 (Figs. 6a,c,e), the PC08 ensemble (Fig. 6a) and the Tiedtke ensemble (Fig. 6c) show similar patterns in the FSS. Skill increases with increasing window size and decreases with increasing threshold. The highest skill is reached with the largest window (315 km) and the lowest threshold (0.1 mm h−1). The difference between both ensembles (Fig. 6e) shows that the Tiedtke ensemble has higher skill for all spatial windows and thresholds (except one value). Differences are small for small windows and thresholds of 1 mm h−1 or greater.

In the case study representing weak large-scale forcing on 1 July 2009 (Figs. 6b,d,f), the PC08 ensemble again has high skill for low thresholds and large windows, but now also for large windows at high thresholds (Fig. 6b). This is not found with the Tiedtke ensemble (Fig. 6d), and as a result the difference (Fig. 6f) shows much higher skill with PC08 for large windows and high thresholds. For small windows and low thresholds, the Tiedtke ensemble has marginally higher skill.

The comparison of the two case studies shows that the PC08 ensemble has higher forecast skill in the weak forcing situation, especially in the averages over large windows and high thresholds indicating high-impact weather. In contrast, in the strong forcing situation, the Tiedtke ensemble has higher skill over all spatial windows and in particular for low precipitation thresholds, although differences are mostly small. The results are consistent with the histograms in Fig. 3, which show that the PC08 scheme produces too few grid points with low precipitation amounts, while the Tiedtke scheme almost completely fails to produce intense maxima.

Differences in the domain-averaged precipitation between the two schemes were shown in Fig. 4. Evaluation of the upscaled FBI allows these differences to be associated with particular scales and thresholds in the precipitation field, which can give insight into the mechanisms responsible. In the strong forcing situation (Figs. 7a,c,e), the PC08 ensemble (Fig. 7a) shows varying behavior over the different thresholds and scales. At the grid scale (7 km) over all thresholds (except 10 mm h−1) the observed precipitation is underestimated by the forecasts. In contrast, for spatial windows between 21 and 189 km and low thresholds (0.1–0.5 mm h−1) rain is overestimated. This is even greater with the largest threshold of 10 mm h−1. In general, skill (closeness to the perfect value of one) is lowest for small window sizes and high thresholds and has maximum values at moderate thresholds (around 1 mm h−1) and spatial windows of medium size (around 35 km). High thresholds could not be evaluated with large spatial windows since these values did not occur. The Tiedtke ensemble (Fig. 7c) overestimates precipitation in low thresholds (up to 1 mm h−1) but for the lowest two thresholds skill increases with window size. Thresholds larger than 1 mm h−1 lead to an underestimation of the precipitation amount. The comparison of the differences of both ensembles to the optimal value of one (Fig. 7e) shows that the PC08 is superior to the Tiedtke ensemble in estimating the correct precipitation amount, especially for high thresholds (although not for 10 mm h−1), but also for lower thresholds with increasing window sizes, although differences are small for intermediate thresholds of 0.5 mm h−1. The generally good performance of both schemes for large window sizes and low thresholds is presumably related to the closure assumption, which attempts to constrain the mean rainfall in terms of the large-scale environment, whereas the FBI for small windows and higher thresholds is strongly influenced by the local structure of the precipitation field.

In the weak forcing situation on 1 July 2009 (Figs. 7b,d,f), the PC08 ensemble (Fig. 7b) underestimates the precipitation amount except for the highest thresholds (5 and 10 mm h−1). Again, skill increases with increasing size of spatial windows. The Tiedtke ensemble (Fig. 7d) overestimates precipitation for low thresholds and small spatial windows and underestimates the other low thresholds and large windows. Forecasts for thresholds larger than 1.0 mm h−1 hardly have skill or are not issued at all. The comparison of the skill of the two ensembles (Fig. 7f) shows that in almost all categories the PC08 ensemble is superior or the differences are negligibly small. In contrast to the stronger forced case (Fig. 7e), differences between the two ensembles are larger, consistent with the more realistic prediction of large local maxima by PC08.

Groenemeijer and Craig (2012) showed that the contribution of the stochastic convection scheme to variability of precipitation was strongly case dependent, but did not consider different precipitation thresholds. Figure 8 shows the standard deviation of the FSS with the 63-km spatial window of the Tiedtke and the PC08 ensemble in the two different forcing situations (note that Fig. 6 showed the mean values over several spatial windows and thresholds). Additionally, the standard deviation averaged over the 10 subensembles of the PC08 ensemble with same boundary conditions is displayed. The contribution of this internal variability to the entire variability in the PC08 ensemble is displayed in terms of percentage to show the importance of multiple realizations of the stochastic convection field.

Fig. 8. Standard deviation of precipitating hours average FSS with spatial window of 63 km on (a) 25 Jun 2008 and (b) 1 Jul 2009 of the Tiedtke ensemble (yellow, square), the PC08 ensemble (red, circle), the stochastic subensembles of PC08 (blue, triangle), and its percentage of the total PC08 variability.

On 25 June 2008, except for the lowest thresholds, the standard deviation of the Tiedtke ensemble is larger than the standard deviation of the entire PC08 ensemble (Fig. 8a). For both, it decreases with increasing threshold. The contribution of the stochastic PC08 subensembles to the total variability increases with threshold, but never reaches 40%. In the weak forcing situation on 1 July 2009 (Fig. 8b), the amplitude of the standard deviation is larger for the PC08 ensemble than for Tiedtke (that has hardly any spread for high thresholds) and increases with threshold. The internal variability from the stochastic scheme is in the range of 80%. The comparison of the two forcing situations shows an order of magnitude difference in amplitude of variability. As expected, the relative contribution of the stochastic variability is more than twice as large in the weak forcing situation. In both cases it increases with threshold although the increase is faster in the forced situation (Fig. 8a).

c. Probabilistic quality measures

The probabilistic information of the ensemble forecasts is evaluated with reliability diagrams and quantified with the Brier score and its decomposition. To consider the different ensemble sizes and the small-scale variability in the PC08 ensemble, the neighborhood method (Theis et al. 2005) is applied to each ensemble member. The size of the neighborhood is fixed at 63 km (corresponding to 9 grid points).

For the case with stronger forcing on 25 June 2008 (Figs. 9a,c), the forecast probability categories for different precipitation thresholds derived from the PC08 ensemble (Fig. 9a) are populated such that the highest categories (starting with 80%) are never forecasted, and the maximum population decreases with increasing threshold. In particular, probabilities of up to 70% are issued for the lowest threshold (0.1 mm h−1), while the maximum probability for the highest threshold (here 2 mm h−1) never exceeds 30%. As a result, sharpness is low for all precipitation thresholds. Reliability and resolution vary for the different probability categories and precipitation thresholds as well. Low and moderate thresholds (up to 0.5 mm h−1) show some reliability and resolution in terms of the lower distance to the diagonal and steeper curves. The refinement distribution for the Tiedtke ensemble (Fig. 9c) also shows a decreasing population of high forecast categories with increasing precipitation threshold, but in general the probabilities are higher than with the PC08 ensemble. The highest forecast category of 100% is populated for the 0.1 and 0.2 mm h−1 threshold. Nevertheless, the sharpness is again poor since the population decreases steadily with increasing threshold. The comparison of the probabilistic information as measured with the Brier score (Fig. 10a) of the PC08 and the Tiedtke ensemble shows that the Tiedtke ensemble is superior over all thresholds, especially in terms of resolution, but differences become smaller for high thresholds (since these are rare events). The resolution is higher in the Tiedtke ensemble for all thresholds as well, but concerning reliability, Tiedtke is only superior for higher thresholds.

Fig. 9. Reliability diagrams and refinement distributions of the (a),(b) PC08 and (c),(d) Tiedtke ensemble on (a),(c) 25 Jun 2008 and (b),(d) 1 Jul 2009 for several precipitation thresholds and averaged over all members.

Fig. 10. Brier score (dots) and its reliability (triangles) and resolution (squares) components of the PC08 (red) and the Tiedtke ensemble (yellow) on (a) 25 Jun 2008 and (b) 1 Jul 2009 for several precipitation thresholds and averaged over all members.

In the weak forcing situation on 1 July 2009 (Fig. 9b) the refinement factorization of the PC08 ensemble shows that only small probabilities are issued, and the higher the precipitation threshold, the smaller the forecasted maximum probabilities. But even for the lowest precipitation thresholds, the forecasted probabilities never exceed 50%. The shape of the calibration function indicates reliability and resolution for low precipitation thresholds but hardly any for higher probabilities. The reliability diagram of the Tiedtke ensemble (Fig. 9d) shows that the forecasts hardly have any reliability and resolution. Although high forecast categories are populated, the forecasts lack sharpness. The comparison of both ensembles indicates higher skill in the probabilistic information of the PC08 ensemble. This is confirmed by the Brier score and its decomposition (Fig. 10b), which has smaller values in the Brier score and the reliability component. The overall resolution is similarly small in both ensembles; although the reliability diagrams suggest the PC08 ensemble is superior for low thresholds, this is offset by the complete absence of forecasts of higher probabilities. As indicated in the reliability diagrams, the differences decrease with increasing precipitation threshold. In general, the results for both cases are consistent with the properties of the individual precipitation forecasts noted previously.

4. Conclusions

In this study, the performance of the stochastic convective parameterization of Plant and Craig (2008, hereafter PC08) is evaluated in comparison to radar data. Groenemeijer and Craig (2012) showed that the stochastic scheme contributes significant variability in a mesoscale ensemble prediction system, but that the contribution varies strongly between different weather regimes. Here we investigated whether this additional variability produces a measurable improvement in various metrics of forecast quality. Two cases selected from the study of Groenemeijer and Craig (2012) were considered to illustrate the performance under strong and weak large-scale forcing of convection. As a reference, the deterministic Tiedtke parameterization used operationally in the COSMO model was also evaluated.

The PC08 parameterization uses a CAPE closure to determine an ensemble-mean convective mass flux for a spectrum of convective plumes, but rather than applying this mass flux at each grid point, a random selection of convective plumes is drawn, following the theoretical distribution of Craig and Cohen (2006). As a result, the precipitation patterns produced by this scheme have a patchy character, with large variability on the grid scale, but follow a similar envelope to those produced by the deterministic Tiedtke scheme. Histograms of gridpoint precipitation intensity showed that the Tiedtke scheme overpredicts low rainfall rates but completely misses the higher intensities. In contrast, the stochastic scheme is able to produce realistic frequencies of intense precipitation, while underpredicting the low intensities, at least for the weak forcing case. For the strong forcing case, PC08 overestimates the occurrence of high precipitation rates, indicating that the scheme is too “patchy.”
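To make the stochastic element concrete, the following minimal sketch (with assumed variable names, not the PC08 source code) draws one realization of the plume ensemble for a grid column under the Craig and Cohen (2006) equilibrium statistics: the number of plumes is Poisson distributed and the individual plume mass fluxes are exponentially distributed about a prescribed mean mass flux per cloud, so that the expected total matches the closure value.

```python
# Illustrative sketch of random plume selection following Craig-Cohen statistics;
# not the operational PC08 implementation.
import numpy as np

rng = np.random.default_rng(0)

def sample_plume_ensemble(M_mean, m_per_cloud, rng=rng):
    """M_mean: equilibrium (closure) total mass flux for the column [kg s-1].
    m_per_cloud: prescribed mean mass flux per cloud <m> [kg s-1].
    Returns individual plume mass fluxes whose expected sum is M_mean."""
    n_expected = M_mean / m_per_cloud              # <N> = <M>/<m>
    n_clouds = rng.poisson(n_expected)             # random number of plumes
    return rng.exponential(m_per_cloud, n_clouds)  # random plume mass fluxes

# Strong forcing (large M_mean) gives many plumes and small relative fluctuations
# of the total; weak forcing gives few plumes and large gridpoint "patchiness".
fluxes = sample_plume_ensemble(M_mean=5.0e7, m_per_cloud=2.0e7)
print(len(fluxes), fluxes.sum())
```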

The different characters of the stochastic and deterministic parameterizations lead to differences in the various quantitative skill scores. The ability of PC08 to generate locally intense regions of precipitation yields substantial improvements over the deterministic Tiedtke scheme in the upscaled frequency bias index (FBI) for high precipitation thresholds. The effect on the fractions skill score (FSS) is more subtle and differs between the two cases. For the weak forcing case, the stochastic scheme gives major improvements for high thresholds and large spatial scales but, as expected, no improvement at small scales, since the locations of the intense precipitation patches are chosen randomly. Since the precipitation pattern produced by PC08 is too patchy in the strongly forced case, no improvement in FSS is seen there; indeed the deterministic scheme is superior, although the character of the errors is quite different.
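For reference, a hedged sketch of the two neighborhood scores discussed above, in the spirit of Roberts and Lean (2008); the function names and the uniform-filter implementation are our own assumptions rather than the verification code used in this study.

```python
# Frequency bias index (FBI) and fractions skill score (FSS) for one threshold
# and one neighborhood size; illustrative only.
import numpy as np
from scipy.ndimage import uniform_filter

def fbi(forecast, observed, thresh):
    """Ratio of forecast to observed exceedance counts (1 is unbiased)."""
    return (forecast >= thresh).sum() / max((observed >= thresh).sum(), 1)

def fss(forecast, observed, thresh, n):
    """Fractions skill score on an n x n gridpoint neighborhood."""
    fbin = (forecast >= thresh).astype(float)
    obin = (observed >= thresh).astype(float)
    # Neighborhood fractions via a moving-average filter.
    ffrac = uniform_filter(fbin, size=n, mode="constant")
    ofrac = uniform_filter(obin, size=n, mode="constant")
    mse = np.mean((ffrac - ofrac) ** 2)
    mse_ref = np.mean(ffrac ** 2) + np.mean(ofrac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```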

The probabilistic information of the ensemble forecasts, in terms of reliability and resolution, was found to be relatively small for both case studies and both parameterization schemes. The reliability diagrams are, however, very different, and there is a tendency for the stochastic scheme to produce comparatively better results for forecasts of high thresholds, while the deterministic scheme is superior at low thresholds. This can again be attributed to the patchy versus smooth precipitation patterns produced by the respective schemes. A potential caveat is that the application of the neighborhood method in the evaluation procedure influences the probabilistic properties (e.g., the sharpness of the forecasts). Additionally, the results of this study are based on two cases, and their significance will be tested on a larger dataset in the future.

It should be noted that not all differences in quality between the two convection schemes are related to the stochastic variability. The mass-flux closure and the entraining plume model used by PC08 also differ from those of the Tiedtke scheme, so that even in a model with very coarse resolution, where the stochastic variability is small, the schemes may produce different results. The most visible effect is likely to be in the timing of the precipitation, which is controlled by the closure, since random initiation of convective plumes cannot begin until the equilibrium mass flux is computed to be greater than zero. Differences in the plume model will affect the vertical redistribution of moisture. It does not appear that this had a strong influence on the present results, which focus on precipitation skill. However, the moisture differences would be expected to accumulate over time and would become more important in longer model integrations.

Overall, the stochastic PC08 scheme successfully addresses the problem of small-scale variability of precipitation, but only in certain circumstances are the quantitative scores significantly superior to those of the operational scheme. There is, however, room for improvement, since the stochastic variability has not been “tuned” in any way. The dominant parameter determining the extent to which precipitation is broken up into local intense patches is the mean mass flux per cloud. This was originally chosen by PC08 based on the simulations of tropical oceanic convection of Cohen and Craig (2006) and modified by Groenemeijer and Craig (2012) to improve the domain-averaged precipitation rates. The comparison with observations shown here indicates that the mean mass flux per cloud is too large, producing too much variability. In addition, the visual impression from the radar data is that the precipitation features tend to be larger than single grid points of the 7-km COSMO model, and the results would be improved by applying the output of the parameterization over several adjacent grid points, or by allowing the plumes to be advected in space over their lifetimes. It is striking, however, that the scale of the convective features in the observations is very different between the two case studies, with a much smoother character under strong forcing. The degree of stochastic variability should therefore depend on environmental factors. This is an important topic for future research.
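The role of the mean mass flux per cloud can be made explicit using the equilibrium fluctuation statistics of Craig and Cohen (2006), under which (as we summarize them here) individual plume mass fluxes follow an exponential distribution and the variance of the total mass flux in a region scales with the mean mass flux per cloud:

$$ p(m) = \frac{1}{\langle m \rangle} \exp\!\left(-\frac{m}{\langle m \rangle}\right), \qquad \langle (\delta M)^2 \rangle = 2\,\langle m \rangle \langle M \rangle , $$

so that the relative variability $\langle (\delta M)^2 \rangle / \langle M \rangle^2 = 2 \langle m \rangle / \langle M \rangle$ increases with $\langle m \rangle$ for a fixed closure mass flux $\langle M \rangle$; reducing the mean mass flux per cloud therefore directly reduces the patchiness of the parameterized precipitation.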

The impact of the stochastic parameterization was evaluated here in terms of precipitation, since this is the field that is most directly affected. The effects on measures such as the fractions skill score are easy to relate to the assumptions of the scheme and can be expected to apply generally, although only two case studies were examined here. It would also be of interest to assess the effects on forecasts of other quantities such as wind or temperature, but this is more difficult. The effects would likely be smaller than the changes in the precipitation field and would probably require significantly more data to distinguish from the background variability associated with other mechanisms. This requirement would be partially mitigated if there were a clear hypothesis regarding the form the effects of the stochastic perturbations would take. One possibility would be to follow the upscale error growth experiments of Selz and Craig (2014), which quantify the changes to disturbance kinetic energy on different scales following the three-stage error growth model of Zhang et al. (2007).
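A hedged illustration of such a diagnostic (our own sketch, not the procedure of Selz and Craig 2014): the difference kinetic energy between a perturbed and a reference simulation, together with its spectrum along one horizontal direction, shows on which scales the stochastic perturbations project.

```python
# Difference kinetic energy (DKE) and its zonal spectrum; illustrative only.
import numpy as np

def dke_spectrum(u1, v1, u2, v2, dx):
    """Return mean DKE and its power spectrum along the last (x) axis."""
    du, dv = u1 - u2, v1 - v2
    dke = 0.5 * (du**2 + dv**2)
    # Power spectrum along x, averaged over the remaining dimensions.
    spec = 0.5 * (np.abs(np.fft.rfft(du, axis=-1))**2
                  + np.abs(np.fft.rfft(dv, axis=-1))**2)
    k = np.fft.rfftfreq(du.shape[-1], d=dx)   # wavenumbers (cycles per meter)
    return dke.mean(), k, spec.mean(axis=tuple(range(spec.ndim - 1)))
```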

A final question that should be addressed in evaluating a physically based stochastic parameterization is whether the same improvements in the probabilistic forecast could be achieved in a simpler way, for example, by using a perturbed-tendency scheme such as that of Buizza et al. (1999), or even by postprocessing the output of a deterministic forecast. Since the results of the present study are directly related to the spatial variability introduced by the PC08 scheme, it is entirely possible that much of the gain could be obtained by postprocessing. The results would depend on the methods used, but quite sophisticated techniques based on spatial statistics are being developed (Scheuerer 2014). On the other hand, if there are significant dynamical feedbacks from the convective variability onto the larger-scale flow, there is potential for a physically based parameterization to improve an ensemble forecast in ways that postprocessing could not. This, too, is an important topic for future research.
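For comparison, a minimal sketch of the perturbed-tendency idea in the spirit of Buizza et al. (1999): the total parameterized tendency is multiplied by a random factor held constant over coarse space–time patches. The patch size, the ±50% amplitude, and the function name are illustrative assumptions, not the ECMWF implementation.

```python
# Patchwise-constant multiplicative perturbation of a 2-D tendency field.
import numpy as np

rng = np.random.default_rng(42)

def perturb_tendencies(tendency, patch=10, amplitude=0.5, rng=rng):
    """Multiply a 2-D tendency field by random factors constant on patch x patch blocks."""
    ny, nx = tendency.shape
    nyp, nxp = -(-ny // patch), -(-nx // patch)            # number of patches (ceil)
    factors = rng.uniform(1.0 - amplitude, 1.0 + amplitude, size=(nyp, nxp))
    # Expand each patch factor to the full grid and trim to the field shape.
    full = np.kron(factors, np.ones((patch, patch)))[:ny, :nx]
    return tendency * full

# Usage: apply the same factors to all physics tendencies of one ensemble member
# for a fixed time interval (e.g., a few hours), then redraw.
```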

Acknowledgments

We gratefully acknowledge the Deutscher Wetterdienst (DWD) for providing the European radar composite. Annette Foerster was supported by the German Research Foundation (DFG) as part of the research unit Predictability and Dynamics of Weather Systems in the Atlantic-European Sector (PANDOWAE). We thank the three anonymous reviewers for their constructive and helpful comments that improved the paper significantly.

REFERENCES

  • Baldauf, M., A. Seifert, J. Förstner, D. Majewski, M. Raschendorfer, and T. Reinhardt, 2011: Operational convective-scale numerical weather prediction with the COSMO model: Description and sensitivities. Mon. Wea. Rev., 139, 3887–3905, doi:10.1175/MWR-D-10-05013.1.

  • Bierdel, L., P. Friederichs, and S. Bentzien, 2012: Spatial kinetic energy spectra in the convection-permitting limited-area NWP model COSMO-DE. Meteor. Z., 21, 245–258, doi:10.1127/0941-2948/2012/0319.

  • Bright, D. R., and S. L. Mullen, 2002: Short-range ensemble forecasts of precipitation during the southwest monsoon. Wea. Forecasting, 17, 1080–1100, doi:10.1175/1520-0434(2002)017<1080:SREFOP>2.0.CO;2.

  • Buizza, R., M. Miller, and T. Palmer, 1999: Stochastic representation of model uncertainties in the ECMWF ensemble prediction system. Quart. J. Roy. Meteor. Soc., 125, 2887–2908, doi:10.1002/qj.49712556006.

  • Cohen, B. G., and G. C. Craig, 2006: Fluctuations in an equilibrium convective ensemble. Part II: Numerical experiments. J. Atmos. Sci., 63, 2005–2015, doi:10.1175/JAS3710.1.

  • Craig, G. C., and B. G. Cohen, 2006: Fluctuations in an equilibrium convective ensemble. Part I: Theoretical formulation. J. Atmos. Sci., 63, 1996–2004, doi:10.1175/JAS3709.1.

  • Dierer, S., and Coauthors, 2009: Deficiencies in quantitative precipitation forecasts: Sensitivity studies using the COSMO model. Meteor. Z., 18, 631–645, doi:10.1127/0941-2948/2009/0420.

  • Done, J. M., G. C. Craig, S. L. Gray, P. A. Clark, and M. E. B. Gray, 2006: Mesoscale simulations of organized convection: Importance of convective equilibrium. Quart. J. Roy. Meteor. Soc., 132, 737–756, doi:10.1256/qj.04.84.

  • Groenemeijer, P., and G. Craig, 2012: Ensemble forecasting with a stochastic convective parametrization based on equilibrium statistics. Atmos. Chem. Phys., 12, 4555–4565, doi:10.5194/acp-12-4555-2012.

  • Hamill, T. M., C. Snyder, and R. E. Morss, 2000: A comparison of probabilistic forecasts from bred, singular-vector, and perturbed observation ensembles. Mon. Wea. Rev., 128, 1835–1851, doi:10.1175/1520-0493(2000)128<1835:ACOPFF>2.0.CO;2.

  • Kain, J. S., and J. M. Fritsch, 1990: A one-dimensional entraining/detraining plume model and its application in convective parameterization. J. Atmos. Sci., 47, 2784–2802, doi:10.1175/1520-0469(1990)047<2784:AODEPM>2.0.CO;2.

  • Keane, R., and R. Plant, 2012: Large-scale length and time-scales for use with stochastic convective parametrization. Quart. J. Roy. Meteor. Soc., 138, 1150–1164, doi:10.1002/qj.992.

  • Keane, R., G. C. Craig, C. Keil, and G. Zängl, 2014: The Plant–Craig stochastic convection scheme in ICON and its scale adaptivity. J. Atmos. Sci., 71, 3404–3415, doi:10.1175/JAS-D-13-0331.1.

  • Keil, C., F. Heinlein, and G. C. Craig, 2014: The convective adjustment time-scale as indicator of predictability of convective precipitation. Quart. J. Roy. Meteor. Soc., 140, 480–490, doi:10.1002/qj.2143.

  • Kober, K., C. Keil, G. C. Craig, and A. Dörnbrack, 2012: Blending a probabilistic nowcasting method with a high-resolution numerical weather prediction ensemble for convective precipitation forecasts. Quart. J. Roy. Meteor. Soc., 138, 755–768, doi:10.1002/qj.939.

  • Kühnlein, C., C. Keil, G. C. Craig, and C. Gebhardt, 2014: The impact of downscaled initial condition perturbations on convective-scale ensemble forecasts of precipitation. Quart. J. Roy. Meteor. Soc., 140, 1552–1562, doi:10.1002/qj.2238.

  • Lewis, J. M., 2005: Roots of ensemble forecasting. Mon. Wea. Rev., 133, 1865–1885, doi:10.1175/MWR2949.1.

  • Lin, J., and J. Neelin, 2003: Toward stochastic deep convective parameterization in general circulation models. Geophys. Res. Lett., 30, 1162, doi:10.1029/2002GL016203.

  • Molteni, F., R. Buizza, C. Marsigli, A. Montani, F. Nerozzi, and T. Paccagnella, 2001: A strategy for high-resolution ensemble prediction. Part I: Definition of representative members and global model experiments. Quart. J. Roy. Meteor. Soc., 127, 2069–2094, doi:10.1002/qj.49712757612.

  • Plant, R., and G. Craig, 2008: A stochastic parameterization for deep convection based on equilibrium statistics. J. Atmos. Sci., 65, 87–105, doi:10.1175/2007JAS2263.1.

  • Roberts, N., and H. Lean, 2008: Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Wea. Rev., 136, 78–97, doi:10.1175/2007MWR2123.1.

  • Scheuerer, M., 2014: Probabilistic quantitative precipitation forecasting using ensemble model output statistics. Quart. J. Roy. Meteor. Soc., 140, 1086–1096, doi:10.1002/qj.2183.

  • Selz, T., and G. Craig, 2014: Upscale error growth in a high-resolution simulation of a summertime weather event over Europe. Mon. Wea. Rev., 143, 813–827, doi:10.1175/MWR-D-14-00140.1.

  • Shutts, G., 2005: A kinetic energy backscatter algorithm for use in ensemble prediction systems. Quart. J. Roy. Meteor. Soc., 131, 3079–3102, doi:10.1256/qj.04.106.

  • Stephan, K., S. Klink, and C. Schraff, 2008: Assimilation of radar derived rain rates into the convective scale model COSMO-DE at DWD. Quart. J. Roy. Meteor. Soc., 134, 1315–1326, doi:10.1002/qj.269.

  • Teixeira, J., and C. A. Reynolds, 2008: Stochastic nature of physical parameterizations in ensemble prediction: A stochastic convection approach. Mon. Wea. Rev., 136, 483–496, doi:10.1175/2007MWR1870.1.

  • Theis, S., A. Hense, and U. Damrath, 2005: Probabilistic precipitation forecasts from a deterministic model: A pragmatic approach. Meteor. Appl., 12, 257–268, doi:10.1017/S1350482705001763.

  • Tiedtke, M., 1989: A comprehensive mass flux scheme for cumulus parameterization in large-scale models. Mon. Wea. Rev., 117, 1779–1800, doi:10.1175/1520-0493(1989)117<1779:ACMFSF>2.0.CO;2.

  • Wei, M., Z. Toth, R. Wobus, Y. Zhu, C. H. Bishop, and X. Wang, 2006: Ensemble transform Kalman filter-based ensemble perturbations in an operational global prediction system at NCEP. Tellus, 58A, 28–44, doi:10.1111/j.1600-0870.2006.00159.x.

  • Weusthoff, T., F. Ament, M. Arpagaus, and M. Rotach, 2010: Assessing the benefits of convection-permitting models by neighborhood verification: Examples from MAP D-PHASE. Mon. Wea. Rev., 138, 3418–3433, doi:10.1175/2010MWR3380.1.

  • Wilks, D., 2006: Probability forecasts of discrete predictands. Statistical Methods in the Atmospheric Sciences, 2nd ed. Academic Press, 282–298.

  • Zepeda-Arce, J., E. Foufoula-Georgiou, and K. Droegemeier, 2000: Space-time rainfall organization and its role in validating quantitative precipitation forecasts. J. Geophys. Res., 105, 10 129–10 146, doi:10.1029/1999JD901087.

  • Zhang, F., N. Bei, R. Rotunno, C. Snyder, and C. C. Epifanio, 2007: Mesoscale predictability of moist baroclinic waves: Convection-permitting experiments and multistage error growth dynamics. J. Atmos. Sci., 64, 3579–3594, doi:10.1175/JAS4028.1.