Since April 2007, the numerical weather prediction model, COSMO (Consortium for Small Scale Modelling), has been used operationally in a convection-permitting configuration, named COSMO-DE, at the Deutscher Wetterdienst (DWD; German weather service). Here the authors discuss the model changes that were necessary for the convective scale, and report on the experience from the first years of operational application of the model. For COSMO-DE the ability of the numerical solver to treat small-scale structures has been improved by using a Runge–Kutta method, which allows for the use of higher-order upwind advection schemes. The one-moment cloud microphysics parameterization has been extended by a graupel class, and adaptations for describing evaporation of rain and stratiform precipitation processes were made. Comparisons with a much more sophisticated two-moment scheme showed only minor differences in most cases with the exception of strong squall-line situations. Whereas the deep convection parameterization was switched off completely, small-scale shallow convection was still parameterized by the appropriate part of the Tiedtke scheme. During the first year of operational use, convective events in synoptically driven situations were satisfactorily simulated. Also the daily cycles of summertime 10-m wind and 1-h precipitation sums were well captured. However, it became evident that the boundary layer description had to be adapted to enhance convection initiation in airmass convection situations. Here the asymptotic Blackadar length scale l∞ had proven to be a sensitive parameter.
To provide suitable guidance for severe weather warnings, many national meteorological services operate very high-resolution numerical weather prediction (NWP) systems for their region of interest.
In Germany severe weather during the summer season is often connected to deep convection leading to heavy precipitation and gust fronts, especially if convective cells organize in the form of mesoscale convective clusters or squall lines. The formation, intensity, and propagation of such convective systems, especially those developing ahead of the cold front in the warm sector of cyclones, are only poorly predicted by NWP models that rely on parameterized convection because they are unable to simulate the detailed life cycle of convective clouds and do not properly describe the interaction between the convective and driving larger scales.
During the winter season, synoptic-scale cyclones are mainly responsible for severe weather events like heavy precipitation and gale-force winds, but actual local damages depend on mesoscale properties like frontal intensity and propagation, and on the interaction of the flow with small-scale topography in Germany. Very high-resolution NWP models have the potential of improving the prediction of severe weather as well as of “ordinary” local weather phenomena like land–sea breezes, mountain waves, and heat island effects.
The Deutscher Wetterdienst (DWD; German weather service) developed a new convective-scale NWP model with a horizontal grid spacing of 2.8 km from 2003 to 2006. The operational 7-km mesoscale model, Lokal Modell (LM; Steppeler et al. 2003), now called COSMO-EU, with parameterized deep convection served as the basis of the development activities in the fields of numerical algorithms, physical parameterization, and data assimilation. The Consortium for Small Scale Modelling (COSMO; see online at http://www.cosmo-model.org/) is a group of meteorological services from Germany, Greece, Italy, Poland, Romania, Russia, and Switzerland that pool their research and development resources in the field of regional NWP. After 1 yr of preoperational trials, the convective-scale model COSMO-DE became operational in April 2007, providing 21-h forecasts 8 times a day.
Similar model developments have been pursued in recent years by a number of research institutions and meteorological services worldwide. The Weather Research and Forecasting (WRF) model, developed at the National Center for Atmospheric Research and the National Centers for Environmental Prediction, is a comprehensive mesoscale modeling infrastructure that includes a convective-scale component, too (Weisman et al. 2008; Skamarock and Klemp 2008). The Japanese Meteorological Agency (JMA) nonhydrostatic model (NHM; Saito et al. 2006) has a grid spacing of 5 km and a rapid update cycle providing 8 forecasts daily. In Europe, the Met Office recently introduced a convective-scale version of the Unified Model (Staniforth and Wood 2008) with a grid spacing of 1.5 km, and Météo-France put the convective-scale model, Applications of Research to Operations at Mesoscale (AROME; Bouttier 2009) including a three-dimensional variational data assimilation (3D-Var) suite with a grid spacing of 2.5 km into operation in December 2008.
In operational NWP applications, the grid spacing is often dictated by operational constraints (e.g., available computing resources and production schedules). Therefore, we do not investigate different model resolutions in this paper, but discuss the behavior of the model in this specific configuration. Nevertheless, the resolution dependency is an important aspect when simulating deep convection (Weisman et al. 1997; Bryan et al. 2003; Roberts and Lean 2008; Weisman et al. 2008; Schwartz et al. 2010).
In section 2 we outline the dynamical core and physical parameterizations of the convective-scale model COSMO-DE. Verification results of the first three years of the operational application of COSMO-DE are presented in section 3. The distinct sensitivity of convective-scale models to the structure of the atmospheric boundary layer is discussed in section 4, and some sensitivities to cloud microphysics are shown in section 5. A summary and conclusions are given in section 6.
2. Recent improvements of the COSMO model for convective-scale NWP
a. Dynamics and numerics
1) Runge–Kutta dynamical core
In addition to the so-called leapfrog scheme (Klemp and Wilhelmson 1978) and a semi-implicit solver (Thomas et al. 2000), a new dynamical core (i.e., a solver of the compressible Euler equations) is available in the COSMO model and operationally used in COSMO-DE. The new dynamical core uses the time-splitting approach of Wicker and Skamarock (2002). The motivation for introducing this approach came from the fact that at a horizontal grid spacing of 2.8 km, deep convection is only marginally resolved. Therefore, a numerical solver is needed that produces less numerical noise at the grid scale than the centered-difference discretizations of the leapfrog scheme.
The prognostic equations for the spherical wind components (u, υ, and w) and for the deviations of temperature and pressure T′, p′ from a stationary, hydrostatic base state are split into a slow and a fast part. The slow part consists of the advection and Coriolis terms and tendencies from the physical processes. The fast parts are the pressure gradient terms and the working terms in the T′ and p′ equations, which lead to the propagation of sound waves, and the buoyancy terms, which together lead to the propagation of gravity waves. In the time-splitting approach of Wicker and Skamarock (2002) the tendencies of the slow processes are added in each substep of the fast processes, sometimes called subcycling. Their three-stage Runge–Kutta scheme (RK3WS) is extended by an implicit vertical advection calculation in the following way. Starting from the fields Φn at time step n, first the implicit scheme

L(Φn) = Aλ(Φn) + Aϕ(Φn) + AzΦ + P(Φn) (1)

is solved.
Here Aλ and Aϕ are spatial discretizations of the horizontal advection in λ- and ϕ-direction, respectively, for which an upwind scheme of fifth order is used. The Az denotes a vertical advection spatial operator. The P(Φn) contains the tendencies of the physical parameterizations that are calculated once outside of the RK scheme. Then the first RK substep over the time interval Δt/3 is performed by advancing the fast parts starting from the state Φn and integrating them from time t to t + Δt/3 while holding the slow process tendency L(Φn) constant. The result is a new state Φ*.
In the second step we solve

L(Φ*) = Aλ(Φ*) + Aϕ(Φ*) + AzΦ + P(Φn) (2)
and perform the second RK substep over the time interval Δt/2 by advancing the fast parts starting again from the state Φn and integrating them from time t to t + Δt/2, while holding the tendency L(Φ*) constant. The result is a new state Φ**.
In the third step we solve

L(Φ**) = Aλ(Φ**) + Aϕ(Φ**) + AzΦ + P(Φn) (3)
leading to the third RK substep, which means that the fast parts are integrated from t to t + Δt starting from Φn, while holding the tendency L(Φ**) constant. This finally results in the new state Φn+1.
The RK3WS scheme mainly stabilizes the upwind advection. RK3WS itself is formally only second-order accurate in time and of third order only for linear problems. But one can show that RK3WS together with the fifth-order upwind advection is one of the most effective advection schemes among all combinations of an s-stage RK scheme and an mth-order upwind or centered-difference advection operator (Baldauf 2008b). A rough measure for this effectiveness is the “effective Courant number” Ceff := C/s, which increases with the maximum allowable advection Courant number C and decreases with the number of stages s of the RK scheme. For RK3WS with fifth-order upwind advection, Ceff = 1.43/3 ≈ 0.48. CPU time measurements of the whole model show that the run time is roughly comparable to that of the former leapfrog scheme (with C = 1): the increase of the maximum advection Courant number to C = 1.43 compensates for the higher computational cost of the three RK stages. Additionally, the larger big time step leads to less frequent calls of the physics packages.
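The interplay of the three RK3WS stages with a fifth-order upwind operator can be sketched for 1D linear advection. The following Python fragment is a minimal illustration of the scheme described above (not COSMO code); it checks that the combination stays stable at a Courant number of 1.4, just below the quoted limit of 1.43.

```python
import numpy as np

def up5_tendency(u, a, dx):
    """Fifth-order upwind-biased advection tendency -a*du/dx for a > 0 (periodic)."""
    # standard 5th-order upwind stencil, offsets i-3 .. i+2
    dudx = (-2.0 * np.roll(u, 3) + 15.0 * np.roll(u, 2) - 60.0 * np.roll(u, 1)
            + 20.0 * u + 30.0 * np.roll(u, -1) - 3.0 * np.roll(u, -2)) / (60.0 * dx)
    return -a * dudx

def rk3ws_step(u, dt, a, dx):
    """Three-stage Runge-Kutta scheme of Wicker and Skamarock (2002)."""
    u1 = u + dt / 3.0 * up5_tendency(u, a, dx)   # first substep over dt/3
    u2 = u + dt / 2.0 * up5_tendency(u1, a, dx)  # second substep over dt/2
    return u + dt * up5_tendency(u2, a, dx)      # final substep over dt

# advect a sine wave at Courant number C = 1.4 (limit for RK3WS + up5: ~1.43)
nx, a, dx = 100, 1.0, 1.0
C = 1.4
dt = C * dx / a
x = np.arange(nx) * dx
u = np.sin(2.0 * np.pi * x / (nx * dx))
for _ in range(200):
    u = rk3ws_step(u, dt, a, dx)
# stable: the amplitude of the wave does not grow
assert np.max(np.abs(u)) <= 1.0 + 1e-6
```

Note that the fast (sound/gravity wave) subcycling of the full scheme is omitted here; the sketch isolates the slow-process RK3WS stages that determine the advective stability limit.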
The vertical advection is spatially discretized by second-order centered differences (denoted by Az). In a convection-resolving model, high vertical velocities with Courant numbers much larger than one often occur; therefore, we use an implicit Crank–Nicolson scheme (β = ½) in the RK framework. Unfortunately, the implicit vertical advection reduces the stability of the whole time-split RK scheme. The stability range can be enlarged again by mixing the initial state Φn into the “explicit” state [which is Φ* in Eq. (2)] with the weighting α = 1. This weighting leads to a certain deformation of the vertical advection process, but it proved to be stable over a broad range of vertical advection Courant numbers.
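A minimal 1D illustration of the implicit Crank–Nicolson treatment of vertical advection (β = ½) is sketched below, assuming a periodic column and a dense matrix solve for simplicity (the operational code would use a tridiagonal solver). Because centered differences give a skew-symmetric operator, the Crank–Nicolson update is neutral and remains stable at vertical Courant numbers far beyond the explicit limit.

```python
import numpy as np

def cn_advection_step_matrix(n, courant, beta=0.5):
    """Step matrix of (I + beta*C*D) u_new = (I - (1-beta)*C*D) u_old,
    with D the periodic centered-difference operator."""
    D = np.zeros((n, n))
    for i in range(n):
        D[i, (i + 1) % n] = 0.5    # centered difference: (u[i+1] - u[i-1]) / 2
        D[i, (i - 1) % n] = -0.5
    I = np.eye(n)
    lhs = I + beta * courant * D
    rhs = I - (1.0 - beta) * courant * D
    return np.linalg.solve(lhs, rhs)   # dense solve is fine for a sketch

# vertical Courant number far beyond the explicit stability limit
n, cz = 60, 5.0
step = cn_advection_step_matrix(n, cz)            # beta = 1/2 (Crank-Nicolson)
u = np.exp(-((np.arange(n) - 30.0) / 5.0) ** 2)   # Gaussian test profile
l2_0 = np.linalg.norm(u)
for _ in range(50):
    u = step @ u
# Crank-Nicolson with centered differences conserves the l2 norm exactly
assert np.isclose(np.linalg.norm(u), l2_0)
```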
The detailed subcycling methodology is described in Baldauf (2010), together with an extensive stability analysis of the whole time-splitting scheme. It is shown there that the temporal order is slightly reduced to less than second order by the time splitting, and reduced more strongly by the implicit vertical advection. This indicates, too, that a better treatment of the vertical advection should be sought.
A further extension of the new dynamical core is the possibility to abandon the shallow-atmosphere approximation: the metric terms of the velocity advection (e.g., uw tanϕ/r) and the Coriolis terms (e.g., 2Ωw cosϕ) that contain the vertical velocity can now be switched on. Whereas the advection terms seem to have no significant influence in limited-area models, the situation is less clear for the Coriolis terms. At least in the vicinity of the equator, these terms are even larger than the traditionally retained ones.
2) Scalar transport
For the advection of scalar fields (i.e., the water constituents) two different schemes are available. One can choose a fully three-dimensional semi-Lagrangian scheme (Staniforth and Côté 1991) whose backward trajectory is calculated with second-order accuracy (Baldauf and Schulz 2004). Despite the fact that this advection scheme is not mass conserving per se, it has rather good conservation properties because of the tricubic spatial interpolation, which can be implemented in a rather efficient way in the transformed coordinate system. But for positive definite variables, a clipping of negative values is necessary, which worsens the conservation. At least global conservation can be achieved by a simple multiplicative filling technique (Rood 1987). The other choice is a three-dimensional extension of the Bott scheme (Bott 1989) in a mass-consistent manner (Easter 1993) and usable for Courant numbers C > 1 (Skamarock 2006). This scheme has nearly perfect conservation properties for C < 1 (Förstner et al. 2006), but has some stability difficulties in steep terrain. Nevertheless, we operationally use this scheme for scalar transport.
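The multiplicative filling mentioned above can be sketched in a few lines. This is a generic illustration of the Rood (1987) technique, not the COSMO implementation: negative values produced by the advection scheme are clipped, and the remaining positive values are rescaled so that the global mass is restored.

```python
import numpy as np

def multiplicative_filling(q, target_mass):
    """Clip negative values of a positive-definite tracer, then rescale the
    positive remainder so that the global mass matches target_mass (Rood 1987)."""
    q = np.where(q < 0.0, 0.0, q)
    pos_mass = q.sum()
    if pos_mass > 0.0:
        q = q * (target_mass / pos_mass)
    return q

# a tracer field with small negative undershoots left behind by advection
q = np.array([0.2, -0.01, 0.5, 0.31, -0.02, 0.4])
mass = q.sum()                       # mass to be restored after clipping
q_filled = multiplicative_filling(q, mass)
assert np.all(q_filled >= 0.0)       # positive definite
assert np.isclose(q_filled.sum(), mass)   # global mass conserved
```

The rescaling makes the correction nonlocal (every positive value is reduced slightly), which is why only global, not local, conservation is achieved.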
3) Idealized test cases
To demonstrate the quality of the dynamical core, we present the results for two standard test cases for nonhydrostatic models.
The first test case is a nonstationary, nonlinear, two-dimensional density current (Straka et al. 1993). In a steady, dry adiabatically stratified atmosphere with a potential temperature of Θ = 300 K, an elliptic cold bubble with a maximum extension of 8 km horizontally and 4 km vertically, centered at 3-km height, is initialized up to 15 K colder than the surrounding atmosphere. The generation of arbitrarily small structures is suppressed by diffusion with a constant diffusion coefficient K = 75 m2 s−1. A grid-converged reference solution for Θ′ = Θ − Θ0, obtained by successively refining the grid resolution with a basic solver, is shown after 900 s in Fig. 1 (bottom). One recognizes the propagation of the leading edge of the cold-air outflow and the generation of Kelvin–Helmholtz instabilities along the density current. Figure 1 (top) presents the comparison for the Runge–Kutta third-order time-integration scheme at resolutions of Δx = Δz = 400, 200, 100, and 50 m.
The second test case, proposed by Schär et al. (2002), is a linear, stationary 2D flow over a modulated Gaussian hill

h(x) = h0 exp[−(x/b)²] cos²(πx/d),
where b = 5 km and d = 4 km. The atmospheric conditions are an inflow velocity of U0 = 10 m s−1, a constant Brunt–Väisälä frequency of N = 0.01 s−1, and a surface temperature of T(z = 0) = 288 K. The maximum height of the hill is only h0 = 25 m; it is reduced by a factor of 10 compared to Schär et al. (2002). This results in a small inverse vertical Froude number of 1/Fr = Nh0/U0 = 0.025 and therefore allows the application of a linearized solution. The grid mesh size is Δx = 500 m and Δz = 300 m with a time step of Δt = 8 s as in Schär et al. (2002). With 80 vertical levels, the upper model boundary lies at z = 24 km. The upper relaxation zone starts at z = 13 km with a thickness of 11 km. Such a thick relaxation zone is crucial for damping vertically propagating perturbations so that the results can be properly compared with an analytic solution. Simulation results after 24 h and the appropriate analytic solution (Baldauf 2008a) are shown in Fig. 2. The agreement with the analytic solution is very close. A second run with higher resolution (Δx = 250 m, Δz = 200 m) does not show significant changes; therefore, good convergence has already been achieved with Δx = 500 m.
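The test-case parameters can be checked with a few lines of Python, using the standard Schär et al. (2002) hill profile and the numbers quoted above:

```python
import math

def schaer_hill(x, h0=25.0, b=5000.0, d=4000.0):
    """Modulated Gaussian hill h(x) = h0 * exp(-(x/b)^2) * cos^2(pi*x/d),
    with the reduced peak height h0 = 25 m used in the text (SI units)."""
    return h0 * math.exp(-(x / b) ** 2) * math.cos(math.pi * x / d) ** 2

# peak height at x = 0 and the inverse vertical Froude number
N, U0, h0 = 0.01, 10.0, 25.0
assert schaer_hill(0.0) == 25.0
inv_froude = N * h0 / U0
assert abs(inv_froude - 0.025) < 1e-12   # small value => nearly linear flow
```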
b. Model physics
For the convective-scale application several extensions and modifications of the physical parameterizations have been developed within the COSMO model.
1) Cloud microphysics
The explicit simulation of deep convection poses a special challenge for the microphysical parameterization, as the cloud microphysics of convective storms is much more complicated than that of stratiform clouds. Therefore, the microphysics scheme of the COSMO model has been extended to include at least riming processes (graupel formation); that is, the new scheme is formulated as a Lin-type one-moment cloud microphysics scheme that predicts cloud water, rainwater, cloud ice, snow, and graupel (Lin et al. 1983; Reinhardt and Seifert 2006). For graupel an exponential size distribution is assumed:

Ng(Dg) = N0,g exp(−λgDg),
where N0,g = 4 × 106 m−4 (Rutledge and Hobbs 1984). For the mass–size relation of graupel particles a power law mg = agDg^bg is used (with Dg in m and mg in kg). The terminal fall velocity of single graupel particles as a function of size follows an empirical power law υg = αgDg^βg (with Dg in m and υg in m s−1). Both relations have been taken from Heymsfield and Kajikawa (1987) and resemble a relatively low-density graupel with moderate fall speed. Graupel can be formed by freezing of raindrops, including freezing by collisions of raindrops and cloud ice, and by conversion of snow to graupel due to riming. For the latter a threshold cloud water mixing ratio of 0.2 g kg−1 is assumed.
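In a one-moment scheme, the slope of the exponential graupel distribution is diagnosed from the predicted mixing ratio by integrating the mass over the distribution. The sketch below illustrates this closure; the mass–size coefficients a_g and b_g are illustrative placeholder values, not the operational ones from Heymsfield and Kajikawa (1987).

```python
import math

def graupel_slope(q_g, rho, n0_g=4.0e6, a_g=170.0, b_g=3.1):
    """Slope lambda of N(D) = N0 * exp(-lambda*D) such that the distribution
    integrates to the graupel mass density rho*q_g, assuming m(D) = a_g*D**b_g.
    a_g and b_g are illustrative placeholders (SI units throughout)."""
    mass = rho * q_g                               # graupel mass per volume [kg m^-3]
    # rho*q_g = N0 * a_g * Gamma(b_g + 1) / lambda**(b_g + 1), solved for lambda
    lam = (n0_g * a_g * math.gamma(b_g + 1.0) / mass) ** (1.0 / (b_g + 1.0))
    return lam

lam = graupel_slope(q_g=1.0e-3, rho=1.0)   # 1 g/kg of graupel
mean_diameter = 1.0 / lam                  # mean size of an exponential distribution
assert 1.0e-4 < mean_diameter < 1.0e-2     # of the order of a millimeter
```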
Many observational studies have shown that the raindrop size distribution should be parameterized by a gamma distribution

Nr(Dr) = N0,r Dr^μ exp(−λrDr),
with a dimensionless shape parameter μ. In contrast to an NWP model with parameterized deep convection, the microphysics scheme of a convective-scale model with explicit convection has to be able to describe both stratiform and convective rain. Therefore, choosing the values for N0,r and μ in a one-moment scheme involves a compromise. Based on several sensitivity studies we assume μ = 0.5 and N0,r = 3.96 × 107 m−7/2 [Ulbrich 1983, his Eq. (27)], which resulted in the best monthly accumulation of surface precipitation, reasonable convective cold pools, and the best verification scores for surface wind gusts. The chosen values also correspond well with the findings of Schlesinger et al. (1988), who suggest μ = 0.4 as the optimum value based on an evaluation of evaporation of raindrops using radar data.
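As a quick plausibility check of the chosen gamma-distribution parameters, the slope and mean drop size implied by a given rainwater content can be diagnosed analytically (a sketch, treating N0,r in SI units and assuming spherical liquid drops):

```python
import math

RHO_W = 1000.0   # density of liquid water [kg m^-3]

def gamma_dsd_slope(q_r, rho, n0, mu):
    """Slope of N(D) = N0 * D**mu * exp(-lambda*D) constrained by the
    rainwater mass density rho*q_r for spherical drops."""
    mass = rho * q_r
    # rho*q_r = N0 * (pi/6) * rho_w * Gamma(mu + 4) / lambda**(mu + 4)
    return (n0 * math.pi / 6.0 * RHO_W
            * math.gamma(mu + 4.0) / mass) ** (1.0 / (mu + 4.0))

# the operational choice: mu = 0.5, N0,r = 3.96e7
lam = gamma_dsd_slope(q_r=1.0e-3, rho=1.2, n0=3.96e7, mu=0.5)
mean_d = (0.5 + 1.0) / lam        # mean diameter of the gamma distribution
assert 1.0e-4 < mean_d < 5.0e-3   # of the order of a millimeter
```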
Other parts of the microphysics scheme of the COSMO model have also been updated recently [e.g., the Kessler-type autoconversion–accretion scheme has been replaced by the parameterization of Seifert and Beheng (2001), which is reduced to a one-moment scheme by assuming a constant cloud droplet number concentration of Nc = 5 × 108 m−3 and a constant shape parameter νc = 2].
To improve the prediction of wintertime precipitation, the snow microphysics has been revised based on the measurements of Field et al. (2005). A new parameterization of the intercept parameter N0,s of the exponential snow size distribution

Ns(Ds) = N0,s exp(−λsDs)

has been introduced as

N0,s = (27/2) M2⁴/M3³,  M2 = ρqs/αs,  M3 = as(T) M2^bs(T),

with αs = 0.038, temperature T, snow mixing ratio qs, and air density ρ; here Mn denotes the nth moment of the snow size distribution. The functions as(T) and bs(T) are given by Table 2 of Field et al. (2005). This parameterization is used instead of the constant N0,s = 8 × 105 m−4 of the previous version of the scheme. Especially at cold temperatures this leads to a much higher intercept parameter, which corresponds to smaller snowflakes at high levels that fall out much more slowly. In addition, the geometry of snow has been changed to a more dendritelike habit with a mass–diameter relation ms = αsDs² with αs = 0.038 and a terminal fall velocity following an empirical power law υs = αυDs^βυ (with Ds in m, ms in kg, and υs in m s−1).
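The link between the intercept parameter and the flake size can be made explicit with a short calculation. Assuming the quadratic mass–size relation ms = αs·Ds² given above, the slope of the exponential distribution follows from the snow mass content; the second intercept value below is just an illustrative stand-in for the higher cold-temperature values, not a number from Field et al. (2005).

```python
def snow_mean_size(q_s, rho, n0_s, alpha_s=0.038):
    """Mean flake size 1/lambda of an exponential snow size distribution
    N(D) = N0 * exp(-lambda*D) with mass-size relation m = alpha_s * D**2.
    Uses rho*q_s = N0 * alpha_s * integral(D**2 exp(-lambda*D) dD)
                 = N0 * alpha_s * 2 / lambda**3."""
    lam = (2.0 * n0_s * alpha_s / (rho * q_s)) ** (1.0 / 3.0)
    return 1.0 / lam

# a higher intercept parameter implies smaller (and hence slower) flakes
size_old = snow_mean_size(q_s=5.0e-4, rho=1.0, n0_s=8.0e5)   # old constant N0,s
size_cold = snow_mean_size(q_s=5.0e-4, rho=1.0, n0_s=8.0e7)  # illustrative higher value
assert size_cold < size_old
```

This is the mechanism behind the statement above: at cold temperatures the Field et al. (2005) parameterization raises N0,s, shifting the distribution toward smaller, slower-falling snowflakes aloft.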
2) Turbulence parameterization
The turbulence parameterization is similar to the level-2.5 scheme of Mellor and Yamada (1982; i.e., the equations for all second-order moments are taken into account, but without their time derivatives and transport terms; Raschendorfer 2001). The main differences are the use of variables that are conserved under moist adiabatic processes, namely the total water content qt = qυ + qc and the liquid water potential temperature Θl = Θ − Lcqc/(cpdΠ), with the Exner function Π = (p/p0)^(Rd/cpd). Furthermore, a so-called “circulation term” has been included. It appears as a result of a formal scale separation of small-scale turbulence (being in accordance with the turbulence closure assumptions) from larger-scale, coherent circulation structures [being nonturbulent but still subgrid scale (s.g.s.)]. The circulation term is an example of the general scale interaction terms generated by the nonlinearity of the shear terms as soon as second-order model equations are derived based on a double-filter process (one for the separation scale and one for the grid scale). In the turbulent kinetic energy (TKE) equation, it describes the transfer of nonturbulent s.g.s. kinetic energy toward TKE by the action of shear produced by larger-scale s.g.s. motions. These motions get their kinetic energy from the circulation-scale pressure production, that is, the covariance of the velocity vector and the pressure gradient, both filtered by the separation scale. We have implemented only a rather crude approximation of this term, representing the TKE source from buoyancy-driven nonturbulent s.g.s. flow patterns induced by thermal inhomogeneity at the lower boundary, which may be forced by s.g.s. orography or patches of different land use. However, it prevents TKE from vanishing completely in strongly stable situations without any vertical wind shear.
Even this extended set of closed second-order equations results in a flux-gradient representation for the vertical flux densities, with diffusion coefficients KM = lSMq and KH = lSHq for momentum and scalar fluxes, respectively, when applying the horizontal boundary layer approximation. Here q = √(2e), with e the turbulent kinetic energy, is a measure for the turbulent velocity scale, and l denotes the turbulent master length scale, an integral value representing the whole turbulent spectrum in the parameterization of the pressure correlation and dissipation terms. It grows linearly in the boundary layer close to a wall and is limited by an asymptotic maximum l∞ according to Blackadar (1962):

l(z) = κz/(1 + κz/l∞),
where κ is the von Kármán constant and z is a shifted vertical distance (being equal to the roughness length at the lower boundary). Here l∞ limits the length scale of isotropic eddies still being part of the turbulent spectrum. By substitution of q into the closed and simplified second-order equations, the so-called stability functions SM and SH turn out to be the solutions of the two linear Eqs. (10) and (11),
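The behavior of the Blackadar master length scale, and its sensitivity to the asymptotic value l∞ highlighted in the abstract, can be illustrated directly (a sketch using the standard Blackadar form l = κz/(1 + κz/l∞); the l∞ values are arbitrary examples):

```python
def blackadar_length(z, l_inf, kappa=0.4):
    """Blackadar (1962) master length scale l = kappa*z / (1 + kappa*z / l_inf):
    grows linearly near the surface and saturates at l_inf aloft."""
    return kappa * z / (1.0 + kappa * z / l_inf)

# near the surface, l ~ kappa*z; far above, l approaches l_inf
assert abs(blackadar_length(1.0, l_inf=500.0) - 0.4) < 1e-3
assert abs(blackadar_length(1.0e6, l_inf=500.0) - 500.0) < 1.0
# a larger l_inf gives more mixing aloft, which is why l_inf is a
# sensitive parameter for convection initiation (cf. section 4)
assert blackadar_length(1000.0, 500.0) < blackadar_length(1000.0, 1000.0)
```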
where the following standard closure parameters are used: αM = 0.92, αH = 0.74, cM = 0.08, cH = 0, αMM = 16.6, and αHH = 10.1. These equations contain the dimensionless total shear forcing

GM = (l²/q²)[(∂u/∂z)² + (∂υ/∂z)² + Γ],

where Γ describes the shear forcing by s.g.s. nonturbulent circulations. Furthermore,
GH = −(l²/q²)(g/Θ̂υ)(∂Θ̂υ/∂z)

is the dimensionless buoyancy forcing of TKE. A hat denotes density-weighted turbulent averaging, in contrast to the usual averaging denoted by a bar. In the above equations
certain thermodynamic coefficients are used to express the vertical gradient of the virtual potential temperature in terms of the conserved variables; they involve the virtual correction factor and the slope ∂qsat/∂T of the water vapor saturation function. Here rc denotes the saturation fraction (cloud cover); Rd and Rυ denote the gas constants of dry air and water vapor, respectively; Lc is the latent heat of condensation; and cpd is the heat capacity of dry air at constant pressure.
Finally, the TKE equation reads

d/dt(q²/2) = ∂/∂z[Kq ∂/∂z(q²/2)] + KM[(∂u/∂z)² + (∂υ/∂z)²] + KMΓ − KH(g/Θ̂υ)(∂Θ̂υ/∂z) − q³/(αMMl),
with the further diffusion coefficient Kq = lSqq and Sq ≈ 0.2. The terms on the right-hand side describe vertical diffusion, shear production, “circulation” production, buoyancy production, and dissipation of TKE, respectively, whereas the advection of TKE has been neglected up to now.
Since the solution of Eqs. (10) and (11) has a singularity for large negative GH, this solution is only used for stable stratification where it can be applied without restrictions. However, in the nonstable case, we use a different solution that can be obtained if q is expressed by a quasi-diagnostic TKE equation:
where γ is taken from the previous time step. Inserting this into the linear Eqs. (10) and (11) results in a level-2.0-type solution. Although this solution does not have any singularities for nonstable stratification, it does not exist for overcritical Richardson numbers. Thus, we need to apply different solutions for different stratification regimes.
This one-equation TKE closure must be complemented by a surface-to-atmosphere transfer scheme describing the fluxes of momentum, heat, and moisture at the bottom boundary. For that purpose we use a special development closely related to our turbulence scheme estimating transport resistances between the rigid surface of the earth and the lowest model level. Assuming this transfer layer to be a constant flux layer, the resistances are vertical integrals of the reciprocal diffusion coefficients between these two levels. We do the integration by applying our turbulence scheme to the bottom boundary of the atmospheric model (defined to be just above the laminar layer as well as the roughness layer of the rigid surface) and performing an interpolation of the turbulent velocity scales qSH and qSM between the bottom level and the lowest level where the turbulence scheme provides these values. For momentum we use only this turbulent resistance, whereas for scalars an additional roughness layer resistance (including a laminar part) is calculated depending on roughness length and a surface Reynolds number. Vertical gradients at the bottom level are calculated from dimensionless resistance values of the previous time step. This transfer scheme does not make use of empirical profile functions, rather it is mainly an application of the turbulence scheme and benefits from all the generalizations implemented in that scheme.
3) Shallow convection
Although we assume that precipitating deep convection can be explicitly simulated by the model, small-scale shallow convection still has to be parameterized. Shallow convection is especially important because of its contribution to the heat and moisture transport by nonlocal fluxes. In principle this could be treated as part of the PBL scheme [see, e.g., the combined eddy-diffusivity mass-flux model of Siebesma et al. (2007)], but for the COSMO model such an extended PBL scheme is not yet available. Instead, a simplified mass-flux convection scheme based on the parameterization of Tiedtke (1989) is used. The original Tiedtke scheme distinguishes three types of convection (deep, midlevel, and shallow), and the part for shallow convection can easily be extracted and used as a parameterization of shallow convection alone. To do so, we had to define a threshold for the cloud depth of shallow convection, which is set somewhat arbitrarily to Δp = 250 hPa. Experiments and case studies showed that such a parameterization of shallow convection is, on the one hand, crucial to avoid an overprediction of boundary layer clouds in certain situations (Doms and Förstner 2004). On the other hand, the lack of interaction between the PBL scheme, the shallow convection scheme, and the explicit deep convection seems to result in a suppression or underestimation of deep convection in some cases. This is because the vertical moisture transport by the parameterized shallow convection leads to a decrease in surface-based CAPE and, because of the decoupling of parameterized shallow and explicit deep convection, the transition from shallow to deep convection currently cannot be represented in a realistic way in the model. But although the shallow-to-deep convection transition may be poorly represented, the shallow convection scheme acts to moisten the environment above the boundary layer and, by doing so, paves the way for deep convection.
4) Other physical parameterizations
The radiation scheme in the COSMO model is a two-stream parameterization (Ritter and Geleyn 1992), which, for our 7-km model version COSMO-EU, is called only once per hour; the tendencies are then linearly interpolated in time. To increase the temporal resolution to 15 min at the same computational cost, the radiation scheme can now, as an option, be calculated on a coarser grid (i.e., 2 × 2 grid points are averaged before calling the radiation scheme), and the resulting radiation fluxes are then used after rescaling them with the surface properties of the individual grid points.
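The 2 × 2 averaging step of this coarse-grid option can be sketched as follows (a generic illustration, not the COSMO implementation; the subsequent flux rescaling with individual surface properties is described in the text and omitted here):

```python
import numpy as np

def block_average_2x2(field):
    """Average 2x2 blocks of grid points (assumes even array dimensions),
    producing the coarse grid on which the radiation scheme would be called."""
    ny, nx = field.shape
    return field.reshape(ny // 2, 2, nx // 2, 2).mean(axis=(1, 3))

# a small synthetic temperature field on the fine grid
rng = np.random.default_rng(1)
t_fine = 280.0 + 5.0 * rng.random((4, 4))
t_coarse = block_average_2x2(t_fine)
assert t_coarse.shape == (2, 2)               # 4x fewer radiation calls
assert np.isclose(t_coarse.mean(), t_fine.mean())   # domain mean preserved
```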
3. The operational COSMO-DE
a. Model setup
With 421 × 461 grid points the COSMO-DE domain (Fig. 3) covers about 1200 × 1300 km2 of central Europe, including Germany, Switzerland, Austria, the Netherlands, Belgium, parts of the neighboring countries, and most of the Alps. It uses 50 model layers with a stretched vertical grid. The lowest level lies at 10 m above ground, and the model top is at 22 km above mean sea level. The main goal of COSMO-DE is to provide very short-range numerical forecasts of severe weather. Therefore, a rapid update cycle of 21-h forecasts is used, with new forecasts every 3 h. Because of the short data cutoff time of 30 min used for COSMO-DE, it is possible to complete the 0000 UTC forecast before 0100 UTC using 12 NEC SX-9 vector processors (a single main run takes only 19 min of wall-clock time).
The data assimilation system of COSMO-DE uses a nudging approach that includes the assimilation of high-resolution radar data by latent heat nudging (Stephan et al. 2008).
Boundary conditions for COSMO-DE are provided by the 7-km COSMO model (COSMO-EU), which itself is nested into the global model GME (Majewski et al. 2002), currently run with 30-km grid spacing. Note that these driving models lag 3 h behind (i.e., the 0000 UTC COSMO-DE run has boundary conditions from a COSMO-EU run based on the 2100 UTC analysis).
Convective-scale configurations of the COSMO model similar to COSMO-DE are now used for operational NWP in Switzerland and Italy. (The configurations of the national COSMO setups are available online at http://www.cosmo-model.org/content/tasks/operational/default.htm.)
b. Examples and verification of operational forecasts
Since its operational implementation in April 2007, the convective-scale COSMO-DE has proven its ability to forecast deep convection successfully, especially severe convective storms that are associated with frontal systems (forced convection). For example, COSMO-DE has performed quite well for many heavy convection events during 2007, including the explicit simulation of supercell storms. As an example, Fig. 4 shows the observed versus model-derived radar reflectivities at 1700 UTC 15 June 2007; depicted is the 0000 UTC + 17 h operational forecast. Radar reflectivity is calculated in the Rayleigh approximation (Sauvageot 1992; Seifert and Beheng 2006b), which is sufficient for a semiquantitative validation of a microphysics scheme that does not predict hail. On this day a cold front triggered severe convection along the front, especially in southern Germany, and prefrontal convection also occurred. Heavy hail was observed especially in southern Germany, and overall four tornadoes were reported in Germany, including an F2 tornado near the city of Bremen, which was probably caused by a long-lived prefrontal supercell. Although the COSMO-DE forecast shown in Fig. 4 is not able to predict individual convective cells in a deterministic sense, it gives good and sufficient guidance on when and where severe convection might occur. For example, the COSMO-DE forecast simulates long-lived supercells in northern Germany.
As another typical example the case of 25 April 2008 is shown in Fig. 5. During the night from 24 to 25 April a cold front passed Germany, clearly visible in the 0300 UTC observed radar reflectivity, and well represented in the COSMO-DE forecast. During the following day scattered postfrontal convection developed over Germany, when the cold maritime air was advected over land. In the radar observations these smaller-scale shallow cumulus and cumulus congestus clouds are visible by their resulting precipitation as reflectivities up to 40 dBZ, but most of the signals are much weaker. The COSMO-DE model is to some extent able to simulate this postfrontal convection, but it simulates only a few larger convective cells, and the postfrontal convection is not widespread enough compared to the observations. On the one hand it is encouraging that the model, even at this relatively coarse resolution compared to the size of the convective cells, is actually able to simulate this cloud regime without an additional artificial trigger. On the other hand, the underestimation of the spatial extent is an obvious problem in this forecast, which is most likely due to the insufficient horizontal resolution.
Systematic verification of forecasts of the three models GME (30-km grid spacing), COSMO-EU (7 km), and COSMO-DE (2.8 km) against high-resolution hourly surface observations over a season provides quantitative estimates of the gain in simulation quality with improved model resolution.
Figure 6 compares the observed mean diurnal cycle of 10-m wind speed for a region with complex orography, here the Black Forest Mountains in southwestern Germany with about 25 SYNOP stations, with forecasts by GME, COSMO-EU, and COSMO-DE for the period June to August 2009. The high-resolution COSMO-DE performs best in predicting the increase of the mean wind speed by about 1.4 m s−1 from the early-morning minimum to the afternoon maximum, probably because the COSMO-DE orography matches the real one most closely and the diurnal evolution of the planetary boundary layer, with increased downward momentum transport during daytime, is captured well.
Another distinct improvement has been shown in the forecast of lee waves (e.g., for gliding forecasts). Lee waves (or resonance waves) have typical wavelengths on the order of 10 km and can therefore be simulated with COSMO-DE but not with the 7-km COSMO-EU.
In summer, the observed mean diurnal cycle of hourly precipitation rates in southwestern Germany (Fig. 7) is characterized by two maxima, one between 0700 and 0800 UTC and the other between 1500 and 1600 UTC, with a distinct minimum in between; these are mainly due to the temporal evolution of deep convection. Models with parameterized deep convection like GME and COSMO-EU instead tend to simulate a single maximum close to local noon, several hours ahead of the observed afternoon maximum. COSMO-DE, however, which is able to simulate the life cycle of deep convection explicitly, predicts hourly precipitation rates reasonably close to reality, with a phase delay of only about 1 h.
In winter, the horizontal distribution of monthly accumulated precipitation is strongly modulated by the complex structure of the lower mountain ranges in Germany, which favor precipitation maxima on the upwind slopes and mountain tops and minima in the lee (Fig. 8). COSMO-DE is able to represent the orographic patterns, but shows an overestimation of precipitation amounts, which is most visible at the mountain tops. This overestimation occurs for all events with more than 5 mm (24 h)−1, as shown by the precipitation distribution function in Fig. 9. This mass-weighted distribution is defined as P(R) = NR p(R), where p(R) is the probability density function of precipitation amount R, and N is the total number of events. Note that the 24-h accumulation of the model used in this distribution is a sum of the 6–18-h forecasts from 0000 to 1200 UTC. The overestimation of wintertime precipitation is a well-known problem of all operational implementations of the COSMO model (Dierer et al. 2009). Although some of the difference might be attributed to uncertainties in the observations (e.g., this dataset does not apply a correction for undercatch due to high wind speed or snowfall), some problems with wintertime precipitation remain to be solved even at the 2.8-km grid spacing of COSMO-DE.
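The mass-weighted distribution can be computed from a sample of 24-h accumulations in a few lines. The following sketch is illustrative only (function name and bin choices are ours, not the operational verification code):

```python
import numpy as np

def mass_weighted_distribution(amounts, bin_edges):
    """Mass-weighted precipitation distribution P(R) = N R p(R).

    amounts   : 1D array of 24-h precipitation sums per event (mm)
    bin_edges : bin edges for the precipitation amount R (mm)
    Returns the bin centers and P(R) evaluated at those centers.
    """
    amounts = np.asarray(amounts, dtype=float)
    counts, edges = np.histogram(amounts, bins=bin_edges)
    n_total = amounts.size                     # N: total number of events
    p_r = counts / (n_total * np.diff(edges))  # empirical pdf p(R)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, n_total * centers * p_r    # P(R) = N R p(R)
```

Integrating P(R) over R approximates the total accumulated precipitation of the sample, which is why this weighting emphasizes the rare, heavy events that dominate the totals.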
4. Boundary layer sensitivities
Although COSMO-DE has performed quite well for many events, it has also shown some weaknesses. The most prominent problems have been deficiencies in forecasting moist convection in weakly forced situations (e.g., summertime airmass-type convection or small-scale orographically-induced deep convection). These events have often been underestimated or completely missed by the COSMO-DE forecasts. A detailed investigation and extensive sensitivity studies have shown that the problem of missing explicit convection was often caused by a too stable or too cold boundary layer.
For convective-scale NWP, the parameterization of the planetary boundary layer (PBL) is of great importance. PBL schemes in operational NWP models are often well tuned for a near-neutral boundary layer, but may show some deficiencies for convective PBLs. This can lead to serious problems in a convective-scale NWP model that has to predict the initiation of deep convection explicitly. We will show some long-term sensitivity studies of the effects of certain modifications of the COSMO level-2.5 TKE scheme and discuss the impact on the diurnal cycle of convection and quantitative precipitation forecasts.
Although deep convection is simulated explicitly on the 2.8-km grid, the PBL needs a subgrid parameterization in all NWP models, as described in section 2. As pointed out by Skamarock and Klemp (2008) and others, there are fundamental difficulties in deriving an appropriate PBL scheme for grid sizes between 100 m and 5 km. In this range of grid sizes the two currently available parameterization approaches with a widely accepted theoretical basis, Reynolds averaging and large-eddy simulation (LES), do not truly apply. In operational NWP an additional problem arises because the operational schemes have been developed and “tuned” over the last decades for considerably larger grid spacings of 7–20 km. At those resolutions deep convection needs to be parameterized, and this decouples, to some extent, the initiation of convection from the actual PBL structure, meaning that a convection parameterization can forecast convection at a grid point even if the predicted PBL structure itself would be too stable. A convective-scale model like COSMO-DE is not as forgiving as a coarser model when it comes to the representation of the PBL. Especially for airmass convection, COSMO-DE requires a dry unstable convective boundary layer to initiate deep convection and simulate the subsequent precipitation. This poses a great challenge for operational convective-scale NWP models.
One remarkable sensitivity that we found in the COSMO-DE simulations is that the asymptotic length scale l∞ in the classic Blackadar–Deardorff formulation of the turbulent mixing length l, Eq. (9), often determines when and where convection is initiated in the forecast. The standard value used in the COSMO model TKE scheme is l∞ = 200 m, which is not an unusually large value in NWP models [e.g., the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecast System (IFS) model uses l∞ = 150 m operationally]. Reducing the asymptotic mixing length leads to less vertical mixing, larger vertical gradients, and a more unstable PBL, which can obviously help to initiate deep convection. This is shown in Fig. 10 for the case of 9 June 2007, a situation when severe convection occurred over the northwestern part of Germany, probably initiated by a prefrontal convergence line. In the control run using l∞ = 200 m (Fig. 10b), the prefrontal convection is missing almost completely, while the sensitivity run with the reduced l∞ = 60 m (Fig. 10c) predicts an intense prefrontal convective system. The reduction of l∞ clearly fosters the initiation of convection and can help to improve the precipitation forecast.
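Equation (9) is not reproduced here, but the qualitative role of l∞ can be illustrated with the classic neutral Blackadar form, in which the mixing length grows as κz near the surface and saturates at l∞ aloft. This is a sketch only; the operational COSMO TKE scheme adds stability-dependent corrections not included here:

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def blackadar_mixing_length(z, l_inf):
    """Neutral Blackadar mixing length l = kappa*z / (1 + kappa*z/l_inf),
    so that l -> kappa*z near the surface and l -> l_inf far above it."""
    z = np.asarray(z, dtype=float)
    return KAPPA * z / (1.0 + KAPPA * z / l_inf)

# Reducing l_inf from 200 m to 60 m caps the mixing length aloft and
# thus weakens turbulent mixing in the upper part of the boundary layer:
l_control = blackadar_mixing_length(1000.0, 200.0)
l_reduced = blackadar_mixing_length(1000.0, 60.0)
```

Near the surface both settings give essentially κz, so the reduction mainly limits mixing in the upper boundary layer, steepening vertical gradients there.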
Another part of the PBL scheme that affects the initiation of convection is the parameterization of subgrid-scale cloudiness, which is used to transform from the prognostic model variables (temperature T and water vapor mixing ratio qυ) to the moist conserved variables (liquid water potential temperature θl and total water content qt). To estimate the subgrid cloud cover and the subgrid contribution to qt, a statistical cloud scheme is applied (Sommeria and Deardorff 1977, hereafter SD77). In the operational version the formulation differs slightly from the SD77 formulation (e.g., subgrid clouds are assumed to form earlier, which compares better with satellite observations). In Fig. 10d we used the original SD77 values and for this case this leads to a better initiation and an intensification of the prefrontal convection.
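A minimal Gaussian statistical cloud scheme in the spirit of SD77 diagnoses the subgrid cloud fraction from the normalized saturation deficit. This sketch is illustrative only; as noted above, the operational COSMO formulation differs in detail (e.g., subgrid clouds form earlier):

```python
import math

def sd77_cloud_fraction(qt_mean, q_sat, sigma_s):
    """Subgrid cloud cover from a Gaussian distribution of total water:
    C = 0.5 * (1 + erf(Q1 / sqrt(2))), with Q1 = (qt - qs) / sigma_s.

    qt_mean : grid-mean total water mixing ratio (kg/kg)
    q_sat   : saturation mixing ratio at the grid-mean state (kg/kg)
    sigma_s : std. deviation of the subgrid saturation deficit (kg/kg)
    """
    q1 = (qt_mean - q_sat) / sigma_s
    return 0.5 * (1.0 + math.erf(q1 / math.sqrt(2.0)))
```

At exact grid-mean saturation the scheme returns 50% cloud cover; shifting the onset of cloudiness relative to this Gaussian baseline is precisely the kind of tuning discussed in the text.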
To help understand the reason for the missing initiation of prefrontal convection in the operational setup, Fig. 11 shows profiles of temperature and moisture at the location of Emden, near the border between Germany and the Netherlands. The observed sounding is very humid up to 700 hPa and shows a typical summertime structure with high surface temperature, a shallow mixed layer with a small inversion at 900 hPa, and high CAPE. In the simulation using the standard PBL setup, the most obvious error is a pronounced inversion at 500 hPa, which suggests that the TKE scheme is mixing the layers below it too strongly. It is not clear, though, how important this inversion is in suppressing the deep convection. Probably more important is that the small PBL inversion at 900 hPa is also slightly stronger, and the surface temperature slightly lower, than observed. The profiles of the simulation using the modified TKE scheme are in much better agreement with the observed sounding. The erroneous inversion at midlevels is removed, and the inversion at 900 hPa is smoother than in the original COSMO-DE forecast. Although it is probably not possible to explain the difference between the two simulations by these profiles alone, they confirm that the modified PBL scheme leads to profiles that are more realistic and more likely to support deep convection.
This case study is only one example, chosen to elucidate possible mechanisms by which changes in the boundary layer parameterization may affect the vertical structure and temporal evolution of the PBL. Especially when boundary layer clouds and inversions are involved, the possible feedbacks and phenomena are complicated and numerous. For example, in some cases the stronger mixing of the original formulation with l∞ = 200 m may remove an inversion more efficiently than the version with reduced diffusion. Shallow convection can also have a decisive effect on the initiation of deep convection. For example, turning off the shallow convection parameterization will often lead to a moister PBL that, on the one hand, supports deep convection better. On the other hand, a moister PBL may develop boundary layer clouds that efficiently reduce the incoming solar radiation and suppress deep convection. Which chain of processes dominates on average can only be judged from an experiment over many weeks.
This experimental configuration of the PBL scheme, l∞ = 60 m with the original SD77 subgrid cloudiness, has then been tested in further experiments running the full COSMO-DE system, including data assimilation, in a hindcast mode. Here we show results for the period 1 May–30 June 2008. This period was typical of late spring and early summer in Germany, with many days characterized by weakly forced nonequilibrium deep convection. Figure 12 shows averaged Hovmöller diagrams of precipitation frequency for the whole period of two months. These diagrams were created by counting the number of events, defined as the precipitation rate, averaged in the y direction (north–south), exceeding a threshold of 0.05 mm h−1. Since the COSMO model uses a rotated latitude–longitude grid, the y direction does not exactly correspond to a longitude. The domain shown in the Hovmöller diagram is a subdomain of COSMO-DE that is covered by the German radar network; the westernmost points correspond roughly to a north–south line through Emden, Koblenz, and Freiburg, Germany, while the easternmost part is a transect from Salzburg, Austria, through Chemnitz, Germany, and Potsdam to Greifswald, Germany. As expected, the radar data show a clear diurnal mode with a maximum precipitation frequency between 1300 and 1600 UTC, which corresponds to the afternoon precipitating convection. This afternoon convective mode is somewhat more pronounced in the western part of the domain.
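The construction of such a frequency Hovmöller diagram can be sketched as follows. This is one plausible reading of the procedure described above; the array layout and function name are illustrative:

```python
import numpy as np

def precip_frequency_hovmoeller(rates, threshold=0.05):
    """Averaged Hovmöller diagram of precipitation frequency.

    rates : hourly precipitation rates, shape (n_days, 24, ny, nx) in
            mm/h, with x roughly west-east on the rotated COSMO grid.
    For each hour of day and each x column, the rate is averaged in the
    y direction and counted as an event if it exceeds `threshold`; the
    event frequency is then averaged over all days.
    """
    y_mean = rates.mean(axis=2)        # (n_days, 24, nx)
    events = y_mean > threshold
    return events.mean(axis=0)         # frequency of events, (24, nx)
```

Plotting the resulting (hour of day, x) array directly yields the time-longitude frequency pattern discussed in the text.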
The corresponding Hovmöller diagram from the operational COSMO-DE shows the model deficiencies regarding the initiation of convection very clearly. An afternoon maximum in precipitation frequency is not visible in the simulations; instead the maximum precipitation frequency occurs erroneously during nighttime. Here we have to add that the chosen precipitation threshold of 0.05 mm h−1 used for these Hovmöller plots is very low (i.e., it also includes drizzle events). A detailed inspection of the COSMO-DE forecasts reveals that the nighttime precipitation is actually dominated by drizzle from nocturnal boundary layer clouds. Those clouds are mainly an artifact of a too strong vertical turbulent mixing in the entrainment and residual layer. In the hindcast experiment using the modified PBL scheme in COSMO-DE the nocturnal drizzle is significantly reduced, and an afternoon convective mode starts to emerge. Although the convective mode is still much too weak, the simulations are clearly improved compared to the operational setup. Even the decrease in precipitation frequency from west to east seems to be captured by the model. More detailed case studies (not shown here) actually reveal that both problems—the overestimation of nocturnal boundary layer clouds and the underestimation of deep convection in the afternoon—are closely coupled in COSMO-DE. An overprediction of nocturnal and early morning cloud layers reduces the incoming solar radiation in the morning, leads to an underestimation and delay of the diurnal cycle of near-surface temperature, and, hence, makes it unlikely that convection is triggered in the afternoon.
For a more quantitative verification, the precipitation distribution function, the frequency bias [FBI; Wilks (1995)], and the fractions skill score [FSS; Roberts and Lean (2008)] are useful measures to characterize the performance of the model (Fig. 13). The precipitation distribution shows that COSMO-DE is able to simulate precipitation with intensities similar to those observed, but with a significant underestimation of the frequency of events exceeding 10 mm (12 h)−1. Using the revised PBL parameters, especially the reduced mixing length, shifts the precipitation distribution to higher intensities, as convection is now initiated more often. This leads to an improved representation of the precipitation distribution for higher intensities, which can also be seen in the frequency bias. For the operational COSMO-DE, the FBI decreases for higher thresholds, dropping from above 0.9 for low thresholds to about 0.6 for a precipitation accumulation of 30 mm (12 h)−1, which again demonstrates the serious underestimation of convective precipitation in the operational COSMO-DE. The COSMO-DE experiment with the modified PBL scheme shows a strikingly different behavior with an increase in FBI for higher threshold values, even exceeding 1.0 for 30 mm (12 h)−1. Although an underestimation of precipitation remains in the experimental configuration of COSMO-DE, the advantage compared to the control run is obvious. This is also confirmed by the fractions skill score, which is calculated here for 21 × 21 grid points (i.e., an area of roughly 60 km × 60 km). The experiment shows better FSS values for all precipitation thresholds; especially for high thresholds the difference is large, about 0.3 compared to 0.1. Although an FSS of 0.3 is still low, this is a considerable improvement. For convective-scale models one has to be very careful about the statistical significance of verification results.
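For reference, the two scores can be computed as follows. This is a generic implementation after Roberts and Lean (2008) and Wilks (1995), not the verification code used here; in particular, the zero-padded moving average at the domain edges is a simplification:

```python
import numpy as np

def neighborhood_fractions(exceed, n):
    """Fraction of points above threshold in an n x n window (n odd),
    with zero padding at the edges, via an integral image."""
    pad = n // 2
    b = np.pad(exceed.astype(float), pad)
    c = b.cumsum(axis=0).cumsum(axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    ny, nx = exceed.shape
    return (c[n:n + ny, n:n + nx] - c[:ny, n:n + nx]
            - c[n:n + ny, :nx] + c[:ny, :nx]) / n**2

def fss(forecast, observed, threshold, n=21):
    """Fractions skill score on an n x n neighborhood (Roberts and Lean 2008)."""
    f = neighborhood_fractions(forecast >= threshold, n)
    o = neighborhood_fractions(observed >= threshold, n)
    ref = np.mean(f**2) + np.mean(o**2)
    return 1.0 - np.mean((f - o)**2) / ref if ref > 0 else np.nan

def frequency_bias(forecast, observed, threshold):
    """FBI: number of forecast events divided by number of observed events."""
    return np.sum(forecast >= threshold) / np.sum(observed >= threshold)
```

An FSS of 1 indicates a perfect match of event fractions at the chosen scale, 0 indicates no overlap of smoothed event fields; an FBI below 1 indicates the model produces fewer events than observed at that threshold.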
Figure 13 shows the results of a statistical test with the null hypothesis that the differences of the scores of the two model versions are zero. Using the resampling (bootstrap) procedure of Hamill (1999) suggests that all scores are significant at the 5% level (indicated by the inner error bars/boxes) when the resampling is performed over models only [as suggested by Hamill (1999)]. Even if the resampled distribution is constructed by randomly choosing models and days (outer error bars; i.e., the finite length of the time series is taken into account), the results are still statistically significant. This proves that the two model versions with slightly different PBL formulations indeed result in a different initiation of deep convection and significantly different precipitation forecasts.
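The resampling over models can be sketched as a paired sign-flip test in the spirit of Hamill (1999); the parameters below are illustrative, not the exact procedure behind Fig. 13:

```python
import numpy as np

def resample_score_difference(scores_a, scores_b, n_boot=10000, seed=0):
    """Test the null hypothesis that two model versions' daily scores
    differ only by chance: for each bootstrap replicate, randomly swap
    which model each day's score is attributed to (a sign flip of the
    paired daily difference) and record the mean difference.

    Returns the actual mean difference and the (2.5th, 97.5th)
    percentiles of the resampled null distribution; a mean difference
    outside this interval is significant at the 5% level.
    """
    rng = np.random.default_rng(seed)
    diff = np.asarray(scores_a, float) - np.asarray(scores_b, float)
    signs = rng.choice([-1.0, 1.0], size=(n_boot, diff.size))
    null_means = (signs * diff).mean(axis=1)
    lo, hi = np.percentile(null_means, [2.5, 97.5])
    return diff.mean(), (lo, hi)
```

Resampling over days as well (the outer error bars) would additionally draw the days themselves with replacement, widening the null interval to account for the finite length of the time series.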
More extended verification over two summer periods (not shown here) reveals that the modified PBL scheme slightly deteriorates the forecasts of large-scale stratiform (e.g., frontal) precipitation and, more important, introduces a significant warm temperature bias in the lower troposphere, but the improved forecasts of deep convection outweigh these disadvantages. Therefore, the experimental setup with the modified PBL parameters was introduced into the operational COSMO-DE on 10 September 2008 and has proven its usefulness since then.
5. Cloud microphysical sensitivities
The explicit simulation of deep convection poses a special challenge for the microphysical parameterization, as the cloud microphysics of convective storms is much more complicated than that of stratiform clouds. Because of this complexity it may even be attractive to use multimoment schemes that explicitly predict the number or mean size of the hydrometeor species.
In the following we present a sensitivity study comparing the operational Lin-type one-moment microphysics scheme described in section 2 with the more sophisticated two-moment scheme of Seifert and Beheng (2006a). The two-moment scheme has recently been extended and improved; for example, it now applies the nucleation–activation parameterization of Segal and Khain (2006), includes a separate hail category (Blahak 2008), and a new parameterization of evaporation of raindrops (Seifert 2008). The simulations of this sensitivity study are initialized by the operational COSMO-DE nudging analysis, which applies the one-moment scheme.
In general, the 2.8-km resolution COSMO-DE forecasts are quite robust to the microphysical assumptions and the chosen microphysics scheme, which confirms similar findings for convective-scale forecasts using the WRF model (Seifert and Weisman 2005; Weisman et al. 2008). We have simulated June–August 2007 with both microphysical schemes, and only for very few cases have we found remarkable differences between the one-moment and two-moment representations of the cloud microphysics or, when using the two-moment scheme, remarkable sensitivity to the assumed aerosol or cloud condensation nuclei (CCN) properties.
One such example is 20 July 2007, shown in Fig. 14. On this day an impressive squall line developed along a cold front extending over almost all of Germany. The operational one-moment scheme was able to simulate at least some parts of the squall line quite successfully, although Fig. 14b shows that the spatial extent of accumulated precipitation exceeding 1 mm (24 h)−1 is underestimated. This is significantly improved when using the two-moment scheme, which seems able to describe the organization of the squall line and the development of secondary convection better than the one-moment scheme. This is most likely due to a better representation of cold pool formation: predicting the mean raindrop size allows a more accurate treatment of the evaporation of rain (Seifert 2008; Morrison et al. 2009).
To investigate the impact of CCN assumptions we have tested “low” and “very high” CCN concentrations. The detailed setup is described in Noppel et al. (2010, cf. their Table 1). Assuming clean aerosol conditions in the two-moment scheme (Fig. 14c) results in a somewhat more intense squall line compared to the one-moment scheme. Compared to the polluted aerosol assumptions (Fig. 14d), the precipitation pattern is more localized and spotty in the clean case, while the polluted CCN assumptions yield a somewhat smoother precipitation field. This can easily be explained by the fact that clean aerosol conditions favor rapid rain formation via warm rain processes, while polluted conditions lead to smaller particle sizes, which slows down precipitation formation; as a consequence, more condensate ends up in the stratiform region of the convective system. In this case the polluted assumption leads to a better agreement with the observations. Note that these results are quite sensitive to the detailed assumptions about activation, entrainment, and vertical distribution of aerosols. In addition, this case has a pronounced sensitivity to the initial condition (not shown here).
A more detailed investigation of the microphysical sensitivities and the aerosol–cloud–precipitation effects in COSMO-DE will be presented in a subsequent paper. At present, the benefits from the two-moment microphysical scheme do not outweigh its much higher computational cost in operational NWP. This might be different in other applications, like regional climate modeling, where aerosol–cloud effects may be more important than in short-range NWP applications.
6. Conclusions
We have presented results of the new operational convective-scale NWP system of DWD. Based on the COSMO-DE model with 2.8-km grid spacing, the system is able to predict deep convection explicitly. It can therefore provide improved forecast guidance on the location, timing, and severity of deep convection and improve summertime precipitation forecasts compared to coarser NWP models that apply a parameterization of deep convection.
To achieve this goal, a new dynamical core has been developed that uses an accurate and efficient Runge–Kutta solver. In addition, several extensions and refinements of the physical parameterizations were necessary.
On the convective scale a good parameterization of the boundary layer is absolutely crucial for the success of the forecast, especially for the initiation of deep convection. Our first two years of experience with this new NWP system revealed substantial deficiencies in the simulation of the diurnal mode of convection. Those deficiencies were probably caused by an inappropriate tuning of the PBL (TKE) scheme that resulted in overly active mixing, the formation of excessive nocturnal cloud layers, too stable near-surface stratification during the day, and unrealistic low- and midlevel inversions. Reducing the asymptotic mixing length and revising the assumptions about subgrid cloudiness in the PBL scheme leads to a significant improvement, although this parameter tuning cannot yet fully solve the problems related to the PBL scheme. In the future the PBL scheme, especially the formulation of the mixing length in the cloud-topped boundary layer, needs to be considerably improved. Increased vertical resolution might also help, or even be necessary, to correctly represent the PBL. NWP models that apply a convection parameterization are much less sensitive to these PBL assumptions, since the triggering of deep convection is parameterized within the convection scheme. A convective-scale model needs to do the initiation explicitly, which poses a challenge for the boundary layer parameterization.
The cloud microphysics of deep convection is in some sense more complicated than that of stratiform clouds and demands a more sophisticated parameterization, maybe even a multimoment approach. We have found that for most simulated cases the results, at least quantities like a 12-h accumulation of surface precipitation, are quite insensitive to changes in the cloud microphysics (although individual cells and precipitation structures may differ considerably). Most sensitive to microphysical assumptions are probably cases when secondary convection is triggered by the outflow or cold pools of preexisting convective cells. In those cases a more sophisticated parameterization, like the Seifert and Beheng (2006a) two-moment scheme, seems to be able to improve the forecast.
Our experience with the operational convective-scale NWP model COSMO-DE indicates that COSMO-DE is able to forecast deep convection explicitly. Since individual convective cells are hardly predictable, the COSMO-DE model will be extended to a convective-scale ensemble prediction system in the near future.
The development of COSMO-DE was only made possible by a team of coworkers. We want to thank Christoph Schraff, Klaus Stephan, Stephan Klink, Kathleen Helmert, and Birgit Hassler for developments in the area of data assimilation; Ulrich Damrath, Claus-Jürgen Lenz, and Peter Prohl for verification of the model; and Ulrich Schättler and Thomas Hanisch for assistance with the operational environment. Finally, we want to remember Günther Doms, who initiated the project and died suddenly and unexpectedly in 2004. Several valuable hints came from J. Kain, G. Bryan, and an anonymous reviewer.