1. Introduction
Mesoscale convective systems (MCSs) are groups of thunderstorms of length O(100) km in at least one direction (American Meteorological Society 2014). These predominantly summertime systems provide the Great Plains of the United States with much of their warm-season rainfall (Fritsch et al. 1986). A subset of these MCSs that contains bowing features, however, brings the risks of damaging winds, 2.5–5-cm (1–2 in.) hail, and flash flooding (Gallus et al. 2008). Conspicuous by their convex structure in radar reflectivity (Fig. 1), bow echoes and line echo wave patterns (LEWPs) are associated with some of the strongest nontornadic wind events in the plains, sometimes meeting derecho (damaging straight-line wind) criteria (Johns and Hirt 1987). A bowing structure often develops when stratiform precipitation behind a quasi-linear convective system lowers a rear-inflow jet through evaporative cooling and consequent negative buoyancy (Markowski and Richardson 2010). The cold pool accelerates as a result of the buoyancy gradient at its leading edge and is maintained by the jet through advection of drier air. Development of convective cells on the downshear side of the cold pool creates the distinctive bowing shape (Weisman 1993).
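For context, the forward speed c of such a cold pool is commonly estimated from its vertically integrated buoyancy deficit, a standard density-current scaling (e.g., Markowski and Richardson 2010); the symbols below are generic rather than taken from the studies cited above:

c = \sqrt{2 \int_{0}^{H} (-B)\, dz} \approx \sqrt{2\, g\, H\, \Delta\theta_{\rho} / \theta_{\rho}},

where B is the buoyancy within a cold pool of depth H, and \Delta\theta_{\rho} is the density potential temperature deficit relative to the environment. Larger deficits thus imply faster cold pools and, in the conceptual model above, earlier and more pronounced bowing.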
Fig. 1. Observed NEXRAD composite radar reflectivity for the two cases found in the present study, merged over three times each. (a) The evolution of a single cell (2300 UTC) into a bow echo (0300 UTC) and, finally, into a bow-and-arrow structure (0600 UTC; note the arrow feature farther west), on 26–27 May 2006 (NEKS06). (b) The development of a linear MCS (2200 UTC) into a bow echo (0200 and 0600 UTC), on 15–16 Aug 2013 (KSOK13). States are labeled for reference (see Fig. 4 for context).
Bow echoes and LEWPs, more often than other MCSs, are poorly simulated by numerical model forecasts (Keene and Schumacher 2013; Snively and Gallus 2014). Snively and Gallus (2014) found the 0–6-km shear was too weak, and the potential temperatures aloft too high, in their deterministic forecasts of bowing segments. The reduced skill of the model was usually related to simulation of the incorrect MCS mode. Snively and Gallus (2014) also surmised that simulations involving elevated convection may have performed the worst of those in the study. In two studies, Adams-Selin et al. found that the performance of numerical simulations, both idealized (Adams-Selin et al. 2013a) and of an observed system (Adams-Selin et al. 2013b), was acutely sensitive to the chosen microphysical parameterization. Specifically, when graupel hydrometeors were simulated as lighter and greater in number (i.e., graupel-like rather than hail-like), they resulted in a stronger cold pool and rear-inflow jet, and hence the bowing initiated earlier. The chosen parameterization scheme also strongly affected the magnitude and areal coverage of the precipitation, system speed, and wind gusts. But it is unclear whether these findings can be applied generally when considering variations in the synoptic regime, initial condition (IC) dataset, and model configuration.
While numerical weather prediction (NWP) continues its march toward the explicit resolution of smaller and smaller convective features, there are a number of obstacles en route that may inhibit, or even preclude, successful numerical forecasts of bow echoes at a given lead time. Computer models are incomplete and imperfect: while smaller phenomena are resolved explicitly by ever-decreasing grid spacings, there will always be a scale below which wavelengths are truncated, and chaotic, nonlinear processes are implicitly resolved, or parameterized. Parameterization is used in operational NWP models, such as the North American Mesoscale (NAM) model and the Global Forecast System (GFS), to capture the planetary boundary layer (PBL), cloud microphysics, and other subgrid-scale processes. The “spread” of parameterization schemes, each with their own set of biases and random errors, interacts during a simulation without a priori knowledge of the impact on, for example, simulated radar reflectivity structures. In response to this, Adams-Selin et al. (2013b) called for schemes of opposing biases to be combined in operational mixed-physics ensemble systems. However, we cannot be sure that the biases shown in one study can apply generally to all regions, synoptic regimes, seasons, years, etc. For example, when changing the typical hydrometeor characteristics from graupel to hail, Van Weverberg et al. (2011) found increased surface precipitation amounts; in contrast, Gilmore et al. (2004) did not. To account for these biases a priori, Berner et al. (2011) trained their mixed-physics models over a number of months to determine the optimal configuration for spread and skill. This may not be a practical or general approach for operational centers to endorse long term, when one considers the training sensitivity to many factors and the frequent updates to NWP systems and parameterizations themselves.
In addition to model uncertainty, the atmosphere as a partly chaotic system is sensitive to IC uncertainty (Lorenz 1969); from this, Lorenz suggested a theoretical predictability horizon (Palmer et al. 2014 and references therein). When assuming purely chaotic (turbulent) flow, Lorenz estimated predictability to be limited to 1–2 h on scales of 10 km (Lorenz 1969). Fortunately from a forecasting standpoint, forecast models show that the atmosphere has inherent predictability at the mesoscale longer than that proposed by Lorenz. This may be due to known forcings that constrain the solution—high terrain, synoptic-scale fronts (e.g., Anthes et al. 1985)—and stable mechanisms that locally limit error growth, such as the helical flow in supercells (Lilly 1990), and in confluent, weak flow (Oortwijn 1998). In addition, limited-area model forecasts are constrained by (and sensitive to) their lateral boundary conditions (LBCs). Palmer et al. (2014) suggest that skillful forecasts beyond a given scale’s Lorenzian horizon may be possible because of the intermittent nature of chaos in the atmosphere (i.e., its regime dependency). In addition, they argue that Lorenz’s pessimistic estimates are due to the overly simplistic nature of the Lorenz-63 system (Lorenz 1963).
Unfortunately for MCS forecasts, moist convection is very destructive to predictability (Zhang et al. 2003). MCSs that form in the Great Plains even influence global model forecasts of blocking patterns downstream over Europe at the medium range through diabatic destruction of potential vorticity (Rodwell et al. 2013). In addition, diagnosis of substantially damaging IC error is fraught with difficulty as a result of both up- and downscale growth of errors (Durran and Gingrich 2014 and references therein). Notably, the use of coarse-grid IC/LBC datasets to drive convection-allowing ensemble simulations may result in insufficient variance on convective scales (e.g., Schwartz et al. 2014 and references therein); IC perturbations from a global model do not include variance below its truncated scale. Errors first propagate downscale and saturate before growing upscale (Durran and Gingrich 2014). Hence, there is a delay in small-scale variance growth, which impacts particularly the first 6 h of a numerical simulation (Kühnlein et al. 2014), and can yield an underdispersive ensemble (Romine et al. 2014 and references therein).
To address these problems and better sample the spectrum of possible outcomes of the model atmosphere, many forecast centers use a number of different numerical simulations [ensemble forecasts; Leutbecher and Palmer (2008)]. There are different ways of creating members that differ from their control: through mixed-parameterization configurations (e.g., Stensrud et al. 2000), through perturbed ICs and LBCs (e.g., Romine et al. 2014), through multiple NWP dynamical cores or models (e.g., Hagedorn et al. 2012), etc. Recently, studies have yielded a method to inject energy (which may be erroneously dissipated in the model between the resolved and unresolved scales) into the simulation to better account for model error (Shutts 2005). This so-called stochastic kinetic energy backscatter (SKEB) scheme has been shown to improve ensemble spread and ultimately provide a more skillful ensemble mean than a mixed-physics approach (Duda et al. 2016), except at the surface (Berner et al. 2011). Furthermore, when a SKEB scheme was combined with a mixed-parameterization configuration by Berner et al. (2011), the performance was even better. As of version 3.7, WRF parameterizations are deterministic in nature; a stochastic approach is potentially a better way to account for the model error (Palmer 2001). Ensemble forecasts are not only useful for operational centers, but also can provide a larger corpus of “alternative realities” in which to seek the sensitivity of atmospheric phenomena during posterior investigation (e.g., Hanley et al. 2013).
To address the issue of why bowing structures are often more poorly forecast than other MCS modes, we propose four hypotheses (neither exhaustive nor mutually exclusive):
1) bow echoes are inherently less predictable features, perhaps because of the microscale destruction of predictability within the bowing feature itself;
2) bow echoes are embedded in less predictable synoptic-scale regimes;
3) there is a critical deficiency in ICs and LBCs within simulations and forecasts; and
4) there is a critical deficiency in the subgrid-scale processes of the microphysics parameterizations.
Bow echoes are an extreme phenomenon in both rarity and severity, and their specifically local risks (strong wind, flash flooding) do not lend themselves to the smoothing of ensemble means. In this case, choosing the member closest to the ensemble mean (Ancell 2013), perusing postage-stamp plots, or generating the probability of threshold exceedance (Schwartz et al. 2015) is more useful for forecaster interpretation (e.g., Gallus et al. 2016). Rather than focusing on ensemble means or skill-score statistics, the present study will investigate the visual spread of convective mode and radii of curvature in simulated reflectivity, with a secondary focus on surface wind magnitude, coverage, and exceedance probabilities. Note that bowing structures can occur in two ways: those that appear multiple times along a quasi-linear convective system, typically in parallel with a front [often resulting in serial derechos; Johns and Hirt (1987)], and those that are less strongly forced by a large-scale boundary, whose bowing radius of curvature is similar to the size of the system itself (progressive derechos). [Snively and Gallus (2014) did not differentiate between the two types.] Motivated by the wish to concentrate on the more flexible criteria of radar reflectivity signatures, rather than strict (and more arbitrary) surface wind definitions of a derecho, the present study adapts this terminology and focuses on progressive bow echoes.
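As an illustration of the last of these approaches, a gridpoint probability of threshold exceedance can be computed as the fraction of ensemble members exceeding the threshold; the minimal sketch below assumes generic NumPy arrays and a hypothetical helper name, and omits the neighborhood smoothing often applied operationally (e.g., Schwartz et al. 2015):

```python
import numpy as np

def exceedance_probability(field, threshold):
    """Fraction of ensemble members exceeding `threshold` at each grid point.

    field : array of shape (n_members, ny, nx), e.g., simulated 10-m wind
            speed (m s-1) or composite reflectivity (dBZ) from each member.
    Returns an array of shape (ny, nx) with values in [0, 1].
    """
    return (field > threshold).mean(axis=0)

# Example with synthetic data: probability that 10-m wind exceeds ~26 m/s
# (roughly the 50-kt severe-wind criterion).
wind10 = np.random.default_rng(0).uniform(0.0, 35.0, size=(12, 100, 100))
prob_severe = exceedance_probability(wind10, 26.0)
```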
We will first outline various IC/LBC datasets and model configurations in section 2. The synoptic settings of two progressive bow echoes are presented in section 3. The two cases are contrasted through the use of four ensemble configurations. The configuration with perturbed ICs/LBCs (section 4) accounts for uncertainty in the constraining atmospheric-state data. The configuration with mixed-microphysics parameterizations (section 5), and two involving SKEB schemes with and without mixed microphysics (section 6), account for model and parameterization uncertainty. The results are synthesized and concluded in sections 7 and 8, respectively, along with discussion of future work, and how the performance of all ensembles is interpreted regarding bowing-structure predictability horizons.
Note, in the present study, we refer to variance between the ensemble members as spread or uncertainty interchangeably. This is distinct from error, which is defined herein as the difference between observations and a dataset, deterministic simulation, or ensemble mean (or a “mean-like” interpretation for noncontinuous quantities like reflectivity).
2. Data and methods
The present study focuses on two progressive bow echoes: an eastward-moving system along the Nebraska–Kansas border on 26–27 May 2006, and a southward-moving system that crossed Kansas, Oklahoma, and Texas on 15–16 August 2013. The two cases are hereafter termed NEKS06 and KSOK13, respectively. The former was chosen as one of the most poorly simulated cases in Snively and Gallus (2014); the latter was chosen for contrast as a result of good performance in multiple preliminary simulations. The contrasting synoptic scenarios for both cases (cf. Figs. 2 and 3) also motivated their inclusion. These are described further in section 3.
Fig. 2. Geopotential height fields from RUC analysis at 500 hPa (black) and 925 hPa (lavender), contoured every 60 and 30 m, respectively, and valid at 1200 UTC 26 May 2006 (NEKS06, day 1). Stationary surface front denoted by red/blue line, and low MSLP center marked by red L (both adapted from WPC synoptic analyses). Green star denotes convective initiation of the MCS of interest at 2200 UTC. Green arrow denotes approximate movement of the MCS.
Fig. 3. As in Fig. 2, but valid at 1200 UTC 15 Aug 2013 (KSOK13, day 1), with convective initiation at 2200 UTC.
All numerical simulations were run on the same supercomputer system at Iowa State University to avoid introduction of rounding-error contamination. The simulations were performed with version 3.5 of the Weather Research and Forecasting (WRF) Model (Skamarock et al. 2008), using the Advanced Research dynamical core. The control parameterization configuration (Table 1) was chosen primarily for its demonstrated stability on the Iowa State supercomputers, due to the large number of ensemble runs required with this configuration. The control microphysical parameterization (Thompson) was also selected because of its good performance in similar studies (e.g., Snively and Gallus 2014; Romine et al. 2014). The constant domain size was 451 × 451 points with horizontal grid spacing
Table 1. Control parameterization schemes used in the numerical modeling configuration.
Fig. 4. WRF domains for NEKS06 and KSOK13. Labels refer to U.S. states mentioned in text.
Depending on the ensemble experiment, the ICs and LBCs were provided by one (or all) member(s) of the 11-member Global Ensemble Forecast System Reforecast dataset (GEFS/R2; Hamill et al. 2013), or NAM analyses archived at the National Operational Model Archive and Distribution System (http://www.ncdc.noaa.gov/data-access/model-data/model-datasets, accessed 1 January 2015) (12-km horizontal grid spacing; 40 vertical levels). We used the limited GEFS/R2 dataset (1° horizontal resolution; 12 vertical levels), readily available online, instead of the full dataset (0.5° horizontal grid spacing and 42 vertical levels). As the limited GEFS/R2 dataset does not contain sufficient resolution in soil layers for the WRF to run as is, GFS analyses of soil temperature and moisture were prescribed for each batch of ICs and LBCs [see Lawson (2013) for further information on this method]. While small changes in variables such as soil temperature can affect the convective initiation (Clark and Arritt 1995), the absence of perturbations in the soil variables was assumed to not preclude useful relative conclusions. The limited GEFS/R2 dataset performed well in preliminary tests and provides an interesting contrast to the WRF initialization from the higher-resolution NAM dataset. LBCs were interpolated to, and specified, every 3 h from the same dataset as the ICs. Hence, analyses provided LBCs for GFS- and NAM-based simulations, and forecasts provided LBCs for GEFS/R2-based simulations. All runs were initialized on 0000 UTC on the first day of the case study, and ran for 36 h, to allow 1) mesoscale systems to develop appropriately; 2) perturbations between ensemble members to grow large enough to observe easily, but not so large that the time scale of interest was well beyond a predictability horizon for meso-α-scale motion (Surcel et al. 2014); and 3) use of the once-daily GEFS/R2 data. All MCSs of interest had at least 18 h between model initialization and convective initiation. Preliminary tests, using NAM analyses, were started 12 h earlier and later and did not improve the simulation performance. Datasets from the Rapid Update Cycle, and its successor Rapid Refresh (both hereby referred to as RUC), were used for synoptic overviews, and to supplement observations when initially evaluating model performance. However, for the focus of the present study, we verify model performance with composite NEXRAD level III radar reflectivity data from archives at Iowa State University (https://mesonet.agron.iastate.edu/docs/nexrad_composites/, accessed 1 September 2015). Base reflectivity product data are composited through the GEMPAK program nex2img, after which suspected false echoes are removed through comparison with the Net Echo Top product. We also compared WRF 10-m wind output to National Climatic Data Center [NCDC; now known as the National Centers for Environmental Information (NCEI)] storm reports (https://www.ncdc.noaa.gov/stormevents/, accessed 1 September 2015), with the caveat that these reports can occasionally exaggerate or diminish the actual wind strength (Trapp et al. 2006).
Multiple ensemble types (experiments) were created (Fig. 5); those discussed in the following study are listed in Table 2 with their abbreviation and formulation. ICBC ensembles were created by running WRF 12 times, each with a different set of ICs and LBCs from the GEFS/R2 dataset (1 control and 10 perturbation members) and NAM analyses. Note that the GFS-driven simulations differed little from the NAM- and GEFS/R2-driven simulations and will not be discussed further in the present study. These ICBC runs used the control configuration (Table 1); hence, Thompson is the only microphysical scheme used (ICBC-Thompson). Ensembles were also created by varying the microphysical scheme (MXMP), while holding ICs/LBCs and all else constant. The nine microphysical schemes (including the control scheme, Thompson; Table 3) were chosen to mirror a similar study by Adams-Selin et al. (2013b). In their method, the hydrometeor intercept (their Fig. 2) of graupel was modified in the WRF source code (R. D. Adams-Selin 2015, personal communication), so that a parameterization could become “hail-like” or “graupel-like.” The smaller intercept used in the hail-like modification results in hydrometeors that are larger and denser, and that fall faster; the opposite is true for the graupel-like results. An identical method has been used in the present study for the WSM6, WRF Double-Moment 6-class (WDM6), and Morrison schemes to improve the sampling of model error phase space, resulting in 12 MXMP members. These variations are hereafter denoted by “Hail” and “Graupel” (e.g., WSM6 Hail). As a caveat to the MXMP method, each member is not of equal likelihood in the same sense as a well-calibrated ensemble. Hence, this ensemble method is more correctly a sensitivity study, and does not rigorously measure predictability. However, it can offer insight into the performance of each parameterization scheme. To further sample the model uncertainty, three more ensemble experiments are used involving a SKEB scheme (e.g., Berner et al. 2011). The SKEB scheme accounts for energy lost between resolved and unresolved scales by randomly injecting kinetic and potential energy back into the model fields. STCH prescribes a constant IC/LBC dataset and parameterization. STMX couples a SKEB scheme with the same list of microphysical parameterizations as in MXMP. Finally, the sensitivity of the STCH method was tested by changing the decorrelation time of temperature and streamfunction perturbations from the default 0.5 to 5.5 h: this variation is called STCH5. As the kinetic energy spectrum in WRF contains the
Fig. 5. Schematic diagram showing the methodology of creating ensembles. The green boxes on the left mark the four experiments; the control and perturbation members are colored blue for IC/LBC perturbations (ICBC), yellow for different microphysical schemes (MXMP), red for SKEB perturbations (STCH uses 0.5-h decorrelation time; STCH5 uses 5.5 h), and orange for a combination of microphysical scheme variations and SKEB perturbations (STMX). Arrows follow example paths down the “family tree” of ensembles.
Table 3. Microphysical schemes used in the MXMP experiments.



Also note that the simulated composite reflectivity in the following figures includes only rain and snow hydrometeors, to enable comparison across all MXMP members. In preliminary testing, this reflectivity was compared to that computed using all hydrometeor species available for each parameterization; the choice did not substantially affect the diagnosed MCS mode. In fact, reflectivity computed from all species tended to heavily overestimate the reflectivity associated with stratiform precipitation. As a result of using this method (and considering the warm-rain-only nature of the Kessler scheme), we do not analyze intermember magnitudes of reflectivity in the present study.
3. Synoptic overviews
a. NEKS06
The progressive bow echo of 26–27 May 2006 (NEKS06; Fig. 1a) is covered in more detail in Snively and Gallus (2014), where the authors found WRF runs forced by both NAM and GFS forecast datasets incorrectly reproduced the convective mode of the MCS. They also found little sensitivity to the microphysical schemes. Regarding this case, 26 and 27 May will be referred to as day 1 and day 2, respectively.
Figure 2 shows RUC analyses of 500- and 925-hPa geopotential heights, surface frontal positions, and their associated mean sea level pressure (MSLP) minimum, at 1200 UTC on day 1. The green star and arrow denote the location of convective initiation (2200 UTC) and the subsequent MCS movement (eastward), respectively. At 1200 UTC on day 1, the Nebraska–Kansas border sits underneath the entrance region of a small southwesterly 250-hPa jet maximum of 50 kt (where 1 kt = 0.51 m s−1), visible in the rawinsonde data (not shown) and underneath the axis of a synoptic-scale ridge evident in 500-hPa heights (Fig. 2). Winds become more southerly toward the surface; at 925 hPa, a weak height trough lies along the Nebraska–Kansas border. At the surface, a quasi-stationary warm front, as analyzed by the Weather Prediction Center (WPC), stretches through Kansas (Fig. 2). Its associated MSLP minimum in southeast Colorado lies close to the location of the convective initiation 10 h later. This synoptic setup and event evolution, with the MCS of interest moving parallel to a zonal surface front, is similar to Fig. 4 in Bentley et al. (2000), associated with derechos.
Figure 1a presents the observed composite radar reflectivity at 2300 (day 1), 0300 (day 2), and 0600 UTC (day 2). Convective initiation of interest occurs at 2200 UTC on day 1. The cell strengthens in reflectivity intensity, and the mode becomes linear by 0000 UTC on day 2. While the system continues growing upscale at the beginning of day 2, the formation of a discrete bowing line is rather sudden between 0200 and 0300 UTC. NCDC storm reports associated with this MCS include hail 2–2.5 cm (0.75–1 in.), wind gusts up to 36 m s−1 (70 kt), and a landspout tornado. Between 0400 and 0500 UTC, a second line of moist convection initiates northeast of the first MCS. By 0600 UTC, these two lines of convection form a disconnected arc; a third line of moist convection perpendicular to the arc’s tangent forms in the wake of the primary bowing segment, in a “bow and arrow” structure (Keene and Schumacher 2013). The two leading arc segments merge by 0800 UTC as the system moves into western Iowa and northwestern Missouri. The system weakens in reflectivity as it continues to move east but still produces hail that is close to 2.5 cm (1 in.) in size in Iowa.
b. KSOK13
The progressive bow echo of 15–16 August 2013 (KSOK13; Fig. 1b) brought damaging wind and hail to Kansas, Oklahoma, and Texas. The dates 15 and 16 August are referred to as days 1 and 2 for this case, respectively. In contrast to the midtropospheric west-to-southwest winds in NEKS06, the area of interest at 1200 UTC on day 1 lies under northwesterly flow at 500 hPa (Fig. 3), between an upstream ridge and a downstream trough. Winds become weaker and more northerly close to 700 hPa (not shown) and are variable and light at 925 hPa. At the surface, a weak frontal wave (analyzed by the WPC) straddles an MSLP minimum near the Nebraska–Kansas border. This zonally oriented quasi-stationary front slowly migrates south, and initiation (green star in Fig. 3) occurs to the north of this boundary at around 2200 UTC. Storm Prediction Center mesoscale discussions for this day mention a prior mesoscale convective vortex (MCV) moving southward, and this is evident in visible satellite data (not shown). The southeastern (downshear; 0–6-km vertical wind shear not shown) edge of this MCV appears to focus moist convection, similar to that seen in idealized simulations by Davis and Weisman (1994). This convection then forms a line by 2200 UTC (Fig. 1b) and begins bowing at 2330 UTC. The line produces a swath of strong wind (up to 34 m s−1 or 67 kt) and large hail (up to 4.4 cm or 1.75 in.) primarily in central Kansas and near the Oklahoma–Texas border.
4. ICBC experiments
This section details the results from ensemble simulations that use IC and LBC perturbations from the GEFS/R2 dataset. Note that the NAM-driven member for each case is included in section 5 as the control member of the NAM-MXMP experiment. All ICBC experiments use the control (Thompson) microphysics parameterization.
a. NEKS06
No ICBC-Thompson members simulate any substantial reflectivity structures in the region of interest during the first 33 h (not shown); hence, the verification (observed convection) falls well outside the envelope of the ICBC-Thompson simulation. There is strong agreement between ICBC-Thompson members regarding frontal location (not shown), but as this consensus position is incorrect in comparison with the observations, it suggests inadequate dispersion in the limited GEFS/R2 dataset.
b. KSOK13
The first 21 h of this case are simulated poorly by ICBC-Thompson, with moist convection occurring in locations different from that observed; however, the performance improves thereafter. At 2100 UTC on day 1, a line of cells is observed in the reflectivity data over north-central Kansas; in the ICBC-Thompson members, there is a large spread of solutions in cell evolution (see Fig. S1 in the supplemental material online). At 0000 UTC on day 2 (Fig. S2), eight members have line segments, seven of which have begun bowing; the three remaining members form two regions of cells. Three hours later (0300 UTC; Fig. 6), the observed bow echo has its tightest radius of curvature. In ICBC-Thompson, 10 members have a bowing line, but the locations vary from the Nebraska–Kansas border (p04, p08) and central Kansas (p02, p07) to the Kansas–Oklahoma border, the location of the observed bow echo (c00, p01, p03, p05, p09, p10). The last member simulates a straight line in the correct location (p06), but soon after develops bowing.
Fig. 6. (a) Observed and (b)–(l) simulated composite reflectivity for GEFS/R2 members of ICBC-Thompson, valid 0300 UTC 16 Aug 2013 (KSOK13, day 2).
In summary, the location of the initiation and the modes of initial convection do not necessarily predict the resulting simulated system’s location and strength. In other words, there is not a “nonreversible” bifurcation of solution clusters; despite the solution of a bow echo being simulated by all 11 members, some members follow different trajectories en route. The bowing structure is a stable solution despite the high sensitivity of the location, timing, and intermediate mode to the IC/LBC perturbations. In addition, prior (day 1) convection was not correctly simulated, but did not preclude the formation of the correct mode, timing, and locations of the bow echo in many of the ensemble members.
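The spread diagnostics in this and later sections use difference total energy (DTE). The formulation is assumed here to follow Zhang et al. (2003):

\mathrm{DTE} = \tfrac{1}{2}\left(\Delta U^{2} + \Delta V^{2} + \kappa\, \Delta T^{2}\right), \qquad \kappa = c_{p}/T_{r},

where \Delta U, \Delta V, and \Delta T are gridpoint differences in horizontal wind components and temperature between ensemble members, c_p is the specific heat of dry air at constant pressure, and T_r is a reference temperature (commonly near 270 K). DTE may then be summed vertically at each grid point, or over the whole domain, to summarize ensemble spread.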
Integrating DTE vertically shows that, at 0300 UTC on day 1, the uncertainty is larger in two general areas (Fig. 7a): 1) locations with moist convective activity in simulated radar reflectivity, where DTE growth is expected to be larger (Zhang et al. 2003), and 2) along the MSLP trough running west–east in Nebraska. Over the next 6 h, another DTE maximum is associated with the developing MCV (Fig. 7b). At 1800 UTC on day 1, there is increasing homogeneity in the domain-wide DTE field as the moist convection dissipates (Fig. 7c). Yet the local maximum of DTE associated with the MCV stands out from this background field; at 1800 UTC, moist convection initiates on the southeastern (downshear) side of the MCV both in the observations and in most of the simulations. Over the next 12 h, small variations in the location and timing of this convective initiation appear related to differences in the structure of the MCV between ensemble members (Fig. 7d). Eventually, these small intermember differences grow to become large (>5000 m2 s−2) DTE values, while almost all members generate a bow echo that moves southward through Kansas and Oklahoma, but in a spread of locations with variations in bowing structure (Fig. 6). We see that, in contrast to NEKS06, the GEFS/R2 dataset provides substantial differences in KSOK13 related to the development of convection. However, the mode solution (i.e., a bow echo) appears to be highly predictable, even if the location and specifics of the bowing are more uncertain.
Fig. 7. Evolution of DTE (m2 s−2) in the GEFS/R2 members of ICBC-Thompson, for the KSOK13 case, valid at (a) 0300, (b) 0900, and (c) 1800 UTC 15 Aug 2013 and at (d) 0000 UTC 16 Aug 2013.
5. MXMP experiments
This section details the results from numerous mixed-microphysics ensemble simulations, forced with either a NAM-analysis dataset or a given ensemble member of the GEFS/R2 dataset.
a. NEKS06
Results from c00-MXMP showed poor performance and almost no moist convection during the MCS of interest (not shown), no matter what microphysical scheme was used, in line with ICBC-Thompson results. It is likely the lack of moist convection is general regardless of the GEFS/R2 perturbation member used to drive the MXMP experiment, as a result of insufficient variation between GEFS/R2 members. The c00-MXMP experiment has considerably less spread than ICBC-Thompson (discussed in section 7); DTE calculated between a given parameterization and all others (not shown) reveals almost identical DTE growth between most of the microphysical schemes in this experiment. This reduced uncertainty between microphysical schemes is likely due to the limited amount of moist convection that does not permit spread to grow rapidly through variations in the microphysical parameterizations. However, it is still surprising that the c00-MXMP spread is not comparable to that in ICBC-Thompson: Stensrud et al. (2000) found larger variation with varied convective and PBL parameterizations in the first 12 h than variation using perturbed ICs and LBCs. This suggests that, in certain flows with a fixed set of ICs/LBCs, erroneously low ensemble spread cannot be mitigated through parameterization variability alone.
The NAM-MXMP experiment also begins poorly and does not capture the upscale growth of convection into a line of cells in southern Kansas in the first few hours of the simulation (not shown). As an improvement on c00-MXMP, most members do initiate a northwest–southeast line of convection across Kansas by 0800 UTC on day 1. The analogous feature in the observations initiates later on day 1 (1000 UTC) and is orientated NNW–SSE. This suggests that the position of the front may be manipulated by earlier warm-sector convection and subsequent upscale growth of the convective mode, and that accounting for model error is critical to correctly modulate the larger-scale baroclinic boundaries. By 0200 UTC on day 2, cells grow, move northeastward, and grow upscale in both the observed and model data, but no ensemble members recreate the bow echo and subsequent turning of the system to the east-southeast as it lengthens in scale. At this point, 26 h into the simulation, all ensemble members appear to critically diverge from the verification. The closest member at 0600 UTC on day 2, by eye, uses the WDM6 Graupel scheme (Fig. 8j), but its simulated bow echo never turns to the east-southeast, and instead continues moving northeast to merge with another linear feature at the Iowa–Nebraska–South Dakota borders. This ~45° error in MCS trajectory is likely related to a comparable error in midtropospheric wind direction (e.g., 500-hPa model winds; not shown) between RUC analyses and both GEFS/R2- and NAM-driven ensemble members, as in Snively and Gallus (2014). This error in storm motion appears to be critical by taking the developing MCS away from the frontogenesis maximum (which is correctly placed in NAM-MXMP members; not shown), and attendant convergence and positive equivalent potential temperature advection originating in the warm sector. The source of such error in large-scale flow is likely to be in ICs and LBCs, which are fixed in MXMP experiments.
Fig. 8. (a) Observed and (b)–(m) simulated composite reflectivity for NAM-MXMP ensemble members, valid 0600 UTC 27 May 2006 (NEKS06, day 2).
b. KSOK13
For KSOK13, we first fix ICs and LBCs using the subjectively best ICBC member (p09; see Fig. 6k) to test the sensitivity of a subjectively good simulation to the choice of parameterization (p09-MXMP). By 0300 UTC on day 2 (Fig. 9), all p09-MXMP members create a progressive bow echo with a tight radius of curvature as in the control (Thompson), with the exception of the Morrison (both hail and graupel) members (Figs. 9l,m).
Fig. 9. (a) Observed and (b)–(m) simulated composite reflectivity for p09-MXMP ensemble members, valid 0300 UTC 16 Aug 2013 (KSOK13, day 2).
Conversely, while p09-MXMP members resembled observed reflectivity structures, NAM-MXMP members did not. Simulated reflectivity from NAM-MXMP shows much variation between members on day 1, including swaths of convection in Kansas and Oklahoma that are not observed (Fig. S3). At 2100 UTC, after a lull in the moist convection, a completely different solution from p09-MXMP unfolds (Fig. S4): a southwest–northeast boundary triggers a line of cells across the Nebraska–Kansas border. By 0000 UTC, NAM-MXMP members display a variety of solutions, some with bowing segments along broken lines. Overall, convection is more scattered and disorganized than in p09-MXMP. By 0300 UTC (Fig. 10), all members have a similar theme: a south-southwest–north-northeast broken or unbroken line, with or without bowing sections embedded within the line (some resembling a serial bow echo). The simulated MCS locations range from the Texas and Oklahoma panhandles to central Kansas. This is much different from the tightly curved bow echo observed at the Kansas–Oklahoma border. The WDM6 Graupel member produces the largest 10-m wind magnitude and areal extent (not shown). This corresponds with the prominent bowing structure in this member’s simulated reflectivity (Fig. 10j), a structure typically associated with the rear-inflow jet and damaging surface winds (Przybylinski 1995; Markowski and Richardson 2010).
Fig. 10. (a) Observed and (b)–(m) simulated composite reflectivity for NAM-MXMP ensemble members, valid 0300 UTC 16 Aug 2013 (KSOK13, day 2).
6. STMX and STCH experiments
In this section, results from SKEB ensembles (with and without fixed microphysics) are detailed, including the STCH5 variation, using both GEFS/R2 and NAM datasets.
a. NEKS06
The addition of a SKEB scheme to an MXMP configuration changes the mode, strength, or location of the convection to varying degrees. Figure 11 presents three microphysical schemes without (NAM-MXMP) and with (NAM-STMX) a SKEB scheme, valid at 0600 UTC on day 2. The three parameterizations (Morrison Graupel, Morrison Hail, and Ferrier) are discussed here for their varying sensitivities to the SKEB scheme. Contrast the Morrison Graupel without and with SKEB (Figs. 11a,b), particularly the split of the convective line near the South Dakota–Nebraska border in the latter. Interestingly, a discrepancy of this magnitude does not occur in the Morrison Hail member, despite the single change in hail–graupel dynamics (Figs. 11c,d). Next, likewise contrast Ferrier without and with SKEB (Figs. 11e,f). In this case, addition of the SKEB scheme changes the orientation of the linear convection.
Fig. 11. The sensitivity of three microphysical schemes in NEKS06 to the hail/graupel fall speed and addition of a SKEB scheme, taken from (left) NAM-MXMP (also in Fig. 8) and (right) NAM-STMX members: (a),(b) Morrison Graupel, (c),(d) Morrison Hail, and (e),(f) Ferrier parameterizations. Figures valid at 0600 UTC 27 May 2006 (day 2). Color bar in dBZ.
The NAM-MXMP member closest to the observed bow echo reflectivity (WDM6 Graupel; Fig. 8j) changes very little with the addition of a SKEB scheme in the NAM-WDM6Graupel-STCH experiment (not shown). The control member (i.e., without SKEB) is fairly representative of NAM-WDM6Graupel-STCH members at 0500 UTC on day 2 (Fig. S5). In addition, c00-Thompson-STCH does not improve on the poor simulation seen in the GEFS/R2-driven ICBC and MXMP experiments. When contrasting GEFS/R2-driven and NAM-driven STCH experiments, we note the dependence of spread on the IC/LBC set chosen (e.g., Alhamed et al. 2002). DTE growth in NAM-WDM6Graupel-STCH follows a similar evolution to NAM-MXMP (day 1 moist convection is present), whereas DTE growth in c00-Thompson-STCH is closer to ICBC-Thompson and c00-MXMP (without day 1 moist convection).
This apparently random impact of SKEB perturbations on the precipitation structure matches speculation by Romine et al. (2014) that such variation in a 3-km SKEB ensemble simulation “may be a common pattern.” An increase in decorrelation time from 0.5 to 5.5 h (NAM-WDM6Graupel-STCH and NAM-WDM6Graupel-STCH5, respectively) reduces the overall spread, but the DTE field is structurally similar (not shown). Note the SKEB perturbation seeds are identical between the STCH and STCH5 experiments. The reduction in domain-wide spread also does not substantially decrease the magnitude of the DTE local maximum embedded within the low-DTE region. This further associates the high sensitivity of bow echo structures with small perturbations, even when large-scale uncertainty is reduced.
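The role of the decorrelation time can be illustrated with a first-order autoregressive (red noise) process, of the kind used to evolve SKEB perturbation coefficients (e.g., Berner et al. 2011); the sketch below is purely conceptual and is not the WRF implementation:

```python
import numpy as np

def ar1_series(tau_hours, dt_hours, n_steps, sigma=1.0, seed=0):
    """Evolve a scalar red-noise perturbation with decorrelation time tau_hours.

    alpha = exp(-dt/tau) is the persistence factor: a longer decorrelation
    time (e.g., 5.5 h for STCH5 vs. 0.5 h for STCH) keeps perturbations
    coherent longer and refreshes them with less new random forcing per step.
    """
    rng = np.random.default_rng(seed)
    alpha = np.exp(-dt_hours / tau_hours)
    x = np.zeros(n_steps)
    for k in range(1, n_steps):
        # Variance-preserving AR(1) update
        x[k] = alpha * x[k - 1] + sigma * np.sqrt(1.0 - alpha**2) * rng.standard_normal()
    return x

# Contrast STCH-like and STCH5-like decorrelation times over a 36-h period
short_tau = ar1_series(tau_hours=0.5, dt_hours=0.25, n_steps=144)
long_tau = ar1_series(tau_hours=5.5, dt_hours=0.25, n_steps=144)
```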
b. KSOK13





The sensitivity of the microphysical scheme to SKEB may be substantially changed by changing the hail/graupel coefficient. This is also seen in Fig. 11. The introduction of SKEB into the graupel variation of WSM6 (the two top-row panels in each six-panel frame) in Fig. 12 straightens the line somewhat (cf. Figs. 12a,b), and weakens winds considerably at both 850 hPa (Figs. 12g,h) and 10 m (Figs. 12m,n). The surface-based cold pool is not noticeably weaker, however (cf. Figs. 12s,t). When the hydrometeors are more hail-like (middle rows), there is much less variation in all fields presented here between the no-SKEB and with-SKEB simulations. As the SKEB perturbations vary with each member (and simulation initialization time), we cannot make general conclusions about a parameterization’s performance or sensitivity to small perturbations. However, this itself is an important consideration when assessing a parameterization within an ensemble that accounts for model error.
An increase in bowing radius—a weaker bow echo signal—may not be associated with weaker 850-hPa and surface wind magnitudes. In Fig. 12, the addition of a SKEB scheme to WDM5 weakens the bowing signal (Figs. 12e,f) and the rear-inflow jet (Figs. 12k,l), but the peak 10-m wind magnitude increases (Figs. 12q,r). Conversely, the less impressive bow in Figs. 12a and 12b (WSM6 Graupel), after SKEB is introduced, is associated with weaker winds at both 850 hPa (Figs. 12g,h) and the surface (Figs. 12m,n). While these figures are a small sample, this lack of a consistent relationship between bowing curvature and surface wind in the simulations was noticed in different ensemble members, and was noted by Wandishin et al. (2010) in their own simulations.
Fig. 12. The sensitivity of three microphysical schemes in KSOK13 to the hail/graupel fall speed and addition of a SKEB scheme, using GEFS/R2 p09 ICs/LBCs. The fields shown are (a)–(f) simulated composite reflectivity, (g)–(l) 850-hPa wind, (m)–(r) maximum 10-m wind over 20 min, and (s)–(x) density potential temperature perturbation, valid 0300 UTC 16 Aug 2013 (day 2). Colors and units are denoted in the legend. Members without SKEB (i.e., p09-MXMP) are on the left of each group of six panels; those with SKEB (i.e., p09-STMX) are on the right. The three microphysical schemes are WSM6 Graupel (top rows of each panel), WSM6 Hail (middle rows), and WDM5 (bottom rows). Note the simulated reflectivity MXMP members in (a),(c), and (e) are also shown in Fig. 9.
We now assess the contrasting performance of the subjectively “best” (Thompson) and “worst” (Morrison Hail) p09-MXMP members with STCH experiments. All members contain bowing in the p09-Thompson-STCH experiment at the time of maximum curvature (0300 UTC on day 2; Fig. 13), though the radius of curvature varies between the members. Notably, the control (i.e., no SKEB scheme) is the best member of this experiment. The other members have similar or less bowing in their simulated systems, suggesting that the initial subjectively best performance of Thompson was partly fortuitous, or that a SKEB scheme degrades the forecast. When we look at the same time for p09-MorrisonHail-STCH (Fig. 14), there is a wider spread in solutions, some of which are as close to verification as the p09-Thompson-STCH members. Some members generate two separate bowing segments; others are similar to the control. This suggests that the Morrison Hail parameterization’s poor performance in p09-MXMP again reflected insufficient sampling of model phase space. Note that, because SKEB members in p09-MorrisonHail-STCH outperformed the control, SKEB is unlikely to be systematically degrading forecast skill; however, the limited sample size precludes general statements. The low DTE magnitude in these STCH ensembles compared to the other experiments (discussed in section 7) is related to even more spatial agreement, but only slightly less variation in MCS structure. Maximum 10-m wind over the period of the bow echo (not shown) shows that this variation also affects the locations of surface wind maxima, perhaps associated with downbursts within the bow echo. However, within the simulations, variation in structure is not a reliable predictor of surface wind magnitude (as seen in Fig. 12). Surface wind is discussed further in section 7.
Fig. 13. (a) Observed and (b)–(l) simulated composite reflectivity for p09-Thompson-STCH ensemble members, valid 0300 UTC 16 Aug 2013 (KSOK13, day 2).
Fig. 14. (a) Observed and (b)–(l) simulated composite reflectivity for p09-MorrisonHail-STCH ensemble members, valid 0300 UTC 16 Aug 2013 (KSOK13, day 2).
7. Synthesis
a. Ensemble uncertainty
Figure 15 shows time series of DTE integrated over all three spatial dimensions for seven NEKS06 experiments: ICBC-Thompson, NAM-MXMP, NAM-STMX, c00-MXMP, c00-Thompson-STCH, and NAM-WDM6Graupel-STCH and -STCH5. In NAM-driven experiments, DTE decreases to a local minimum at around 1800 UTC, likely as the disturbed air is advected out of the domain, and as more quiescent flow enters (regression to the ensemble mean). Despite large areas of radar reflectivity across the domain (not shown), the precipitation is larger in scale and less intense, and less destructive in terms of predictability. DTE grows rapidly after this time of maximum solar insolation (~1800 UTC) on day 1. This is likely related to the onset of cellular convection at this time and the accompanying destruction of predictability (Zhang et al. 2003). There is little difference in spread between the NAM-MXMP and NAM-STMX experiments (Fig. 15), showing negligible overall impact of the SKEB scheme with default parameters on uncertainty. Uncertainty growth in the overnight (0300–1200 UTC) periods for both days appears to be strongly dependent on the occurrence of moist convection; the GEFS/R2-based experiments that struggle to initiate moist convection do not have as pronounced bimodality in DTE.
Fig. 15. Domain-integrated DTE (m2 s−2 × 10^8) for multiple experiments, labeled in the top left in descending order of uncertainty at 0600 UTC on day 2, for the NEKS06 case. Colors roughly follow those used in Fig. 5. Day and hour shown along x axis in calendar day–UTC hour format.
Likewise, Fig. 16 shows time series of DTE for six KSOK13 experiments: ICBC-Thompson, p09-MXMP, NAM-MXMP, p09-STMX, p09-Thompson-STCH, and p09-MorrisonHail-STCH. Ensemble uncertainty is comparable in magnitude between NEKS06 and KSOK13 (cf. Figs. 15 and 16). Similarly to NEKS06 (Fig. 15), KSOK13 displays a twin-peak structure of DTE, with maxima around midnight local time (around forecast hours 6 and 30). This is again likely related to moist convection during the peaks. Note, in contrast to NEKS06, that ICBC-Thompson has the largest domain-wide DTE, followed by the STMX, MXMP, and STCH experiments. The lower diversity in NEKS06 ICBC-Thompson is likely related to the lack of convection associated with GEFS/R2 ICs/LBCs. The better performance of KSOK13 matches previous findings that ensemble skill is largest when IC/LBC uncertainty dominates model uncertainty (Murphy 1988). Also note that NAM-MXMP has larger DTE than p09-MXMP, but a worse forecast (in contrast to NEKS06, where the badly performing experiment had less DTE). A lack of relationship between spread and skill was found in Berner et al. (2011), though a loose relationship was found in Buizza (1997). DTE growth results between the two p09-driven STCH experiments, using Morrison Hail and Thompson microphysics, are similar up to 2100 UTC on day 1. After this, the spread grows faster in the Morrison Hail member. This corroborates the larger spread, by eye, of modes in simulated reflectivity (cf. Figs. 13 and 14).
Fig. 16. As in Fig. 15, but for the KSOK13 case.
Figure 17 shows the vertically integrated DTE for a collection of experiments in the KSOK13 case, at 0000 UTC on day 2, shortly after the MCS of interest has initiated in most ensemble members in all experiments. The panels are in descending order of domain-wide DTE; this is generally seen as a diminishing area of low DTE (blue colors) through the panels. The DTE maximum associated with the simulated MCS is centered in a broad region of low DTE (<2000 m2 s−2), but still exceeds 6000 m2 s−2 in all members. As the spread of the MCS’s positioning and timing reduces through the pyramid of experiments, MCS modes still vary between straight and bowing lines (cf. Figs. 6, 9, 10, 13, and 14 at 0300 UTC on day 2). As DTE is integrated vertically here for each grid point, and as ensembles reach more consensus on the MCS position, DTE generation is concentrated on a smaller area. We do not see a reduction in the local maximum around the MCS, as might be expected with a consensus of position. Hence, the bowing structure is associated with uncertainty (high DTE) on small (~10 km) scales, as expected (Lorenz 1969), with the caveat that no causation is implied between DTE and variance in reflectivity. (It is not apparent whether ensemble spread is creating diversity in MCS mode, or vice versa.)
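For reference, the kind of column-integrated DTE field shown in Fig. 17 can be computed from member output as sketched below; the array names are hypothetical, and the pairwise-averaging convention is an assumption rather than a detail taken from the text:

```python
import numpy as np

CP = 1004.5  # specific heat of dry air at constant pressure (J kg-1 K-1)
TR = 270.0   # reference temperature (K)

def column_dte(u1, v1, t1, u2, v2, t2):
    """Vertically summed DTE (m2 s-2) between two members.

    Inputs are 3D arrays of shape (nz, ny, nx) on matching model levels;
    returns a 2D array of shape (ny, nx).
    """
    du, dv, dt = u1 - u2, v1 - v2, t1 - t2
    return (0.5 * (du**2 + dv**2 + (CP / TR) * dt**2)).sum(axis=0)

def ensemble_column_dte(u, v, t):
    """Average column DTE over all member pairs; inputs have shape
    (n_members, nz, ny, nx)."""
    n = u.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    total = sum(column_dte(u[i], v[i], t[i], u[j], v[j], t[j]) for i, j in pairs)
    return total / len(pairs)
```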
Fig. 17. DTE (m2 s−2), integrated vertically, at 0000 UTC 16 Aug 2013 (KSOK13, day 2), for multiple experiments: (a) GEFS/R2 members of ICBC-Thompson, (b) p09-STMX, (c) NAM-MXMP, (d) p09-MXMP, (e) p09-MorrisonHail-STCH, and (f) p09-Thompson-STCH. Panels in descending order from (a) to (f) of domain-integrated DTE at this time (cf. Fig. 16).
b. Simulated 10-m wind
The simulated wind speeds associated with the bow echo in KSOK13 were too low in general across all experiments (e.g., 10-m wind for KSOK13 shown in Fig. 12). The underestimation may come from the calculation of the WRF 10-m wind output, which uses Monin–Obukhov similarity theory. Wind speed output explicitly at the lowest model level (~40 m above ground level) was close to the peak observed speeds during the KSOK13 bow echo event: around double the speed inferred at 10 m (not shown). Strong winds exist throughout the near-surface levels (some areas perhaps associated with the rear-inflow jet; Figs. 12m–r). It is not clear if the underestimation at 10 m is due to an invalid 10-m computation, model error in the fixed PBL scheme [Mellor–Yamada–Nakanishi–Niino (MYNN) level 2.5], or simply inadequate sampling of model error phase space. Regarding the latter, while the present study varies only the microphysics parameterizations—likely the largest source of model error—a fixed PBL and surface-layer scheme will limit the spread of surface wind forecasts.
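To illustrate the scale of reduction implied by a similarity-theory diagnostic, the sketch below applies a neutral logarithmic wind profile between the lowest model level (~40 m) and 10 m. It is a simplified stand-in for, not a reproduction of, the Monin–Obukhov computation in WRF, and the roughness length is an assumed value:

```python
import numpy as np

def neutral_log_wind(u_level, z_level=40.0, z_target=10.0, z0=0.05):
    """Extrapolate wind speed from z_level to z_target with a neutral log profile.

    u_level : wind speed (m s-1) at height z_level (m); z0 is an assumed
    roughness length (m) typical of open grassland.
    """
    return u_level * np.log(z_target / z0) / np.log(z_level / z0)

# Example: 50 m/s at ~40 m maps to roughly 40 m/s at 10 m under neutral
# conditions, i.e., much less reduction than the near-halving noted above;
# the additional reduction in the model diagnostic presumably comes from
# stability corrections or other terms in the similarity computation.
u10 = neutral_log_wind(50.0)
```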
We also note the control (i.e., no-SKEB) member of p09-Thompson-STCH had the weakest winds associated with the KSOK13 bow echo. A similar result is discussed in Lawson and Gallus (2016, manuscript submitted to Mon. Wea. Rev.), where bow echoes moved faster in SKEB ensemble members versus control members, and may be related to the extra (missing) kinetic energy introduced by the SKEB scheme.
c. Sensitivity of simulations to hail/graupel variation
Figure 12, and animations of the same fields in Figs. S6–S9, indicate that a change in the hail/graupel coefficient may substantially change the bowing radius of the MCS leading edge. The top- and middle-row panels in the left column of each six-panel frame in Fig. 12 show graupel and hail variations of WSM6, respectively. Neither the reflectivity bowing structure (cf. Figs. 12a,c) nor the 850-hPa wind (Figs. 12g,i) is substantially changed by the change from graupel-like to hail-like fall speeds. However, the 10-m wind weakens slightly (Figs. 12m,o), while the cold pool is more pronounced (Figs. 12s,u). Sensitivity of linear convection to the hail/graupel coefficient is also seen in Fig. 11. Adams-Selin et al. (2013b) found that graupel-like variants of microphysical schemes (i.e., smallest mean size) generated stronger cold pools and rear-inflow jets, and hence MCSs in these simulations bowed earlier than hail-like variants. This is in contrast to Figs. 12s–v, which show stronger cold pools in the hail variations, and little change in the rear-inflow jet at 850 hPa. However, in Figs. 12–14, we find that microphysical schemes are sensitive to small SKEB perturbations regarding MCS mode and radius of bow curvature. From this, we suggest that any conclusions about a given microphysical scheme’s performance may be misleading without, for example, a SKEB ensemble to account further for model error.
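The role of the intercept parameter can be illustrated with a generic exponential (Marshall–Palmer type) size distribution for rimed ice; the relation below is a textbook scaling, not the specific formulation of any scheme in Table 3:

N(D) = N_{0}\, e^{-\lambda D}, \qquad \lambda = \left[ \pi\, \rho_{x}\, N_{0} / (\rho\, q_{x}) \right]^{1/4},

where N_0 is the intercept, D the particle diameter, q_x the hydrometeor mixing ratio, \rho_x the particle density, and \rho the air density. For a fixed mass content, lowering N_0 (the hail-like setting) lowers \lambda and shifts the distribution toward fewer, larger, and faster-falling particles, consistent with the hail-like versus graupel-like behavior described in section 2.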
8. Conclusions
We have presented two progressive bow echoes, NEKS06 and KSOK13, simulated with multiple ensemble techniques: perturbed ICs and LBCs, mixed-microphysical parameterizations, and SKEB perturbations. All ensemble simulations of NEKS06 were poor, with only a few cherry-picked ensemble members simulating an MCS with a bowing structure. On the other hand, simulations of KSOK13 were mostly successful, with a progressive bow echo simulated in almost all cases, timing and location spread notwithstanding. As uncertainty decreases between different ensemble types in KSOK13, so do intermember differences in location and timing. However, the spread of the convective mode remains high, and the locations of strongest surface winds do not substantially lose variation. This suggests relatively high sensitivity to the microscale.
Simulated composite reflectivity fields showed that the spread of the convective mode in the ensembles using multiple microphysical schemes and those using SKEB perturbations was comparable. Overall uncertainty in the mixed-microphysics ensemble, however, was 1.5–2 times the spread in the SKEB ensemble, as measured by ensemble differences in kinetic and thermal energy. Changing the SKEB scheme’s decorrelation time from 5.5 to 0.5 h, with a prescribed microphysical scheme, increased the spread more than adding a 0.5-h SKEB scheme to a mixed-microphysics ensemble. Implementing the SKEB scheme does not noticeably bias the convective mode, but appears to normalize the extreme performers in a mixed-microphysics ensemble. For example, in SKEB ensembles using the “best” microphysics from a previous ensemble, many members are worse than the no-SKEB control. The SKEB ensemble spread itself is dependent on the flow regime, as expected (Berner et al. 2009), and on the microphysical scheme selected. Moreover, the change in the hail/graupel coefficient within the parameterizations can be critical for bow development, as in Adams-Selin et al. (2013a), and SKEB is itself sensitive to this coefficient. This highlights the complex nature of model error, something that may require stochasticity in the hail/graupel fall-speed coefficient itself, instead of an appended stochastic forcing scheme.
In KSOK13, the uncertainty from ICs and LBCs dominates other sources of uncertainty, while uncertainty from mixed microphysics dominates in NEKS06. That KSOK13 performed better with larger IC/LBC spread is expected from Murphy (1988). These larger differences in ICs/LBCs perturbed the positioning of MCSs but almost all members still formed a bow echo. This suggests in KSOK13 that IC/LBC differences primarily changed the MCS’s position and timing, but spread associated with model error primarily affected the mode of convection. Furthermore, varied mixed-microphysics and SKEB perturbations did not improve poor GEFS/R2 ICs/LBCs in NEKS06 and poor NAM ICs/LBCs in KSOK13. This appears to support the idea that small-scale errors (butterflies) are not significant when considering the overall model skill (Durran and Gingrich 2014), but are crucial to spread, and hence determining the likelihood of severe weather (correlated with the convective mode).
In light of these findings, we return to address the hypotheses in the first section:
Progressive bow echoes are inherently less predictable than other MCSs. This is most likely, as large convective mode spread is associated with uncertainty on the smallest scales, generated by mixed parameterizations and SKEB. The storm scale is known to have limited variance at short lead times, and has a much shorter predictability horizon than the synoptic scale. Both factors increase the importance of accounting for model uncertainty through perturbation techniques. The poor performance of NEKS06 suggests the ensemble spreads were insufficient to sample this hypothetical small region of phase space. We speculate that the predictability horizon may exist too soon to correctly simulate cell mergers or growing supercells that precede many bow echoes (Klimowski et al. 2003). The caveat is that KSOK13 shows that MCS mode can be a stable solution within a perturbed-IC and -LBC ensemble, even if the MCS’s position is displaced from that observed.
Progressive bow echoes are embedded in less predictable synoptic-scale regimes. If progressive bow echoes are indeed highly sensitive to model uncertainty, it follows that this sensitivity is compounded in a weakly forced regime, where perturbations related to model error dominate over IC/LBC perturbations. Both cases presented herein occur without particularly strong upstream height troughs. The dominance of mixed-microphysics ensemble uncertainty over IC/LBC uncertainty in NEKS06 may have contributed to its poor performance.
There is a critical deficiency in ICs and LBCs. The success of KSOK13 but failure of NEKS06 leaves an unresolved issue here. Regardless, errors in IC/LBC datasets are unavoidable, and hence must be mitigated with well-dispersed ensembles. Our results suggest that improving the ICs and LBCs would yield better timing and positioning of MCS systems, but provide diminishing returns on MCS mode. Previous studies have raised concerns over the reduced variance on storm scales within global ensemble datasets used to drive limited-area models. While the present study does not address suitable spread directly, our results in KSOK13 do show that a 24–36-h simulation can successfully capture a progressive bow echo using a coarse, global, reforecast dataset; this driving dataset outperforms a limited-area analysis dataset.
There is a critical deficiency in the microphysics parameterizations. The contrasting performance of the mixed-microphysics ensembles in our two cases suggests that the ICs/LBCs, or the embedding regime, were more important than error from the parameterizations. Results showed that the parameterizations are substantially sensitive to small perturbations, here introduced through a SKEB scheme, and that this sensitivity is not systematic. Hence, the component of error associated with the parameterizations appears complex and strongly nonlinear. We found little relationship between bowing radius and simulated wind speed, as in Wandishin et al. (2010), despite strong winds at all low-tropospheric model levels; this may be because the calculation used within WRF to estimate the 10-m wind is unsuitable for bow echo events. Weak winds may also be related to problems in the mechanism by which winds are mixed within the PBL, but this is outside the scope of the present study.
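To illustrate the 10-m wind point above, the sketch below reduces the lowest-model-level wind to 10 m using a neutral logarithmic profile. This is a deliberately simplified stand-in: WRF’s actual diagnostic applies Monin–Obukhov similarity with stability corrections, and the roughness length and level height used here are assumed values.

```python
import numpy as np

def wind10_neutral(spd_lowest, z_lowest, z0=0.05):
    """Reduce the lowest-model-level wind speed to 10 m via a neutral log profile.

    Assumes u(z) ~ ln(z / z0) (neutral stratification). WRF's own 10-m diagnostic
    uses Monin-Obukhov similarity with stability functions, so this neutral form
    only illustrates how the reduction behaves; z0 is a roughness length in m.
    """
    return spd_lowest * np.log(10.0 / z0) / np.log(z_lowest / z0)

# Example: a 30 m/s outflow wind at a ~25-m-high lowest model level is reduced
# to roughly 25.6 m/s at 10 m, before any further PBL-scheme effects.
print(wind10_neutral(spd_lowest=30.0, z_lowest=25.0))
```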
A key question remains outstanding: Is the lack of adequate dispersion in NEKS06 a cause or a consequence of the failure of convective initiation? Model uncertainty grows to dominate IC/LBC uncertainty in weakly forced cases (Stensrud et al. 2000), where methods such as mixed-microphysics and SKEB ensembles are needed to generate small-scale variance in the absence of convective foci. But in the results herein, substantial variance was not generated if convection never initiated. The new stochastic convective backscatter (SCB; Shutts 2015) scheme targets convection as the largest source of model error, but it cannot account for errors in the location of convection.
More general conclusions are difficult to draw from only two cases; future work should address the relationship between storm-scale and synoptic-scale predictability associated with MCSs. In addition, the impact of grid resolution on bow echo ensemble simulations is the subject of a recent submission (Lawson and Gallus 2016, manuscript submitted to Mon. Wea. Rev.).
Acknowledgments
The authors thank the following: three anonymous reviewers and the editor for their contributions toward improving the manuscript; Rebecca Adams-Selin and her colleagues for supplying the Fortran modifications and advice relating to the microphysical schemes; David John Gagne II, Tim Supinie, Daryl Herzmann, and David Flory for information technology (IT) assistance; Patrick Marsh, Chris Karstens, Jeff Duda, Adam Clark, and others at the Hazardous Weather Testbed (Norman, Oklahoma) for enlightening discussions; and Xiaoqing Wu, Andy VanLoocke, Ray Arritt, and William Gutowski for constructive criticism. We are grateful for the utility and power of open-source Python code and packages (matplotlib, numpy, basemap, netcdf4-python): the modern-day giants’ shoulders. This research was supported by NSF Grant AGS-1222384.
REFERENCES
Adams-Selin, R. D., Van den Heever S. C., and Johnson R. H., 2013a: Impact of graupel parameterization schemes on idealized bow echo simulations. Mon. Wea. Rev., 141, 1241–1262, doi:10.1175/MWR-D-12-00064.1.
Adams-Selin, R. D., Van den Heever S. C., and Johnson R. H., 2013b: Sensitivity of bow-echo simulation to microphysical parameterizations. Wea. Forecasting, 28, 1188–1209, doi:10.1175/WAF-D-12-00108.1.
Alhamed, A., Lakshmivarahan S., and Stensrud D. J., 2002: Cluster analysis of multimodel ensemble data from SAMEX. Mon. Wea. Rev., 130, 226–256, doi:10.1175/1520-0493(2002)130<0226:CAOMED>2.0.CO;2.
Aligo, E. A., Gallus W. A. Jr., and Segal M., 2009: On the impact of WRF model vertical grid resolution on Midwest summer rainfall forecasts. Wea. Forecasting, 24, 575–594, doi:10.1175/2008WAF2007101.1.
American Meteorological Society, 2014: Mesoscale convective system. Glossary of meteorology. [Available online at http://glossary.ametsoc.org/wiki/Mesoscale_convective_system.]
Ancell, B. C., 2013: Nonlinear characteristics of ensemble perturbation evolution and their application to forecasting high-impact events. Wea. Forecasting, 28, 1353–1365, doi:10.1175/WAF-D-12-00090.1.
Anthes, R. A., Kuo Y.-H., Baumhefner D. P., Errico R. M., and Bettge T. W., 1985: Predictability of mesoscale atmospheric motions. Advances in Geophysics, Vol. 28B, Academic Press, 159–202, doi:10.1016/S0065-2687(08)60188-0.
Bentley, M. L., Mote T. L., and Byrd S. F., 2000: A synoptic climatology of derecho producing mesoscale convective systems in the north-central plains. Int. J. Climatol., 20, 1329–1349, doi:10.1002/1097-0088(200009)20:11<1329::AID-JOC537>3.0.CO;2-F.
Berner, J., Shutts G. J., Leutbecher M., and Palmer T. N., 2009: A spectral stochastic kinetic energy backscatter scheme and its impact on flow-dependent predictability in the ECMWF Ensemble Prediction System. J. Atmos. Sci., 66, 603–626, doi:10.1175/2008JAS2677.1.
Berner, J., Ha S.-Y., Hacker J. P., Fournier A., and Snyder C., 2011: Model uncertainty in a mesoscale ensemble prediction system: Stochastic versus multiphysics representations. Mon. Wea. Rev., 139, 1972–1995, doi:10.1175/2010MWR3595.1.
Buizza, R., 1997: Potential forecast skill of ensemble prediction and spread and skill distributions of the ECMWF Ensemble Prediction System. Mon. Wea. Rev., 125, 99–119, doi:10.1175/1520-0493(1997)125<0099:PFSOEP>2.0.CO;2.
Clark, C. A., and Arritt P. W., 1995: Numerical simulations of the effect of soil moisture and vegetation cover on the development of deep convection. J. Appl. Meteor., 34, 2029–2045, doi:10.1175/1520-0450(1995)034<2029:NSOTEO>2.0.CO;2.
Davis, C. A., and Weisman M. L., 1994: Balanced dynamics of mesoscale vortices produced in simulated convective systems. J. Atmos. Sci., 51, 2005–2030, doi:10.1175/1520-0469(1994)051<2005:BDOMVP>2.0.CO;2.
Duda, J. D., Wang X., Kong F., Xue M., and Berner J., 2016: Impact of a stochastic kinetic energy backscatter scheme on warm season convection-allowing ensemble forecasts. Mon. Wea. Rev., 144, 1887–1908, doi:10.1175/MWR-D-15-0092.1.
Durran, D. R., and Gingrich M., 2014: Atmospheric predictability: Why butterflies are not of practical importance. J. Atmos. Sci., 71, 2476–2488, doi:10.1175/JAS-D-14-0007.1.
Fritsch, J. M., Kane R. J., and Chelius C. R., 1986: The contribution of mesoscale convective weather systems to the warm-season precipitation in the United States. J. Climate Appl. Meteor., 25, 1333–1345, doi:10.1175/1520-0450(1986)025<1333:TCOMCW>2.0.CO;2.
Gallus, W., Jr., Snook N. A., and Johnson E. V., 2008: Spring and summer severe weather reports over the Midwest as a function of convective mode: A preliminary study. Wea. Forecasting, 23, 101–113, doi:10.1175/2007WAF2006120.1.
Gallus, W., Jr., Lawson J., and Squitieri B. J., 2016: Practical versus intrinsic predictability of convective system details: A comparison of PECAN expert forecasts of MCS timing, bores, and pristine convective initiation with ensemble simulations of bow echoes. Proc. Special Symp. on Seamless Weather and Climate Prediction—Expectations and Limits of Multiscale Predictability, New Orleans, LA, Amer. Meteor. Soc., 2.3. [Available online at https://ams.confex.com/ams/96Annual/webprogram/Paper284443.html.]
Gilmore, M. S., Straka J. M., and Rasmussen E. N., 2004: Precipitation uncertainty due to variations in precipitation particle parameters within a simple microphysics scheme. Mon. Wea. Rev., 132, 2610–2627, doi:10.1175/MWR2810.1.
Hagedorn, R., Buizza R., Hamill T. M., Leutbecher M., and Palmer T. N., 2012: Comparing TIGGE multimodel forecasts with reforecast-calibrated ECMWF ensemble forecasts. Quart. J. Roy. Meteor. Soc., 138, 1814–1827, doi:10.1002/qj.1895.
Hamill, T. M., Bates G. T., Whitaker J. S., Murray D. R., Fiorino M., Galarneau T. J., Zhu Y., and Lapenta W., 2013: NOAA’s second-generation global medium-range ensemble reforecast dataset. Bull. Amer. Meteor. Soc., 94, 1553–1565, doi:10.1175/BAMS-D-12-00014.1.
Hanley, K. E., Kirshbaum D. J., Roberts N. M., and Leoncini G., 2013: Sensitivities of a squall line over central Europe in a convective-scale ensemble. Mon. Wea. Rev., 141, 112–133, doi:10.1175/MWR-D-12-00013.1.
Johns, R. H., and Hirt W. D., 1987: Derechos: Widespread convectively induced windstorms. Wea. Forecasting, 2, 32–49, doi:10.1175/1520-0434(1987)002<0032:DWCIW>2.0.CO;2.
Keene, K. M., and Schumacher R. S., 2013: The bow and arrow mesoscale convective structure. Mon. Wea. Rev., 141, 1648–1672, doi:10.1175/MWR-D-12-00172.1.
Klimowski, B. A., Bunkers M. J., Hjelmfelt M. R., and Covert J. N., 2003: Severe convective windstorms over the northern high plains of the United States. Wea. Forecasting, 18, 502–519, doi:10.1175/1520-0434(2003)18<502:SCWOTN>2.0.CO;2.
Kühnlein, C., Keil C., Craig G. C., and Gebhardt C., 2014: The impact of downscaled initial condition perturbations on convective-scale ensemble forecasts of precipitation. Quart. J. Roy. Meteor. Soc., 140, 1552–1562, doi:10.1002/qj.2238.
Lawson, J., 2013: Analysis and predictability of the 1 December 2011 Wasatch downslope windstorm. M.S. thesis, Department of Atmospheric Science, University of Utah, pp. [Available online at http://content.lib.utah.edu/utils/getfile/collection/etd3/id/2618/filename/2621.pdf.]
Lean, H. W., Clark P. A., Dixon M., Roberts N. M., Fitch A., Forbes R., and Halliwell C., 2008: Characteristics of high-resolution versions of the Met Office Unified Model for forecasting convection over the United Kingdom. Mon. Wea. Rev., 136, 3408–3424, doi:10.1175/2008MWR2332.1.
Leutbecher, M., and Palmer T. N., 2008: Ensemble forecasting. J. Comput. Phys., 227, 3515–3539, doi:10.1016/j.jcp.2007.02.014.
Lilly, D. K., 1990: Numerical prediction of thunderstorms—Has its time come? Quart. J. Roy. Meteor. Soc., 116, 779–798, doi:10.1002/qj.49711649402.
Lorenz, E. N., 1963: Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130–141, doi:10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2.
Lorenz, E. N., 1969: The predictability of a flow which possesses many scales of motion. Tellus, 21, 289–307, doi:10.1111/j.2153-3490.1969.tb00444.x.
Markowski, P., and Richardson Y., 2010: Mesoscale Meteorology in Midlatitudes. Wiley-Blackwell, 407 pp.
Murphy, J. M., 1988: The impact of ensemble forecasts on predictability. Quart. J. Roy. Meteor. Soc., 114, 463–493, doi:10.1002/qj.49711448010.
Nastrom, G., and Gage K., 1985: A climatology of atmospheric wavenumber spectra of wind and temperature observed by commercial aircraft. J. Atmos. Sci., 42, 950–960, doi:10.1175/1520-0469(1985)042<0950:ACOAWS>2.0.CO;2.
Oortwijn, J., 1998: Predictability of the onset of blocking and strong zonal flow regimes. J. Atmos. Sci., 55, 973–994, doi:10.1175/1520-0469(1998)055<0973:POTOOB>2.0.CO;2.
Palmer, T. N., 2001: A nonlinear dynamical perspective on model error: A proposal for nonlocal stochastic dynamic parametrization in weather and climate prediction models. Quart. J. Roy. Meteor. Soc., 127B, 279–304, doi:10.1002/qj.49712757202.
Palmer, T. N., Döring A., and Seregin G., 2014: The real butterfly effect. Nonlinearity, 27, R123, doi:10.1088/0951-7715/27/9/R123.
Przybylinski, R. W., 1995: The bow echo: Observations, numerical simulations, and severe weather detection methods. Wea. Forecasting, 10, 203–218, doi:10.1175/1520-0434(1995)010<0203:TBEONS>2.0.CO;2.
Rodwell, M. J., and Coauthors, 2013: Characteristics of occasional poor medium-range weather forecasts for Europe. Bull. Amer. Meteor. Soc., 94, 1393–1405, doi:10.1175/BAMS-D-12-00099.1.
Romine, G. S., Schwartz C. S., Berner J., Fossell K. R., Snyder C., Anderson J. L., and Weisman M. L., 2014: Representing forecast error in a convection-permitting ensemble system. Mon. Wea. Rev., 142, 4519–4541, doi:10.1175/MWR-D-14-00100.1.
Schwartz, C. S., Romine G. S., Smith K. R., and Weisman M. L., 2014: Characterizing and optimizing precipitation forecasts from a convection-permitting ensemble initialized by a mesoscale ensemble Kalman filter. Wea. Forecasting, 29, 1295–1318, doi:10.1175/WAF-D-13-00145.1.
Schwartz, C. S., Romine G. S., Weisman M. L., Sobash R. A., Fossell K. R., Manning K. W., and Trier S. B., 2015: A real-time convection-allowing ensemble prediction system initialized by mesoscale ensemble Kalman filter analyses. Wea. Forecasting, 30, 1158–1181, doi:10.1175/WAF-D-15-0013.1.
Shutts, G., 2005: A kinetic energy backscatter algorithm for use in ensemble prediction systems. Quart. J. Roy. Meteor. Soc., 131, 3079–3102, doi:10.1256/qj.04.106.
Shutts, G., 2015: A stochastic convective backscatter scheme for use in ensemble prediction systems. Quart. J. Roy. Meteor. Soc., 141, 2602–2616, doi:10.1002/qj.2547.
Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp., doi:10.5065/D68S4MVH.
Snively, D. V., and Gallus W. A. Jr., 2014: Prediction of convective morphology in near-cloud-permitting WRF model simulations. Wea. Forecasting, 29, 130–149, doi:10.1175/WAF-D-13-00047.1.
Stensrud, D. J., Bao J. W., and Warner T. T., 2000: Using initial condition and model physics perturbations in short-range ensemble simulations of mesoscale convective systems. Mon. Wea. Rev., 128, 2077–2107, doi:10.1175/1520-0493(2000)128<2077:UICAMP>2.0.CO;2.
Surcel, M., Zawadzki I., and Yau M. K., 2014: On the filtering properties of ensemble averaging for storm-scale precipitation forecasts. Mon. Wea. Rev., 142, 1093–1105, doi:10.1175/MWR-D-13-00134.1.
Tan, Z.-M., Zhang F., Rotunno R., and Snyder C., 2004: Mesoscale predictability of moist baroclinic waves: Experiments with parameterized convection. J. Atmos. Sci., 61, 1794–1804, doi:10.1175/1520-0469(2004)061<1794:MPOMBW>2.0.CO;2.
Trapp, R. J., Wheatley D. M., Atkins N. T., Przybylinski R. W., and Wolf R., 2006: Buyer beware: Some words of caution on the use of severe wind reports in postevent assessment and research. Wea. Forecasting, 21, 408–415, doi:10.1175/WAF925.1.
Van Weverberg, K., van Lipzig N. P. M., and Delobbe L., 2011: The impact of size distribution assumptions in a bulk one-moment microphysics scheme on simulated surface precipitation and storm dynamics during a low-topped supercell case in Belgium. Mon. Wea. Rev., 139, 1131–1147, doi:10.1175/2010MWR3481.1.
Wandishin, M. S., Stensrud D. J., Mullen S. L., and Wicker L. J., 2010: On the predictability of mesoscale convective systems: Three-dimensional simulations. Mon. Wea. Rev., 138, 863–885, doi:10.1175/2009MWR2961.1.
Weisman, M. L., 1993: The genesis of severe, long-lived bow echoes. J. Atmos. Sci., 50, 645–670, doi:10.1175/1520-0469(1993)050<0645:TGOSLL>2.0.CO;2.
Zhang, F., Snyder C., and Rotunno R., 2003: Effects of moist convection on mesoscale predictability. J. Atmos. Sci., 60, 1173–1185, doi:10.1175/1520-0469(2003)060<1173:EOMCOM>2.0.CO;2.
The “randomness” is introduced via a seed integer specified in the WRF namelist; hence, an effectively unlimited number of independent ensemble members can be created by changing this value (a minimal sketch of this workflow follows these notes).
Note the seeds used in STCH are different from those specified in STMX.
The seeds used in STCH5 are identical to those in STCH to gauge the effect of increasing decorrelation time.
DTE can be formulated using this constant value, or as in Tan et al. (2004), via use of a reference temperature.
We include only GEFS/R2 members here and for KSOK13 to compare the spread between experiments. The inclusion of the NAM-driven member would substantially inflate the ensemble spread. Spread of a mixed-model ensemble approach is outside the scope of the present study.
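As a minimal sketch of the seed-based member generation described in the first note above, the snippet below writes one namelist fragment per member, each with a distinct seed integer. The record and variable names (&stoch, skebs, nens) reflect common WRF SKEB namelist options but are assumptions here; the namelist documentation for the WRF version in use should be consulted.

```python
# Minimal sketch: one namelist fragment per ensemble member, each with a
# distinct SKEB seed. The record and variable names are assumptions; verify
# them against the WRF version's namelist documentation before use.
n_members = 10
for member in range(1, n_members + 1):
    seed = 1000 + member  # any distinct integer yields an independent member
    with open(f"namelist.stoch.mem{member:02d}", "w") as f:
        f.write("&stoch\n")
        f.write("  skebs = 1,\n")        # enable the SKEB scheme (assumed name)
        f.write(f"  nens  = {seed},\n")  # seed controlling the random pattern
        f.write("/\n")
```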