Impact of Assimilating Dropsonde Observations from MPEX on Ensemble Forecasts of Severe Weather Events

Glen S. Romine, Craig S. Schwartz, Ryan D. Torn, and Morris L. Weisman

National Center for Atmospheric Research, Boulder, Colorado

Abstract

Over the central Great Plains, mid- to upper-tropospheric weather disturbances often modulate severe storm development. These disturbances frequently pass over the Intermountain West region of the United States during the early morning hours preceding severe weather events. This region has fewer in situ observations of the atmospheric state compared with most other areas of the United States, contributing toward greater uncertainty in forecast initial conditions. Assimilation of supplemental observations is hypothesized to reduce initial condition uncertainty and improve forecasts of high-impact weather.

During the spring of 2013, the Mesoscale Predictability Experiment (MPEX) leveraged ensemble-based targeting methods to key in on regions where enhanced observations might reduce mesoscale forecast uncertainty. Observations were obtained with dropsondes released from the NSF/NCAR Gulfstream-V aircraft during the early morning hours preceding 15 severe weather events over areas upstream from anticipated convection. Retrospective data-denial experiments are conducted to evaluate the value of dropsonde observations in improving convection-permitting ensemble forecasts. Results show considerable variation in forecast performance from assimilating dropsonde observations, with a modest but statistically significant improvement, akin to prior targeted observation studies that focused on synoptic-scale prediction. The change in forecast skill with dropsonde information was not sensitive to the skill of the control forecast. Events with large positive impact sampled both the disturbance and the adjacent flow, consistent with results from past synoptic-scale targeting studies, suggesting that sampling both is necessary regardless of the horizontal scale of the feature of interest.

The National Center for Atmospheric Research is sponsored by the National Science Foundation.

Corresponding author address: Glen Romine, NCAR/MMM, P.O. Box 3000, Boulder, CO 80307-3000. E-mail: romine@ucar.edu


1. Introduction

Continuous advances in numerical weather prediction models, data assimilation methodologies, and observations of the atmospheric state have all contributed toward steady improvement in predictive skill. Still, deficiencies in forecasts, particularly for high-impact convective weather events, have often been attributed to inadequacies in the initial conditions of numerical weather predictions (e.g., Weisman et al. 2008; Clark et al. 2010). One approach to improve initial conditions is to supplement the conventional observational network with targeted observations that in turn should reduce forecast uncertainty for specific forecast outcomes.

For synoptic-scale forecast applications, several efforts have previously sought to reduce initial condition uncertainty via targeted observations over data-sparse regions, such as Langland (2005), Buizza et al. (2007), Majumdar et al. (2011, hereafter M11), and Hamill et al. (2013). International targeted observing campaigns included the Fronts and Atlantic Storm-Track Experiment (FASTEX; Joly et al. 1999), the North Pacific Experiment (NORPEX; Langland et al. 1999), the Winter Storm Reconnaissance Program (WSR; Szunyogh et al. 2000), and the programs under the auspices of The Observing System Research and Predictability Experiment (THORPEX;1 M11; Parsons et al. 2016), among others. These prior targeted observation campaigns largely concentrated on synoptic-scale systems and sought to improve global model weather prediction with 1–3 days of lead time. They used a variety of techniques to identify source regions of initial condition uncertainty that had the potential to lead to rapid forecast error growth and were suitable for targeted sampling.

After observation collection, impact studies (e.g., Baker and Daley 2000; Langland and Baker 2004) assess changes in initial condition uncertainty and forecast error owing to the assimilation of particular observation sets (e.g., Ancell and Hakim 2007; Zhu and Gelaro 2008; Liu and Kalnay 2008; Torn 2014; Sommer and Weissmann 2014). Despite the clear dynamical link between initial condition uncertainty and forecast uncertainty, past targeting studies have routinely struggled to demonstrate large reductions in forecast error. For example, typical error reductions are on the order of 10% (Langland 2005) or less (e.g., Hamill et al. 2013), with widely mixed results on a case-by-case basis (e.g., Buizza et al. 2007). Reasons given for limited impact include incomplete sampling of the target region in space and time or with sufficiently fine spacing between observations, lack of coupling between the approach used to identify targets and the data assimilation system, observation and model errors, forecast-sensitive errors in the target region, and small forecast errors present before targeted observations are assimilated (e.g., M11 and references therein). Despite these past challenges in observation targeting, M11 expected greater opportunity to exist in targeting mesoscale systems with regional prediction systems aimed at high-impact weather events within a 1-day lead time.

Given a well-performing data assimilation system, observations need only provide minor adjustments to the background state. Moreover, adjoint-based observation impact studies often find that only a small majority of assimilated observations in operational forecast systems lead to a reduction in forecast errors (e.g., Aberson 2003; Gelaro et al. 2010; Lorenc and Marriott 2014). These studies find the greatest positive impact is realized from collections of observations with small individual increments instead of a handful of key observations with particularly large analysis increments. In contrast, the typical approach in observation targeting is to identify a handful of additional observations in locations within or adjacent to error growth source regions that are anticipated to significantly reduce the initial condition and subsequent forecast uncertainty. M11 further noted that the impact of a group of observations on a particular forecast depends on the following factors, which are carefully considered in this study:

  • Errors that are present in the background forecast without targeted observations.

  • Errors in the observations.

  • The data assimilation and forecast methods employed.

Over the central Great Plains, mid- to upper-tropospheric weather disturbances often modulate severe storm development. These disturbances frequently pass over the Intermountain West region of the United States during the early morning hours preceding severe weather events. This region has fewer observations of the atmospheric state compared with other areas of the United States, which has the potential to contribute toward greater uncertainty in forecast initial conditions. To assess whether assimilation of supplemental observations could reduce initial condition uncertainty and improve forecasts of high-impact weather, during the late spring of 2013 (mid-May through mid-June), a targeted observation field campaign was conducted in the Intermountain West and adjacent high plains region of the United States. The Mesoscale Predictability Experiment (MPEX; Weisman et al. 2015) leveraged ensemble sensitivity analysis (ESA; Ancell and Hakim 2007; Torn and Hakim 2008) among other approaches to identify mid- and upper-tropospheric disturbances appropriate for targeting owing to their potential to reduce errors and forecast uncertainty of convective weather events in the central plains. For each of the 15 intensive observing periods (IOPs), 20–30 dropsondes were released from the NSF/NCAR Gulfstream-V aircraft (GV) over a subsynoptic sampling area upstream of anticipated severe weather events during the early morning hours (0900–1500 UTC). To assess the impact of dropsonde observations, data-denial experiments are used, where particular observations are withheld from the assimilation system for otherwise identical assimilation experiments followed by forecasts from the pair of initial states.

The remainder of the paper is organized as follows. Methodology is provided in section 2, with results of ensemble analysis and forecast experiments with and without dropsondes offered in section 3, followed by a discussion of the results (section 4), and the conclusions (section 5).

2. Methodology

Ensemble forecasts are initialized from an hourly cycled mesoscale (15-km horizontal grid spacing) ensemble analysis either with or without the assimilation of dropsonde observations. Descriptions of the forecast model and analysis system follow, along with a brief description of the dropsonde observations and real-time observation targeting strategy.

a. WRF Model description

Convection-permitting (3-km horizontal grid spacing) 30-member ensemble forecasts are initialized for each IOP by downscaling 15-km analyses from a 50-member continuously cycled mesoscale ensemble data assimilation analysis system (described in section 2b). The 15-km mesoscale analysis domain covers much of North America and adjacent areas, while the 3-km nest covers the MPEX sampling area over the Intermountain West and areas downstream where forecasts of convective weather events had the potential to be improved by dropsonde observations within 24 h of launch (Fig. 1). This study focuses on the performance of convection-permitting ensemble forecasts on the nest domain only over the MPEX region for forecasts from initial conditions with or without dropsonde information. The Weather Research and Forecasting (WRF) Model (Skamarock et al. 2008) is used to integrate the ensemble analysis states using positive definite moisture advection (Skamarock and Weisman 2009) with all members using the same model configuration (Table 1). Both the 15- and 3-km analyses are integrated together such that the 15-km grid provides lateral boundary conditions for the 3-km domain. Unique lateral boundary conditions for each member on the 15-km grid are drawn from 0.5° GFS forecasts combined with random draws from global background error covariances (fixed covariance perturbation method; Torn et al. 2006) provided by the WRF variational data assimilation system (WRF-VAR; Barker et al. 2012). A 75-s (18.75 s) time step is applied for the outer (inner) domain to integrate the ensemble states for 33 h.
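The fixed covariance perturbation method mentioned above can be illustrated with a minimal sketch: each member's lateral boundary state is the deterministic GFS forecast plus a random draw from a static ("fixed") background-error covariance. This toy example is not WRF-VAR itself; the state dimension, covariance, and values are invented purely for illustration.

```python
# Toy sketch of fixed covariance boundary perturbations: each ensemble member
# receives the GFS boundary state plus a sample drawn from a static covariance B.
import numpy as np

rng = np.random.default_rng(0)

def perturbed_boundaries(gfs_state: np.ndarray, B: np.ndarray, n_members: int) -> np.ndarray:
    """Return (n_members, n) boundary states: GFS state + draws ~ N(0, B)."""
    draws = rng.multivariate_normal(np.zeros(len(gfs_state)), B, size=n_members)
    return gfs_state[None, :] + draws

n = 4                               # tiny toy state dimension
gfs = np.linspace(280.0, 283.0, n)  # e.g., boundary temperatures (K)
B = 0.25 * np.eye(n)                # toy static background-error covariance
members = perturbed_boundaries(gfs, B, n_members=50)
```

In practice the draws come from climatological covariances supplied by WRF-VAR rather than a diagonal toy matrix, but the structure, a shared deterministic boundary state plus member-unique random perturbations, is the same.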

Fig. 1.
Fig. 1.

Analysis (15 km) and forecast domains (15 and 3 km) used for retrospective analyses and forecasts.

Citation: Monthly Weather Review 144, 10; 10.1175/MWR-D-15-0407.1

Table 1.

Physical parameterizations used in WRF Model forecasts. Cumulus parameterization was not used on the convection-allowing 3-km grid.

Table 1.

b. Analysis system description

The analysis system uses a 50-member ensemble adjustment Kalman filter (EAKF; Anderson 2001, 2003) within the Data Assimilation Research Testbed (DART; Anderson et al. 2009) toolkit, with specific options listed in Table 2. In summary, this analysis system for the retrospective runs is analogous to the configuration used in the real-time system that operated during the MPEX field campaign (Schwartz et al. 2015), except the real-time analysis system used 6-hourly cycling while retrospective analyses have hourly updates (reasons given in section 2d). In both the real-time and retrospective runs, the EAKF was continuously cycled only on the 15-km grid. For each IOP the background for the first ensemble analysis is initialized from a 6-h forecast from an 1800 UTC real-time analysis (Fig. 2). Then, hourly cycling occurs from 0000 to 1500 UTC, overlapping the period of dropsonde observing (within 0900–1515 UTC; most often a 4-h period toward the beginning of this window). The same routine observation types assimilated in the real-time system were also used in the retrospective hourly cycling experiments (CNTL), except observation windows around each analysis time were reduced (see Table 2). This analysis system does not include the assimilation of radiance observations. For the experiments that also assimilated dropsonde observations (DROP), those observations were assumed valid at the nearest hourly analysis time. Dropsonde observations of temperature, specific humidity, and horizontal winds were assimilated. From these retrospective analyses, two sets of ensemble forecasts are made with (DROP) and without (CNTL) assimilated dropsonde observations for each IOP. Only the hourly cycled DROP experiment assimilated dropsonde observations.
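The core of the EAKF observation update (Anderson 2001) can be sketched for a scalar observation: the ensemble is shifted to the posterior mean and its deviations are contracted by a deterministic factor, avoiding the sampling noise of perturbed-observation filters. The minimal example below assumes a directly observed one-dimensional state and omits the localization and adaptive inflation that the full DART system applies.

```python
# Minimal scalar EAKF update (after Anderson 2001): shift the ensemble mean to
# the posterior mean and contract deviations so the sample variance matches the
# posterior variance. Localization and inflation are deliberately omitted.
import numpy as np

def eakf_update(prior: np.ndarray, ob: float, ob_var: float) -> np.ndarray:
    """Deterministically shift and contract an ensemble given one observation."""
    pm, pv = prior.mean(), prior.var(ddof=1)
    post_var = 1.0 / (1.0 / pv + 1.0 / ob_var)      # posterior (analysis) variance
    post_mean = post_var * (pm / pv + ob / ob_var)  # precision-weighted mean
    shrink = np.sqrt(post_var / pv)                 # deviation contraction factor
    return post_mean + shrink * (prior - pm)

prior = np.array([269.0, 270.0, 271.0, 272.0])      # toy ensemble priors (K)
posterior = eakf_update(prior, ob=273.0, ob_var=1.0)
```

Because the contraction is deterministic, member ordering is preserved and the posterior spread is reduced exactly to the analysis variance, a property that distinguishes the EAKF from stochastic ensemble filters.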

Table 2.

DART options and settings.

Fig. 2.

Timeline of the retrospective analysis and cycling experiments with respect to the IOP timeline. Real-time analyses were done every 6 h, while for retrospective analyses hourly cycling began from 0000 UTC for each IOP. Dropsondes were released between 0900 and 1500 UTC during each IOP, with ensemble forecasts initialized and subsequent verification starting from 1500 UTC.


c. Dropsonde observations

The GV aircraft is equipped with the Airborne Vertical Atmospheric Profiling System (AVAPS; Hock and Franklin 1999), which releases parachuted dropsondes that produce quasi-vertical profiles of horizontal wind, temperature, humidity, and pressure. Details on the sensor characteristics are described in sidebar 1 of Weisman et al. (2015). Flight missions were typically 6–8 h in duration at a cruising altitude near 180 hPa. Flights departed from Broomfield, Colorado, and the limited flight duration motivated biasing waypoint selection toward the center of the MPEX domain rather than farther west, where routine observations were less common (e.g., Fig. 3b). Flight durations permitted the release of about 20–30 dropsondes per IOP, spaced approximately 100 km apart over the target regions, with sampling occurring during local morning hours on mission days.

Fig. 3.

(a) Approved dropsonde release locations (red stars), routine rawinsonde release sites (blue dots), and terrain height (fill) in the “MPEX” region; (b) number of routine observations within 100 km of a grid point in the horizontal at any height during the analysis period of 0900–1500 UTC 15 May 2013.


Dropsonde observations were prepared for assimilation by an initial visual inspection to remove gross failures (e.g., parachute failed to deploy, sensor malfunctions), followed by automated quality control and vertical thinning to mandatory and significant levels using NCAR's Atmospheric Sounding Processing Environment (ASPEN, version 3.1) software package (Martin 2007). This processing is consistent with that used in real-time hurricane environment sampling by numerous international agencies, including NOAA's Hurricane Research Division. Observation errors were assumed to be the same as climatological values for routine rawinsondes.

To verify that dropsonde observation quality was sufficient for the assimilation experiments, the full set of dropsonde observations was evaluated against the CNTL experiment. Here, evaluation means mapping the model background state to the observed variables and locations so that observations and model backgrounds can be compared directly in observation space. The summary fit to the control analysis for all dropsonde observations of temperature, moisture, and horizontal wind is shown in Fig. 4. Dropsonde observations were found to have quality and fit similar to routine rawinsonde observations, with RMS errors comparable in magnitude to climatological errors for the latter, although a larger bias than for routine radiosondes was identified in the 400–700-hPa zonal wind (Fig. 4c), and greater RMS errors for temperature and meridional wind measurements were found near the surface2 (Figs. 4a,d). Humidity observations from dropsondes are on average drier than the analysis background state (Fig. 4b). At least a portion of this dry bias is likely associated with a recently reported error in humidity observations from the AVAPS system: Holger Vömel (2016, personal communication) finds a dry bias in moisture observations that is maximized under extremely cold and dry conditions. For MPEX dropsonde moisture observations, this impact is confined to the mid- to upper troposphere. Corrected moisture observations were compared against those used in this study for the entire MPEX dropsonde dataset; the mean difference between corrected and original moisture observations aloft is smaller than the specified observation errors, while for moisture observations in the lower troposphere the impact of the correction was negligible. Collectively, this humidity observation error is not expected to have a significant impact on the results shown here.
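The observation-space diagnostics summarized in Fig. 4 amount to binning innovations (observation minus background, in observation space) by pressure and reporting the mean (bias), RMS, and count per bin. A hedged sketch of this bookkeeping, with invented values and 50-hPa bins as in the figure:

```python
# Bin innovations (observation minus background) every 50 hPa and report
# mean (bias), RMS, and observation count per bin, as in Fig. 4-style plots.
import numpy as np

def binned_innovation_stats(pressure_hpa, innovation, bin_width=50.0):
    """Return dict mapping bin-center pressure -> (mean, rms, count)."""
    p = np.asarray(pressure_hpa, dtype=float)
    d = np.asarray(innovation, dtype=float)
    stats = {}
    edges = np.arange(np.floor(p.min() / bin_width) * bin_width,
                      p.max() + bin_width, bin_width)
    for lo in edges[:-1]:
        sel = (p >= lo) & (p < lo + bin_width)
        if sel.any():
            stats[lo + bin_width / 2] = (d[sel].mean(),
                                         np.sqrt((d[sel] ** 2).mean()),
                                         int(sel.sum()))
    return stats

# Toy data: pressures (hPa) and temperature innovations (K)
stats = binned_innovation_stats([510, 540, 560, 710], [1.0, -1.0, 2.0, 0.5])
```

A near-zero bin mean with moderate RMS (as in the 525-hPa toy bin) indicates unbiased but noisy observations, which is the pattern the paper reports for most dropsonde variables.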

Fig. 4.

Mean fit of dropsonde observations to the CNTL analyses for all MPEX cases for (a) temperature (K), (b) specific humidity (g kg−1), (c) zonal wind component (m s−1), and (d) meridional wind component (m s−1). Profiles shown are mean innovation (green dashed), RMS innovation (red), square root of total error variance (blue dashed), total observation count (black circle), and observation count passing internal quality control (black plus). Observations are binned every 50 hPa in the vertical.


Over the MPEX sampling region (Fig. 3a), the addition of dropsonde observations led to a regional increase in observation counts similar to that of rawinsondes over the entire MPEX region, but concentrated over the sampled areas. In the horizontal, there are about 5 times fewer routine observations available west of the Continental Divide than at points farther east (Fig. 3b), particularly away from upper-air sounding locations and aircraft reports from busier airports (e.g., Salt Lake City, Utah; Phoenix, Arizona), confirming that observation availability is relatively lower in the Intermountain West. Considering the vertical distribution of observations, the number of routine observations available at 200–250 hPa (typical commercial aircraft flight level) and at the surface was much larger than at other levels (Fig. 5), so dropsonde observations (in black in Fig. 5) contributed a relatively larger boost to the total observation counts in the midtroposphere.
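The horizontal density metric of Fig. 3b, the number of observations within 100 km of a point at any height, reduces to a great-circle distance count. A small sketch, with purely illustrative coordinates:

```python
# Count observations within a 100-km great-circle radius of a point,
# the density metric plotted in Fig. 3b. Coordinates below are made up.
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance (km) between two lat/lon points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

def obs_within(lat, lon, obs_latlon, radius_km=100.0):
    """Number of (lat, lon) observations within radius_km of the given point."""
    return sum(1 for (olat, olon) in obs_latlon
               if great_circle_km(lat, lon, olat, olon) <= radius_km)

obs = [(39.0, -105.0), (39.5, -105.5), (36.0, -112.0)]  # two nearby, one distant
count = obs_within(39.2, -105.2, obs)
```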

Fig. 5.

Observation counts in the vertical for the most common routine observation platforms, as well as dropsondes, over the MPEX region (Fig. 3a) for all IOPs between 0000 and 1500 UTC.


d. Real-time observation targeting

The same analysis and forecast system used in real time during MPEX to generate targeting guidance is used for the retrospective observation impact study, with the intention of maximizing the potential impact of supplemental dropsonde observations (e.g., M11). Decision-making guidance originated from a real-time continuously cycling ensemble analysis system (see section 2b), which provided initial conditions for 30-member, 3-km horizontal grid spacing ensemble forecasts launched twice daily. ESA is an objective tool for understanding the dynamics of forecast errors (Torn and Hakim 2008) that can be used to estimate relationships between an uncertain forecast metric (e.g., 3-h accumulated precipitation) and an earlier model state (e.g., initial-condition 700-hPa temperature). Thus, ESA was performed on the real-time 30-member ensemble forecasts preceding events of interest to estimate where observations during the morning IOP would reduce the uncertainty in convective forecasts later in the day. Typically, the ESA used 36-h forecasts to compute the sensitivity rather than shorter forecasts closer to the anticipated event of interest; this was necessary because the flight plan for the GV had to be submitted well in advance of the target event (before 0600 UTC daily). The drawback to this approach is that the estimated area of uncertainty could be reduced by subsequent analysis cycles, or the area of greatest sensitivity could change considerably relative to earlier forecast estimates. Further, because of the 6-h sampling window for the dropsondes, the assumption that all dropsondes were collected simultaneously at 1200 UTC was not justified. The real-time analysis system was limited to 6-hourly updates owing to computational and practical constraints, but in the retrospective experiments we were able to conduct hourly cycling to reduce time-dependent background errors. This change in cycling frequency also provided the retrospective assimilation system with additional routine observations previously excluded from the real-time 6-hourly cycling (Table 2). The adaptive inflation within the analysis system adjusts to the change in the observation network over several subsequent analysis cycles, increasing the inflation magnitude in the vicinity of the new observations. The impact of the change in cycling frequency, with hourly assimilation for each IOP beginning at 0000 UTC, is reduced by providing at least nine analysis cycles before the first dropsondes are (potentially) assimilated. Moreover, the hourly cycled analyses have different analysis error characteristics than the real-time analyses used to initialize the forecasts that provided observation targeting guidance; thus, targeting guidance may poorly represent the information needs of the retrospective analyses.
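For reference, the ESA computation itself is a simple ensemble regression (Torn and Hakim 2008): the sensitivity of a scalar forecast metric J (e.g., area-averaged 3-h accumulated precipitation) to an earlier state variable x is estimated as cov(J, x)/var(x) across the ensemble members. A minimal sketch with synthetic data:

```python
# Ensemble sensitivity of a forecast metric J to an earlier state variable x,
# estimated as the ensemble linear-regression slope cov(J, x) / var(x).
import numpy as np

def ensemble_sensitivity(J: np.ndarray, x: np.ndarray) -> float:
    """Regression slope dJ/dx across ensemble members."""
    Jp = J - J.mean()
    xp = x - x.mean()
    return float((Jp @ xp) / (xp @ xp))  # cov(J,x)/var(x); normalization cancels

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=30)        # e.g., member 700-hPa temperature anomalies
J = 2.0 * x + rng.normal(0.0, 0.1, 30)   # synthetic metric tied to x with slope ~2
s = ensemble_sensitivity(J, x)
```

In the real application, x is evaluated at every grid point to map where initial-condition perturbations most strongly project onto the forecast metric, and those maps guided the dropsonde waypoint selection.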

Targeting decisions on where the dropsondes would be released most often followed the guidance provided by ESA. Notably, Garcies and Homar (2014) investigated different approaches to generating sensitivity guidance and assessed their relative value for observation targeting. They found little difference among the techniques and further found that locations of maximum sensitivity were not necessarily more beneficial to sample than locations elsewhere within the sensitive region, in contrast with the findings of several prior studies of midlatitude systems (e.g., Majumdar et al. 2001, 2002a,b; Buizza et al. 2007; Sellwood et al. 2008). We therefore cautiously expect that the real-time ESA products provided sensitivity guidance comparable to what other methods would have generated, and, consistent with the recommendations of M11, our approach uses the same analysis and forecast system for generating targeting guidance as for the retrospective analyses and assessment of observation impact.

3. Results

Following is a review of the dropsonde observation impact on ensemble analyses and forecasts from the CNTL and DROP experiments. A positive impact case is first presented, followed by a summary of impact over all IOPs. Then neutral and negative impact cases are reviewed to provide a demonstration of the range of impact from assimilation of dropsondes during MPEX.

a. A prototypical IOP: 19 May 2013

An example of dropsonde impact on ensemble forecasts is first demonstrated for the 19 May 2013 event (IOP 4). ESA guidance indicated precipitation coverage and intensity over eastern Kansas would be sensitive to an upper-level disturbance passing over portions of northeast New Mexico and southeast Colorado during the morning hours of 19 May 2013 (not shown). Real-time guidance also suggested isolated convection would develop over parts of central Oklahoma, but ESA indicated the characteristics of these storms were not particularly sensitive to the uncertainty in the upstream midtropospheric state. By contrast, 24-h lead time ESA guidance did identify sensitivity between midtropospheric disturbances and central Oklahoma convection (Torn and Romine 2015), highlighting the challenges in tracking the evolving information needs of the forecast system. The GV was dispatched the morning of 19 May to deploy dropsondes primarily along the cyclonic side of an upper-tropospheric jet from Utah to Kansas (Fig. 6d). The observed precipitation development and evolution was similar to the forecast guidance across parts of Kansas and Oklahoma, with a rapidly organizing linear convective system across central and eastern Kansas and several discrete tornadic supercells from central to northern Oklahoma during the late afternoon and evening hours (e.g., Weisman et al. 2015).

Fig. 6.

The 300-hPa isotachs (fill) and wind vectors (kt, 1 kt = 0.5144 m s−1), valid at 1500 UTC for (a)–(o) each IOP, overlain with locations of dropsonde observations for each IOP (black stars).


Retrospective assimilation of dropsonde observations for this event led to small changes in the atmospheric state relative to the control analysis, with generally modest differences between the dropsonde observations and the background state (Fig. 7). Most of the differences in the ensemble mean state appear noisy, but some temperature analysis differences on the meso-β scale are evident, such as cooling at 700 and 850 hPa (Figs. 7c,d) in western Kansas and additional cooling across eastern New Mexico and portions of the Texas Panhandle at 500 hPa (Fig. 7b), while warming is noted across portions of Colorado, Kansas, and the panhandles of Oklahoma and Texas at 300 hPa (Fig. 7a). Temperature increments for assimilated dropsondes are generally of the same sign and magnitude as the differences in the ensemble mean temperature analyses between DROP and CNTL after assimilating all dropsondes. Wind increments at dropsonde locations are generally quite small, with the most consistent trends at 700 and 850 hPa: greater anticyclonic shear and a more northerly wind increment, respectively. Moreover, a considerable amount of seemingly random “noise” is introduced by the assimilation of dropsondes, of equal or greater magnitude than the mesoscale patterns still evident after a short integration. This is expected owing to both instrument and representativeness errors contained in the dropsonde observations.

Fig. 7.

Difference in ensemble mean temperature between DROP and CNTL analysis (fill) at 1500 UTC, overlain with station plots for each assimilated dropsonde, the dot color represents the difference in temperature for the ensemble mean prior and posterior state, while wind vectors similarly indicate the vector wind difference between the prior and posterior state in observation space for pressure levels of (a) 300, (b) 500, (c) 700, and (d) 850 hPa.


Integration of the CNTL and DROP ensemble states led to amplifying differences in the ensemble mean forecast evolution of the midtropospheric state, particularly after the development of convection during the afternoon hours (Figs. 8, 9). The initial noisy aspects in the ensemble mean difference are less evident after a short integration. The largest magnitude differences were spatially confined and associated with modest changes in the position and deepening of a lead shortwave disturbance embedded within the synoptic trough, which led to an eastward shift in the location of the convective line (Figs. 10c,d) and associated diabatic heating (Figs. 8b–d) and moistening (Figs. 9b–d) in the ensemble forecasts. The impact on the convective evolution was generally on meso-β scales, with the DROP forecast producing less convection in areas where none was observed (circled areas in Figs. 10a,b). Also, the forecast location of the linear convective system over eastern Kansas later in the convective evolution was closer to the observed convective system (Figs. 10c,d). Notably, the more discrete convective evolution over Oklahoma was quite similar between DROP and CNTL (Fig. 10), consistent with the guidance from the real-time ESA.

Fig. 8.

Difference in ensemble mean temperature (fill) and wind (vectors) at 500 hPa for (a) 1500 UTC analysis time, (b) 1800 UTC valid forecast, (c) 2100 UTC valid forecast, and (d) 0000 UTC valid forecast, for the IOP4 experiment.


Fig. 9.

Difference in ensemble mean specific humidity (fill) and wind (vectors) at 700 hPa for (a) 1500 UTC analysis time, (b) 1800 UTC valid forecast, (c) 2100 UTC valid forecast, and (d) 0000 UTC valid forecast, for the IOP4 experiment.


Fig. 10.

Filled contours of simulated reflectivity for the first 10 members of ensemble forecasts (colors) for (a),(c) CNTL and (b),(d) DROP and observed reflectivity (black) at or exceeding 45 dBZ for IOP4 valid (a),(b) 2100 UTC 19 May and (c),(d) 0000 UTC 20 May 2013.


In summary, the dropsonde information for IOP 4 led to modest changes in the analysis state in a region sensitive to forecast error and reduced coverage of forecast precipitation in some areas where precipitation was not observed, while also improving the timing and location of the convective event over Kansas.

b. Bulk dropsonde impact on forecast performance

During MPEX, a broad spectrum of disturbance sizes and amplitudes was sampled (Fig. 6), along with a range of convective event types (Table 3). All IOP sampling areas were identified through a combination of conventional forecast guidance and ESA, which led to sampling of a broad range of mesoscale through synoptic-scale disturbances. Given the modest number of IOPs (15) and the range of disturbances, rather than stratifying the events into categories, events are verified in bulk with the aim of discriminating between the DROP and CNTL experiments to assess whether assimilation of dropsondes can improve convective weather forecasts on average. Since there are few observation sources with sufficient spatial resolution to verify meso-β-scale forecast differences, verification focuses on comparing precipitation forecasts against stage-IV precipitation analyses (ST4; Lin and Mitchell 2005). Additional verification against routine observations is discussed later.

Table 3.

List of IOPs with dropsonde observations including date, location of forecast event of interest, and the convective organization of the observed events.


For precipitation, the verification region is customized for each mission, centered over the geographic region with the largest noted differences in 9-h accumulated precipitation forecasts between the DROP and CNTL experiments (Fig. 11). While not expected to affect interpretation of the results, the number of model grid points included in each 900 km² box is constant, while the number of ST4 verification points, which are available on a polar stereographic projection, varies by latitude for each verification box. Qualitatively, results are similar when a larger geographic area is considered that encompasses all of the individual event verification regions (stippled region in Fig. 11). Notably, the verification regions are much larger, and cover a longer time window, than the forecast metric regions used in real-time ESA (3-h accumulated precipitation over a variable-sized region, typically 120 km²).

Fig. 11.

Precipitation verification regions used for each IOP (colored boxes) and fixed verification region for real-time forecast comparison (stippled).


Precipitation verification metrics presented include fractions skill scores (FSSs; Roberts and Lean 2008), areas under the relative operating characteristic (ROC) curve (e.g., Mason and Graham 2002), and attributes statistics (Wilks 2006) to assess the performance of each forecast experiment across a range of accumulation thresholds and neighborhood sizes. Confidence intervals to discriminate between DROP and CNTL forecasts are generated using pairwise difference bootstrapping (Hamill 1999) with 10 000 resamples. Tests were also performed using block bootstrapping (Wilks 1997) with a 4-h block size to account for possible temporal correlations, but this did not change the results. Forecasts from the convection-permitting ensemble are verified using a neighborhood ensemble verification technique (Schwartz et al. 2010, 2014). High-resolution model grids permit forecasts of individual convective elements that, when compared directly against observed precipitation elements, lead to large errors even for qualitatively similar forecasts and observations. The neighborhood approach to forecast verification (e.g., Roberts and Lean 2008) instead allows for limited spatial uncertainty in the skill metrics, which is more consistent with how a forecaster subjectively perceives a forecast's value.
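As an illustrative sketch of the neighborhood FSS computation, the following single-field version uses a square smoothing window in place of the circular radius neighborhood; the study itself verifies neighborhood ensemble probabilities (Schwartz et al. 2010, 2014), and the grid size, threshold, and window here are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, window):
    """Fractions skill score (Roberts and Lean 2008) on a 2D grid.

    forecast, observed: 2D precipitation fields of the same shape.
    threshold: exceedance threshold (e.g., 1.0 mm/h).
    window: square neighborhood width in grid points (a simplification
    of the circular-radius neighborhood used in the study).
    """
    # Binary exceedance fields
    fb = (forecast >= threshold).astype(float)
    ob = (observed >= threshold).astype(float)
    # Neighborhood fractions: mean exceedance within each window
    fp = uniform_filter(fb, size=window, mode="constant")
    op = uniform_filter(ob, size=window, mode="constant")
    mse = np.mean((fp - op) ** 2)
    mse_ref = np.mean(fp**2) + np.mean(op**2)  # worst-case (no overlap) MSE
    return 1.0 - mse / mse_ref if mse_ref > 0 else 1.0
```

FSS ranges from 0 (no overlap of neighborhood fractions) to 1 (perfect agreement), and for an ensemble the forecast fraction field would be replaced by the neighborhood ensemble probability.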

First, we consider forecast metrics accumulated over the first 15 forecast hours. On average, the DROP experiments produce less areal coverage of precipitation for a given rain-rate threshold than CNTL for all rain-rate thresholds (Fig. 12a). At light thresholds, the DROP experiment leads to greater underprediction of precipitation area than CNTL, but at higher rain-rate thresholds assimilation of dropsondes reduces the bias relative to CNTL (Fig. 12a). Forecast skill is significantly improved at the 95% level for the DROP experiment across a range of precipitation thresholds (e.g., up to 5 mm h−1) as shown by higher areas under the ROC curve (Fig. 12b) and FSSs (Fig. 13). The limited number of IOPs combined with the rarity of intense rain rates precludes discrimination of forecasts at more intense rain rates. FSS differences are larger at higher rain-rate thresholds (e.g., convective precipitation) where DROP forecasts are noted to have reduced bias (Fig. 12a). Attributes diagrams indicate DROP forecasts also improve reliability relative to the CNTL forecasts, particularly for the higher forecast probabilities (Fig. 14).
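The pairwise-difference bootstrap behind these significance statements (Hamill 1999) can be sketched as below; the per-case pairing and resample count follow the text, while the example inputs are hypothetical.

```python
import numpy as np

def paired_bootstrap_ci(skill_drop, skill_cntl, n_resamples=10_000, alpha=0.05, seed=0):
    """Bootstrap CI for the mean paired skill difference (DROP - CNTL).

    skill_drop, skill_cntl: per-case scores paired by case (e.g., by IOP
    or verification hour). Resamples the paired differences with
    replacement; if the interval excludes zero, the difference is
    significant at the (1 - alpha) level.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(skill_drop) - np.asarray(skill_cntl)
    # Each row of idx is one resample of the case indices
    idx = rng.integers(0, diffs.size, size=(n_resamples, diffs.size))
    means = diffs[idx].mean(axis=1)  # resampled mean differences
    low, high = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return low, high
```

Resampling the paired differences, rather than the two samples independently, preserves the case-to-case correlation between the experiments.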

Fig. 12.

(a) Range of precipitation bias for 30-member ensemble forecasts as a function of accumulated precipitation threshold for CNTL (red) and DROP (blue) along with ensemble mean bias for CNTL (white, solid) and DROP (white, dashed), with overlap in area (purple), and (b) ROC area as a function of accumulated precipitation threshold for a 50-km neighborhood. For (b), CIs for each precipitation threshold are also shown such that where CIs do not include zero, the differences between DROP and CNTL are significant at the 95% level (right axis).


Fig. 13.

FSS aggregated over the first 15 forecast hours for all MPEX IOPs as a function of neighborhood radius for hourly accumulation thresholds of (a) 0.25, (b) 1.0, and (c) 10.0 mm h−1 for ensemble forecasts from CNTL (red) and DROP (blue). The horizontal line is the zero line for the bootstrap CIs. Where overlain CIs do not include zero, differences between DROP and CNTL are statistically significant at the 95% level (right axis).


Fig. 14.

Attributes diagrams encompassing all IOPs for the CNTL (red) and DROP (blue) experiments for hourly accumulation thresholds of (a) 0.25, (b) 1.0, and (c) 10 mm h−1 for forecast hours 1–15 for a 50-km radius neighborhood. Overlain are percentile counts for each probability threshold (stars); percentile counts for CNTL (red) lie behind those for DROP (blue) where not visible. Also, where + (−) symbols are shown for a given forecast probability, the DROP (CNTL) forecast was significantly closer to perfect reliability (black diagonal) at the 95% confidence level. Climatological rates are shown as black horizontal lines for each threshold, with the skill line (black dashed diagonal) midway between the climatological rate and perfect reliability.


The source of the improved reliability of DROP relative to CNTL forecasts is not entirely clear. Since assimilating extra observations reduces the analysis error variance (Whitaker and Hamill 2002), initial ensemble spread is reduced, and forecast ensemble spread should also be reduced provided error growth rates are similar between ensemble forecasts with and without dropsondes. Yet the adaptive inflation algorithm within DART (Anderson 2009) responded to the change in the observation network, with the appearance of dropsondes, by increasing the inflation magnitude in the vicinity of the sampling region. As the sampling period ended (typically by 1300 UTC), the inflation algorithm again responded by gradually damping the inflation magnitudes, but this response was still in progress during the several cycles following the end of dropsonde observations in the analysis window. As such, the boosted inflation in the dropsonde-assimilating experiments results in slightly larger analysis variance than in the control experiment in areas where dropsonde observations are assimilated. The larger analysis variance (initial condition spread) likely contributes to the improved forecast reliability of the dropsonde-affected forecasts. Simultaneously, information from the assimilated dropsonde observations likely reduces error in the analysis, which can also lead to more reliable ensemble forecasts. Discrimination between the DROP and CNTL forecast skill extends well beyond the initial 15 h, with statistical significance at many times and thresholds even beyond 24 h into the forecast (Fig. 15).
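The competing effects described above, observations shrinking ensemble variance while inflation restores some spread, can be illustrated with a toy scalar ensemble square root update in the spirit of Whitaker and Hamill (2002). This is not DART's actual algorithm (DART's adaptive inflation varies in space and time); the fixed inflation factor here only illustrates the qualitative effect.

```python
import numpy as np

def scalar_ensrf_update(ens, obs, obs_var, inflation=1.0):
    """Toy scalar ensemble square root filter update.

    Optionally inflates the prior perturbations first (a crude stand-in
    for adaptive inflation), then updates the mean with the Kalman gain
    and rescales perturbations so the analysis variance equals
    (1 - K) times the prior variance.
    """
    mean_b = ens.mean()
    pert_b = (ens - mean_b) * np.sqrt(inflation)  # inflated prior perturbations
    var_b = pert_b.var(ddof=1)                    # prior (background) variance
    gain = var_b / (var_b + obs_var)              # Kalman gain K
    mean_a = mean_b + gain * (obs - mean_b)       # analysis mean
    pert_a = pert_b * np.sqrt(1.0 - gain)         # exact analysis-variance scaling
    return mean_a + pert_a
```

Assimilating the observation always shrinks the ensemble variance, while a larger inflation factor leaves more analysis spread, mirroring the behavior attributed above to the adaptive inflation response.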

Fig. 15.

Average FSS for all IOPs as a function of forecast lead time at accumulation thresholds of (a) 0.25, (b) 1.0, and (c) 10.0 mm h−1 for ensemble forecasts from experiments CNTL (red) and DROP (blue), along with real-time forecast skill from 1200 UTC initialized forecasts, for a neighborhood size of 50 km. Where plus signs are shown, differences at that forecast hour are significant at the 95% confidence level, with marker colors indicating (i) yellow: DROP more skillful than real time; (ii) green: CNTL more skillful than real time; and (iii) blue: DROP more skillful than CNTL.


Forecasts are also evaluated against rawinsonde, METAR, and mesonet observations. Little difference is noted between the CNTL and DROP forecasts. Verification against rawinsonde observations reveals a small reduction in moisture bias, while a small increase in RMS error and bias for the zonal wind component is found near the tropopause (9-h forecasts; not shown). No notable differences in verification statistics are revealed by verification against METAR observations. A slight improvement in DROP forecasts is found in verification against mesonet observations for surface temperature only (RMS error reduced by ~0.1 K from about 3 to 9 h into the forecasts; not shown) over the MPEX verification region (Fig. 11). These small differences, particularly when evaluating against synoptic-scale observing networks (e.g., rawinsondes), are not surprising in light of the limited areal coverage of forecast impacts on precipitation relative to the typical spacing of routine observations. Consistently, the more spatially dense mesonet observations provide the best sampling of the impact. The forecast differences in the mid- to upper troposphere appear more robust, but the small sample of events and sparse rawinsonde locations prohibit quantitative discrimination of the forecasts. A qualitative evaluation of storm surrogate forecasts against storm reports (e.g., Sobash et al. 2011, 2016) also found qualitatively similar forecast guidance between CNTL and DROP.

While on average DROP shows a significant gain in skill relative to CNTL, the impact for several individual IOPs is neutral (within ±0.02 accumulated FSS difference over forecast hours 1–15) or even negative (Fig. 16). The performance impact is consistent across the full range of precipitation intensity (e.g., Fig. 12b). The combination of positive- and negative-impact cases warrants comparison of characteristics across IOPs with varying impact. Following is a summary of IOP1, a near-neutral impact event.

Fig. 16.

Mean FSS for each CNTL forecast for forecast hours 1–15 vs the accumulated difference between the DROP and CNTL forecast skill (DROP − CNTL) for a rain-rate threshold of 1 mm h−1 and a neighborhood radius of 50 km. The regression line is shown (solid blue line). IOPs examined in detail in this study are highlighted in red by date. Number labels indicate the IOP, with a full list of IOP dates provided in Table 3.


c. A neutral impact IOP: 15 May 2013

IOP1 featured an upper low centered over the Texas Panhandle, with robust southwesterly flow across central Texas extending southwestward into northern Mexico (Fig. 6a). In situ observations outside the CONUS are comparatively sparse, and as such these areas may feature elevated analysis uncertainty regarding potentially embedded weak disturbances. Early morning precipitation associated with a lead disturbance contributed additional uncertainty in the forecast evolution, particularly whether sufficient destabilization would occur in the wake of the morning precipitation to foster development of severe convection across portions of north Texas later that day. ESA guidance indicated that forecasts of convective development in northwest and north-central Texas were sensitive to the upstream midtropospheric state, which for this case included portions of Mexico well outside the approved sampling area (Fig. 3a), as well as to the characteristics of the upper low over west Texas (not shown). During the afternoon and evening of 15 May 2013, numerous discrete supercells developed in north Texas, including one that produced a deadly EF4 tornado near Granbury, Texas.

In contrast to the seemingly random pattern noted in the neighboring analysis increments at observation sites in IOP4, IOP1 features increments with greater pattern consistency (Fig. 17), which implies a larger-scale wind and temperature bias between the dropsonde observations and the analysis prior. For example, at 500 hPa the wind increments are nearly all northeasterly (Fig. 17b), while 500-hPa temperature increments are generally negative for dropsondes along the southeastern half of the sample area (Fig. 17b). At 700 hPa, temperature increments are nearly neutral, yet the mean difference at the final analysis time is mostly a warming adjustment over the same area (Fig. 17c), implying inconsistent structural information in the analysis covariances as the bias was corrected by the dropsonde observations. For IOP1, the net impact of dropsonde assimilation on the analysis reveals both a larger magnitude and a greater spatial extent of temperature differences relative to IOP4. However, whereas in IOP4 the differences between CNTL and DROP amplified with time (Fig. 8), in IOP1 these initial differences in mean temperature at 500 hPa between DROP and CNTL diminish with longer integration (Fig. 18). The 700-hPa moisture evolution follows a pattern of relative moistening in the region where convection develops in north-central Texas (Fig. 19). Consistent with the dwindling midtropospheric temperature differences by the time convection initiates, the dropsonde impact on precipitation forecasts is quite small. Both forecasts initiate convection too early and favor upscale convective organization toward the end of the forecast period, while verifying observations indicated more discrete convection (not shown). DROP forecasts show slightly improved skill early but a modest degradation in the timing and location of forecast convection beyond 9 h, resulting in a qualitatively similar, and overall skillful, precipitation forecast (Fig. 16).

Fig. 17.

As in Fig. 7, but for IOP1.


Fig. 18.

As in Fig. 8, but for IOP1.


Fig. 19.

As in Fig. 9, but for IOP1.


d. A negative impact IOP: 23 May 2013

IOP6 uniquely features both the poorest control forecast and the largest-magnitude degradation with the addition of dropsonde information (Fig. 16). This convective event featured modest upper-level flow, with a weak disturbance rotating over a ridge axis across New Mexico that was expected to arrive in west Texas later in the day and aid convective development (Fig. 6f). This midtropospheric disturbance was identified in ESA as well as in other forecast guidance preceding the event, which motivated sampling upstream over much of New Mexico, despite most of the larger sensitivity area lying beyond the reach of GV sampling. At the surface, outflow from early morning convection over portions of Oklahoma reinforced the southwestward advance of a cold front into northwest Texas. As the event unfolded, initially discrete supercells developed near the intersection of the retreating front and dryline across west Texas, then quickly grew upscale into a forward-propagating convective line that raced southward (not shown).

Analysis increments are mostly modest and indicate good overall agreement between the dropsondes and the analysis background state, aside from a trend toward stronger midtropospheric flow and warming in the region around eastern New Mexico, particularly at 700 hPa (Fig. 20). At 500 hPa, the DROP forecasts feature an enhanced thermal ridge and anticyclonic flow ahead of the disturbance, as revealed in the ensemble mean forecast differences (Figs. 21b,c), with growing disparity after the initial development of convection in the forecasts (Fig. 21). The moisture evolution at 700 hPa (Fig. 22) indicates drying beneath the area of enhanced ridging at 500 hPa, followed by an east–west dipole in moisture associated with a westward shift in convective development in the DROP experiment (Figs. 22c,d). For both CNTL and DROP, precipitation forecasts are more skillful during initial convective development but poorly capture the observed system's evolution toward an east–west-oriented line and its aggressive southward motion. Instead, both sets of ensemble forecasts strongly agree on an elevated north–south-oriented line of convection farther north, with a delayed trend toward a more east–west orientation (not shown). The source of these forecast errors may not lie in upstream midtropospheric forcing and moisture: convection was ongoing across central Oklahoma at the time the forecasts were initialized, which would contribute to errors in the strength and position of the surface boundaries important for forcing the location of convective development. The impact of dropsondes for this IOP shifts the convective development westward, consistent with the enhanced lead ridging noted at midlevels, yet farther from the evolution of the observed event.

Fig. 20.

As in Fig. 7, but for IOP6.


Fig. 21.

As in Fig. 8, but for IOP6.


Fig. 22.

As in Fig. 9, but for IOP6.


4. Discussion

The results from assimilation of dropsonde observations on the mesoscale during MPEX largely mirror the opportunities and challenges noted in prior observation targeting campaigns on synoptic scales. While the overall impact is positive, mixed performance is noted in individual forecast outcomes with the inclusion of dropsonde information (e.g., Fig. 16). The following discussion revisits the M11 observation impact factors outlined in section 1 and considers the MPEX results in relation to prior observation targeting studies.

a. Errors present in the background forecast

The background for this study is provided by a continuously cycled ensemble analysis. Thus, the forecast ensemble variance represents a component of the uncertainty in the background analysis that we hope to reduce through assimilation of dropsondes. Forecast variance identified in longer-range forecasts owing to initial condition uncertainty will often have been reduced through the assimilation of routine observations between the time targeting decisions are made and the time of interest for initial condition uncertainty reduction. It appears possible to generate observation targeting guidance that estimates the potential impact of future routine observations at fixed observing sites, but doing so would be quite difficult in practice (M11).

The use of hourly cycling versus the real-time 6-hourly cycling markedly reduced the errors in the background forecasts as evidenced by the significant improvement in the forecasts for both DROP and CNTL relative to the real-time forecasts (Fig. 15). The CNTL and DROP forecasts may have benefited from what could be considered very good background analyses, which may have made it particularly difficult to markedly improve forecasts through additional observation targeting (e.g., Bergot 1999; Szunyogh et al. 2000). Similarly, Buizza et al. (2007) found that targeted observations were quite effective in data-void regions, such as over oceanic regions, yet in data-rich regions the impact of targeted observations was particularly small. How few observations define a data-poor region with an optimal data assimilation system is not clear, particularly for mesoscale applications.

To the extent that errors remain in the analysis, it is entirely plausible that the spatial scales of those errors are larger than those sampled by MPEX dropsondes. For example, Durran and Gingrich (2014) argue that even tiny errors at large spatial scales, with magnitudes as small as typical instrument error, are at least as important contributors to forecast error growth as much larger errors at small scales, and these tiny synoptic-scale errors can quickly saturate the smallest resolved scales in the forecast model within just a few hours of model integration (Durran and Weyn 2016). Our current observing network is inadequate to discriminate synoptic errors at the magnitude of uncertainty noted by Durran and Gingrich (2014). It is also quite possible that dropsonde observations sampled mesoscale features, such as topographically induced waves, that were not well captured in the model background and thus appeared as noise in the analysis.

One possible interpretation of the results in Fig. 16 is that the dominant source of forecast error is not initial condition uncertainty on the mesoscale in the upstream midtroposphere where dropsondes were released. This interpretation draws on the observation that the magnitude of the change in forecast skill between DROP and CNTL is small relative to the range in forecast skill between IOPs, and that gains in forecast skill from assimilating dropsondes appear independent of the skill of the CNTL forecasts. Examination of errors common to both CNTL and DROP suggests that errors on larger spatial scales may have been present for several convective events, including IOPs 1 and 6, with ensemble precipitation forecasts strongly clustered around an incorrect forecast evolution. As shown by Torn and Romine (2015), initial condition uncertainty in large-scale features well removed from the convective event of interest can still strongly influence the forecast evolution of deep convection by affecting the location of surface boundaries and moisture availability. They further showed that, for at least some MPEX IOPs, uncertainty in the precipitation forecast also owed to upstream boundary layer thermodynamics that were not sampled by dropsondes. Such local errors in the convective environment, unsampled by MPEX dropsondes, may have dominated the forecast errors. Prior targeting studies have found that the sampling area needs to cover the full spatial extent of the background errors, such that partial sampling of the error source regions was inadequate to improve forecasts (e.g., Bergot 1999; Cardinali and Buizza 2003; Buizza et al. 2007; Hamill et al. 2013). Moreover, Cardinali and Buizza (2003) recognized that the sensitive area may be only partially sampled, or perhaps sampled with all target observations in areas not sensitive to the forecast problem at all.

b. Errors in the observations

The dropsonde observations must be of sufficiently high quality to have a positive impact on the analysis state and subsequent forecasts. As shown in Fig. 4, the fit of dropsonde observations to the hourly analysis background indicates dropsondes have RMS errors similar to climatological errors in routine rawinsonde observations. A dry bias in moisture for dropsonde observations relative to the CNTL analysis background (Fig. 4b) may have contributed to the downward shift in bias in DROP precipitation forecasts, particularly at higher rain-rate thresholds (Fig. 12a). The background fit to rawinsondes also shows a moist bias; thus the dropsondes likely constrain the CNTL analysis moist bias. The analysis RMS error fit for wind observations is similar to the mean fit for rawinsonde wind observations (Figs. 4c,d). However, a high wind speed bias in dropsonde observations was noted in the zonal component of the wind in the upper troposphere, peaking above 1 m s−1 (Fig. 4c). An increased bias in the zonal wind component of about 0.4 m s−1 is found in verification of 9-h forecasts of zonal wind against routine rawinsonde observations (not shown). Since there was at least some indication that an undesirable bias may have been introduced through dropsonde zonal wind observations, an effort was made to investigate this aspect by comparing dropsonde observations against rawinsonde observations nearby in space (within a 50-km radius) and time (within 2 h) for all IOPs. Given the modest number of paired dropsonde and rawinsonde observations, the sample was insufficient to determine whether the bias was significant at the 90% level.
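The dropsonde-to-rawinsonde matching described above might be sketched as follows; the tuple format and the flat-Earth distance approximation (adequate at 50-km ranges) are illustrative assumptions, not the study's actual procedure.

```python
import numpy as np

def pair_obs(drops, raobs, max_km=50.0, max_hr=2.0):
    """Pair each dropsonde with the nearest rawinsonde within 50 km and 2 h.

    drops, raobs: lists of (lat, lon, time_hr, value) tuples (hypothetical
    format). Returns the dropsonde-minus-rawinsonde differences for all
    successful pairings, suitable for a bias significance test.
    """
    pairs = []
    for dlat, dlon, dt, dv in drops:
        best = None
        for rlat, rlon, rt, rv in raobs:
            if abs(dt - rt) > max_hr:
                continue
            # Flat-Earth distance: ~111 km per degree latitude
            dy = (dlat - rlat) * 111.0
            dx = (dlon - rlon) * 111.0 * np.cos(np.radians(dlat))
            dist = np.hypot(dx, dy)
            if dist <= max_km and (best is None or dist < best[0]):
                best = (dist, dv - rv)
        if best is not None:
            pairs.append(best[1])
    return pairs
```

The resulting paired differences could then feed a bootstrap test of whether the mean difference (the suspected zonal wind bias) differs from zero.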

c. Data assimilation and forecast methods

This study uses an ensemble data assimilation system and forecast system for both targeting and observation impact assessment, consistent with the recommendations of M11. The ensemble data assimilation system fully utilizes flow-dependent background error covariances and thus makes better use of sparse targeted observations in data-void regions than variational approaches (e.g., Kelly et al. 2007). Since mesoscale errors were expected to be superimposed on synoptic patterns, guidance most often identified areas where mesoscale disturbances might reside within the larger-scale pattern. Further, MPEX was a constrained targeting experiment in the sense that the aircraft was not deployed to sample only sensitive regions: the authorized area for MPEX sampling limited where observations could be collected, and broader areas were sampled in recognition that areas of initial condition uncertainty may differ from sensitivity guidance drawn from longer-range predictions.

Regarding the forecast methods, 30-member ensemble forecasts were initialized from each set of ensemble analyses, with and without dropsondes assimilated. The data assimilation system's response to the added dropsonde observations, which typically concluded a few hours before the final analysis time used for initial conditions, led to a small increase in the initial condition uncertainty (ensemble spread) owing to the response of the adaptive inflation algorithm within DART. While this enhanced forecast uncertainty on average, it proved beneficial within this forecast system since the CNTL forecasts tended to be overconfident (Fig. 14). Initial differences from the cumulative impact of assimilating dropsondes smoothed out to larger scales with increasing lead time as the initially correlated noise dispersed within each ensemble set, and the differences in ensemble mean forecasts trended toward mesoscale structures during the lead times considered (with a focus on the 9-h lead time). The use of ensemble forecasts leverages the value of verifying forecast probabilities instead of deterministic outcomes, and for the MPEX IOPs this approach yields benefits from the dropsonde observations in increased skill, particularly at higher rain rates, as well as increased forecast reliability at higher forecast probabilities.

d. MPEX dropsonde impact in the context of prior observation impact studies

Summarizing the observation impact factors from an MPEX perspective, the errors in the background forecast and subsequent analysis were not fully known, particularly at larger spatial scales. A broad interpretation of the flow regimes and sampling areas for dropsondes (Fig. 6) indicates a wide range of horizontal scales in the disturbances sampled by MPEX. The most improved forecast among all IOPs was IOP2 (Fig. 6b), a compact disturbance with dropsondes canvassing the entire disturbance as well as the surrounding steering flow. Meanwhile, the least successful forecast and most degraded dropsonde impact featured an upper ridge, centered in data-sparse Mexico, with only partial sampling of the steering flow and an embedded weak disturbance (Figs. 3a, 6f).

Prior observation targeting studies, primarily focused on synoptic-scale systems and tropical cyclones over data-sparse regions, have typically found positive yet modest impacts on forecasts (e.g., Burpee et al. 1996; Bergot 1999; Szunyogh et al. 2000; Aberson 2003; Buizza et al. 2007). The challenges faced during MPEX operations were not unique, but common to prior observation targeting campaigns, such as reliance on forecasts to estimate when and where to sample (e.g., M11). Many of these prior campaigns directly employed specific tools (e.g., singular vectors; Buizza et al. 2007) to identify source regions of forecast variance. The appropriateness of these approaches for convective weather forecast outcomes is less clear (e.g., Gilmour et al. 2001), as they were developed for synoptic systems where temperature and wind structures are largely balanced (e.g., geostrophic balance) and a linear error approximation is valid over a longer time window. MPEX was also highly constrained in spatial extent, distance between drop sites, sampling window, and the lead time in mission planning needed for aircraft fueling and approval of flight plans. Regarding verification, available observations were limited in spatial density, making it difficult to discriminate meso-β-scale to storm-scale forecast impacts. Precipitation was found to best discriminate the forecast differences, with limited or no signal evident in surface synoptic and mesonet observations. Potentially larger differences were evident in the midtroposphere, but the spacing of verifying observations led to insufficient sampling to discriminate the forecast differences.

5. Summary

This study examines the impact of assimilating dropsonde observations on analyses and subsequent forecasts of convective weather events sampled during the MPEX field campaign. MPEX included 15 IOPs in which 20–30 dropsondes were released during the morning hours in the vicinity of disturbances upstream of anticipated convective weather events. IOPs were selected when forecasts indicated mesoscale disturbances would pass through the MPEX region and were associated with future forecast variance in precipitation. This forecast variance in precipitation, namely the timing, location, and intensity of potentially severe convective weather events, was associated through ensemble sensitivity analysis with uncertainty in the structure of the upstream disturbance. Thus, the identified disturbance had potential for initial condition errors in future forecasts and warranted observation targeting to reduce future forecast uncertainty. Routine observations were most available at the surface, east of the continental divide, and near the flight level of commercial aircraft (Figs. 3b, 4). Thus, routine observations were less common in the MPEX region, particularly in the midtroposphere. Dropsondes helped fill this data void within the midtroposphere and over the western portions of the MPEX domain. The control experiment was used to assess the overall quality of the dropsonde observations through their fit to the background, which revealed RMS errors similar in magnitude to climatological errors of rawinsonde observations, except for temperature near the surface, a larger than expected bias for midtropospheric zonal wind, and a dry bias in humidity (Fig. 4).

A comparison of 30-member ensemble forecasts initialized from ensemble analyses with (DROP) and without dropsonde observations (CNTL) revealed a modest but statistically significant improvement in forecast skill of precipitation with dropsonde information (Figs. 12b, 13, 15), along with reduced forecast bias for precipitation (Fig. 12a) and improved reliability (Fig. 14). This study also revealed variability in forecast impacts from case to case (Fig. 16), a result consistent with prior targeted observation impact studies (e.g., Buizza et al. 2007). The improvement in the DROP forecasts was uncorrelated with the skill of the CNTL forecast (Fig. 16), which implies the dominant source of error in these convective weather forecasts may not have been due to mesoscale errors in upstream midtropospheric disturbances.

These results were considered in the context of the recommendations for observation impact outlined by M11. It remains possible that the dominant source of forecast error was initial condition uncertainty, but that the dropsonde observations did not sample enough of the sensitive areas to consistently improve the analysis (e.g., Torn and Romine 2015). Alternatively, the analysis cycles before, during, and after dropsonde assimilation may have removed much of the initial condition uncertainty through the assimilation of routine observations; that is, the operational observing network may have been sufficiently data rich to largely eliminate the initial condition uncertainty, and the associated forecast error sensitivity, for many of the MPEX IOPs. Along this line, applying ESA to longer-range forecasts may not have provided sufficiently helpful targeting guidance, since this approach assumed no additional information would be gained by the analysis system before the sampling window, and since the linear approximation underlying the sensitivity estimate may not hold for longer-range forecasts, especially for mesoscale disturbances (e.g., Gilmour et al. 2001). Contrasting the forecasts with the most versus least effective dropsonde impacts, one factor that emerged was the spatial scale of the disturbance tied to the precipitation event. Several prior studies have noted that partially sampling sensitive regions often degraded analyses and forecasts (e.g., Bergot 1999; Cardinali and Buizza 2003; Buizza et al. 2007; Hamill et al. 2013). Extrapolating the result of Gelaro et al. (2010), an alternative approach to reducing forecast errors would be to assimilate many more observations: if each observation is, on average, only slightly more likely to reduce forecast error than to increase it, a larger number of supplemental observations would raise the likelihood of a net error reduction.
Finally, the data assimilation system used in the study may have been suboptimal, limiting the impact of dropsonde observations.

Looking forward, the spatial scale of the sampling target may need to adapt to the scale of the disturbance of interest. Another important aspect to investigate is the relative role of initial condition versus model error, since for particular weather events one may dominate the other. As suggested by M11, future efforts should also account for the routine observations that will be assimilated between the initialization of the targeting guidance and the supplemental sampling window, so as to better estimate the information the analysis system will gain and thereby more accurately identify where initial condition uncertainty will remain. It would also be valuable to test whether purely random pseudo-observations at the same locations would have an equal or greater impact on forecasts.

Acknowledgments

Chris Snyder and Jeff Anderson, both of NCAR, provided helpful discussions of early results. NSF funded the MPEX field campaign, in particular the use of the GV aircraft and the expended dropsondes that provided the observations needed for this study. We appreciate the efforts of the entire MPEX team in collecting the observations used here. We greatly appreciate the suggestions of Sharan Majumdar and two anonymous reviewers, which precipitated several improvements. NSF Grant 1239787 supported coauthor Ryan Torn. Kate Young of NCAR EOL performed quality control processing of the dropsonde observations. Dave Ahijevych provided results comparing dropsonde observations with nearby rawinsondes. We acknowledge high-performance computing support from Yellowstone (ark:/85065/d7wd3xhc), provided by NCAR’s Computational and Information Systems Laboratory and sponsored by the National Science Foundation.

REFERENCES

  • Aberson, S. D., 2003: Targeted observations to improve operational tropical cyclone track forecast guidance. Mon. Wea. Rev., 131, 1613–1628, doi:10.1175//2550.1.
  • Ancell, B., and G. J. Hakim, 2007: Comparing adjoint- and ensemble-sensitivity analysis with applications to observation targeting. Mon. Wea. Rev., 135, 4117–4134, doi:10.1175/2007MWR1904.1.
  • Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129, 2884–2903, doi:10.1175/1520-0493(2001)129<2884:AEAKFF>2.0.CO;2.
  • Anderson, J. L., 2003: A local least squares framework for ensemble filtering. Mon. Wea. Rev., 131, 634–642, doi:10.1175/1520-0493(2003)131<0634:ALLSFF>2.0.CO;2.
  • Anderson, J. L., 2009: Spatially and temporally varying adaptive covariance inflation for ensemble filters. Tellus, 61A, 72–83, doi:10.1111/j.1600-0870.2008.00361.x.
  • Anderson, J. L., T. Hoar, K. Raeder, H. Liu, N. Collins, R. Torn, and A. Arellano, 2009: The Data Assimilation Research Testbed: A community facility. Bull. Amer. Meteor. Soc., 90, 1283–1296, doi:10.1175/2009BAMS2618.1.
  • Baker, N., and R. Daley, 2000: Observation and background adjoint sensitivity in the adaptive observation-targeting problem. Quart. J. Roy. Meteor. Soc., 126, 1431–1454, doi:10.1002/qj.49712656511.
  • Barker, D., and Coauthors, 2012: The Weather Research and Forecasting Model’s Community Variational/Ensemble Data Assimilation System: WRFDA. Bull. Amer. Meteor. Soc., 93, 831–843, doi:10.1175/BAMS-D-11-00167.1.
  • Bergot, T., 1999: Adaptive observations during FASTEX: A systematic survey of upstream flights. Quart. J. Roy. Meteor. Soc., 125, 3271–3298, doi:10.1002/qj.49712556108.
  • Buizza, R., C. Cardinali, G. Kelly, and J.-N. Thépaut, 2007: The value of observations. II: The value of observations located in singular-vector-based target areas. Quart. J. Roy. Meteor. Soc., 133, 1817–1832, doi:10.1002/qj.149.
  • Burpee, R. W., S. D. Aberson, J. L. Franklin, S. J. Lord, and R. E. Tuleya, 1996: The impact of omega dropwindsondes on operational hurricane track forecast models. Bull. Amer. Meteor. Soc., 77, 925–933, doi:10.1175/1520-0477(1996)077<0925:TIOODO>2.0.CO;2.
  • Cardinali, C., and R. Buizza, 2003: Forecast skill of targeted observations: A singular-vector-based diagnostic. J. Atmos. Sci., 60, 1927–1940, doi:10.1175/1520-0469(2003)060<1927:FSOTOA>2.0.CO;2.
  • Chen, F., and J. Dudhia, 2001: Coupling an advanced land surface–hydrology model with the Penn State–NCAR MM5 modeling system. Part I: Model implementation and sensitivity. Mon. Wea. Rev., 129, 569–585, doi:10.1175/1520-0493(2001)129<0569:CAALSH>2.0.CO;2.
  • Clark, A. J., W. A. Gallus Jr., M. Xue, and F. Kong, 2010: Convection-allowing and convection-parameterizing ensemble forecasts of a mesoscale convective vortex and associated severe weather environment. Wea. Forecasting, 25, 1052–1081, doi:10.1175/2010WAF2222390.1.
  • Durran, D. R., and M. Gingrich, 2014: Atmospheric predictability: Why butterflies are not of practical importance. J. Atmos. Sci., 71, 2476–2488, doi:10.1175/JAS-D-14-0007.1.
  • Durran, D. R., and J. A. Weyn, 2016: Thunderstorms do not get butterflies. Bull. Amer. Meteor. Soc., 97, 237–243, doi:10.1175/BAMS-D-15-00070.1.
  • Garcies, L., and V. Homar, 2014: Are current sensitivity products sufficiently informative in targeting campaigns? A DTS-MEDEX-2009 case study. Quart. J. Roy. Meteor. Soc., 140, 525–538, doi:10.1002/qj.2148.
  • Gelaro, R., R. H. Langland, S. Pellerin, and R. Todling, 2010: The THORPEX observation impact intercomparison experiment. Mon. Wea. Rev., 138, 4009–4025, doi:10.1175/2010MWR3393.1.
  • Gilmour, I., L. Smith, and R. Buizza, 2001: On the duration of the linear regime: Is 24 hours a long time in weather forecasting? J. Atmos. Sci., 58, 3525–3539, doi:10.1175/1520-0469(2001)058<3525:LRDIHA>2.0.CO;2.
  • Hamill, T. M., 1999: Hypothesis tests for evaluating numerical precipitation forecasts. Wea. Forecasting, 14, 155–167, doi:10.1175/1520-0434(1999)014<0155:HTFENP>2.0.CO;2.
  • Hamill, T. M., F. Yang, C. Cardinali, and S. J. Majumdar, 2013: Impact of targeted winter storm reconnaissance dropwindsonde data on midlatitude numerical weather predictions. Mon. Wea. Rev., 141, 2058–2065, doi:10.1175/MWR-D-12-00309.1.
  • Hock, T. F., and J. L. Franklin, 1999: The NCAR GPS dropwindsonde. Bull. Amer. Meteor. Soc., 80, 407–420, doi:10.1175/1520-0477(1999)080<0407:TNGD>2.0.CO;2.
  • Iacono, M. J., J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, 2008: Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models. J. Geophys. Res., 113, D13103, doi:10.1029/2008JD009944.
  • Janjić, Z. I., 1994: The step-mountain Eta coordinate model: Further developments of the convection, viscous sublayer, and turbulence closure schemes. Mon. Wea. Rev., 122, 927–945, doi:10.1175/1520-0493(1994)122<0927:TSMECM>2.0.CO;2.
  • Janjić, Z. I., 2002: Nonsingular implementation of the Mellor–Yamada level 2.5 scheme in the NCEP Meso model. NCEP Office Note 437, 61 pp.
  • Joly, A., and Coauthors, 1999: Overview of the field phase of the Fronts and Atlantic Storm-Track Experiment (FASTEX) project. Quart. J. Roy. Meteor. Soc., 125, 3131–3163, doi:10.1002/qj.49712556103.
  • Kelly, G., J.-N. Thépaut, R. Buizza, and C. Cardinali, 2007: The value of observations. I: Data denial experiments for the Atlantic and the Pacific. Quart. J. Roy. Meteor. Soc., 133, 1803–1815, doi:10.1002/qj.150.
  • Langland, R. H., 2005: Issues in targeted observing. Quart. J. Roy. Meteor. Soc., 131, 3409–3425, doi:10.1256/qj.05.130.
  • Langland, R. H., and N. Baker, 2004: Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus, 56A, 189–201, doi:10.1111/j.1600-0870.2004.00056.x.
  • Langland, R. H., and Coauthors, 1999: The North Pacific Experiment (NORPEX-98): Targeted observations for improved North American weather forecasts. Bull. Amer. Meteor. Soc., 80, 1363–1384, doi:10.1175/1520-0477(1999)080<1363:TNPENT>2.0.CO;2.
  • Lin, Y., and K. E. Mitchell, 2005: The NCEP Stage II/IV hourly precipitation analyses: Development and applications. 19th Conf. on Hydrology, San Diego, CA, Amer. Meteor. Soc., 1.2. [Available online at http://ams.confex.com/ams/pdfpapers/83847.pdf.]
  • Liu, J., and E. Kalnay, 2008: Estimating observation impact without adjoint model in an ensemble Kalman filter. Quart. J. Roy. Meteor. Soc., 134, 1327–1335, doi:10.1002/qj.280.
  • Lorenc, A. C., and R. T. Marriott, 2014: Forecast sensitivity to observations in the Met Office global numerical weather prediction system. Quart. J. Roy. Meteor. Soc., 140, 209–224, doi:10.1002/qj.2122.
  • Majumdar, S. J., C. H. Bishop, B. J. Etherton, I. Szunyogh, and Z. Toth, 2001: Can an ensemble transform Kalman filter predict the reduction in forecast error variance produced by targeted observations? Quart. J. Roy. Meteor. Soc., 127, 2803–2820, doi:10.1002/qj.49712757815.
  • Majumdar, S. J., C. H. Bishop, B. J. Etherton, and Z. Toth, 2002a: Adaptive sampling with the ensemble transform Kalman filter. Part II: Field program implementation. Mon. Wea. Rev., 130, 1356–1369, doi:10.1175/1520-0493(2002)130<1356:ASWTET>2.0.CO;2.
  • Majumdar, S. J., C. H. Bishop, R. Buizza, and R. Gelaro, 2002b: A comparison of ensemble-transform Kalman-filter targeting guidance with ECMWF and NRL total-energy singular-vector guidance. Quart. J. Roy. Meteor. Soc., 128, 2527–2549, doi:10.1256/qj.01.214.
  • Majumdar, S. J., and Coauthors, 2011: Targeted observations for improving numerical weather prediction: An overview. World Weather Research Programme/THORPEX Publ. 15, 45 pp. [Available online at http://www.wmo.int/pages/prog/arep/wwrp/new/documents/THORPEX_No_15.pdf.]
  • Martin, C., 2007: ASPEN (Atmospheric Sounding Processing Environment) user manual. NCAR, Boulder, CO, 61 pp. [Available online at https://www.eol.ucar.edu/system/files/Aspen%2520Manual.pdf.]
  • Mason, S. J., and N. E. Graham, 2002: Areas beneath the relative operating characteristics (ROC) and relative operating levels (ROL) curves: Statistical significance and interpretation. Quart. J. Roy. Meteor. Soc., 128, 2145–2166, doi:10.1256/003590002320603584.
  • Mellor, G. L., and T. Yamada, 1982: Development of a turbulence closure model for geophysical fluid problems. Rev. Geophys., 20, 851–875, doi:10.1029/RG020i004p00851.
  • Mlawer, E. J., S. J. Taubman, P. D. Brown, M. J. Iacono, and S. A. Clough, 1997: Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res., 102, 16 663–16 682, doi:10.1029/97JD00237.
  • Parsons, D. B., and Coauthors, 2016: THORPEX Research and the Science of Prediction. Bull. Amer. Meteor. Soc., doi:10.1175/BAMS-D-14-00025.1, in press.
  • Roberts, N. M., and H. W. Lean, 2008: Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Wea. Rev., 136, 78–97, doi:10.1175/2007MWR2123.1.
  • Schwartz, C. S., and Coauthors, 2010: Toward improved convection-allowing ensembles: Model physics sensitivities and optimizing probabilistic guidance with small ensemble membership. Wea. Forecasting, 25, 263–280, doi:10.1175/2009WAF2222267.1.
  • Schwartz, C. S., G. Romine, K. Smith, and M. Weisman, 2014: Characterizing and optimizing precipitation forecasts from a convection-permitting ensemble initialized by a mesoscale ensemble Kalman filter. Wea. Forecasting, 29, 1295–1318, doi:10.1175/WAF-D-13-00145.1.
  • Schwartz, C. S., G. S. Romine, M. L. Weisman, R. A. Sobash, K. R. Fossell, K. W. Manning, and S. B. Trier, 2015: A real-time convection-allowing ensemble prediction system initialized by mesoscale ensemble Kalman filter analyses. Wea. Forecasting, 30, 1158–1181, doi:10.1175/WAF-D-15-0013.1.
  • Sellwood, K. J., S. J. Majumdar, B. E. Mapes, and I. Szunyogh, 2008: Predicting the influence of observations on medium-range forecasts of atmospheric flow. Quart. J. Roy. Meteor. Soc., 134, 2011–2027, doi:10.1002/qj.341.
  • Shapiro, M. A., and A. J. Thorpe, 2004: THORPEX international science plan, version III. World Meteorological Organization Tech. Doc. 1246, 51 pp.
  • Skamarock, W. C., and M. L. Weisman, 2009: The impact of positive-definite moisture transport on NWP precipitation forecasts. Mon. Wea. Rev., 137, 488–494, doi:10.1175/2008MWR2583.1.
  • Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp., doi:10.5065/D68S4MVH.
  • Sobash, R. A., J. S. Kain, D. R. Bright, A. R. Dean, M. C. Coniglio, and S. J. Weiss, 2011: Probabilistic forecast guidance for severe thunderstorms based on the identification of extreme phenomena in convection-allowing model forecasts. Wea. Forecasting, 26, 714–728, doi:10.1175/WAF-D-10-05046.1.
  • Sobash, R. A., C. S. Schwartz, G. S. Romine, K. Fossell, and M. Weisman, 2016: Severe weather prediction using storm surrogates from an ensemble forecasting system. Wea. Forecasting, 31, 255–271, doi:10.1175/WAF-D-15-0138.1.
  • Sommer, M., and M. Weissmann, 2014: Observation impact in a convective-scale localized ensemble transform Kalman filter. Quart. J. Roy. Meteor. Soc., 140, 2672–2679, doi:10.1002/qj.2343.
  • Szunyogh, I., Z. Toth, R. E. Morss, S. J. Majumdar, B. J. Etherton, and C. H. Bishop, 2000: The effect of targeted dropsonde observations during the 1999 Winter Storm Reconnaissance Program. Mon. Wea. Rev., 128, 3520–3537, doi:10.1175/1520-0493(2000)128<3520:TEOTDO>2.0.CO;2.
  • Tegen, I., P. Hollrig, M. Chin, I. Fung, D. Jacob, and J. Penner, 1997: Contribution of different aerosol species to the global aerosol extinction optical thickness: Estimates from model results. J. Geophys. Res., 102, 23 895–23 915, doi:10.1029/97JD01864.
  • Thompson, G., P. R. Field, R. M. Rasmussen, and W. D. Hall, 2008: Explicit forecasts of winter precipitation using an improved bulk microphysics scheme. Part II: Implementation of a new snow parameterization. Mon. Wea. Rev., 136, 5095–5115, doi:10.1175/2008MWR2387.1.
  • Tiedtke, M., 1989: A comprehensive mass flux scheme for cumulus parameterization in large-scale models. Mon. Wea. Rev., 117, 1779–1800, doi:10.1175/1520-0493(1989)117<1779:ACMFSF>2.0.CO;2.
  • Torn, R. D., 2014: The impact of targeted dropwindsonde observations on tropical cyclone intensity forecasts of four weak systems during PREDICT. Mon. Wea. Rev., 142, 2860–2878, doi:10.1175/MWR-D-13-00284.1.
  • Torn, R. D., and G. J. Hakim, 2008: Performance characteristics of a pseudo-operational ensemble Kalman filter. Mon. Wea. Rev., 136, 3947–3963, doi:10.1175/2008MWR2443.1.
  • Torn, R. D., and G. S. Romine, 2015: Sensitivity of central Oklahoma convection forecasts to upstream potential vorticity anomalies during two strongly forced cases during MPEX. Mon. Wea. Rev., 143, 4064–4087, doi:10.1175/MWR-D-15-0085.1.
  • Torn, R. D., G. J. Hakim, and C. Snyder, 2006: Boundary conditions for limited-area ensemble Kalman filters. Mon. Wea. Rev., 134, 2490–2502, doi:10.1175/MWR3187.1.
  • Weisman, M. L., C. A. Davis, W. Wang, K. W. Manning, and J. B. Klemp, 2008: Experiences with 0–36 h explicit convective forecasts with the WRF-ARW model. Wea. Forecasting, 23, 407–437, doi:10.1175/2007WAF2007005.1.
  • Weisman, M. L., and Coauthors, 2015: The Mesoscale Predictability Experiment (MPEX). Bull. Amer. Meteor. Soc., 96, 2127–2149, doi:10.1175/BAMS-D-13-00281.1.
  • Whitaker, J. S., and T. Hamill, 2002: Ensemble data assimilation without perturbed observations. Mon. Wea. Rev., 130, 1913–1924, doi:10.1175/1520-0493(2002)130<1913:EDAWPO>2.0.CO;2.
  • Wilks, D. S., 1997: Resampling hypothesis tests for autocorrelated fields. J. Climate, 10, 65–82, doi:10.1175/1520-0442(1997)010<0065:RHTFAF>2.0.CO;2.
  • Wilks, D. S., 2006: Statistical Methods in the Atmospheric Sciences. Academic Press, 627 pp.
  • Zhang, C., Y. Wang, and K. Hamilton, 2011: Improved representation of boundary layer clouds over the southeast Pacific in ARW-WRF using a modified Tiedtke cumulus parameterization scheme. Mon. Wea. Rev., 139, 3489–3513, doi:10.1175/MWR-D-10-05091.1.
  • Zhu, Y., and R. Gelaro, 2008: Observation sensitivity calculations using the adjoint of the Gridpoint Statistical Interpolation (GSI) analysis system. Mon. Wea. Rev., 136, 335–351, doi:10.1175/MWR3525.1.
1. THORPEX, which existed between 2005 and 2014 under the auspices of the WMO, was “A World Weather Research Program accelerating improvements in the accuracy of one day to two week high-impact weather forecasts for the benefit of society, the economy and the environment” (Shapiro and Thorpe 2004).

2. Surface height (pressure) varies considerably among dropsonde locations (see Fig. 3a).
