Improving High-Impact Forecasts through Sensitivity-Based Ensemble Subsets: Demonstration and Initial Tests

Brian C. Ancell, Texas Tech University, Lubbock, Texas

Abstract

Ensemble sensitivity can reveal features in the flow at early forecast times that are relevant to the predictability of chosen high-impact forecast aspects (e.g., heavy precipitation) later in time. In an operational environment, it thus might be possible to choose ensemble subsets with improved predictability over the full ensemble if members with the smallest errors in regions of large ensemble sensitivity can be identified. Since numerous observations become available hourly, such a technique is feasible and could be executed well before the next assimilation/extended forecast cycle, potentially adding valuable lead time to forecasts of high-impact weather events. Here, a sensitivity-based technique that chooses subsets of forecasts initialized from an 80-member ensemble Kalman filter (EnKF) is tested by ranking 6-h errors in sensitive regions toward improving 24-h forecasts of landfalling midlatitude cyclones on the west coast of North America. The technique is first tested within an idealized framework with one of the ensemble members serving as truth. Subsequent experiments are performed in more realistic scenarios with an independent truth run, observation error added, and sparser observations. Results show the technique can indeed produce ensemble subsets that are improved relative to the full ensemble for 24-h forecasts of landfalling cyclones. Forecast errors are found to be smallest when errors are evaluated over the greatest 70% of ensemble sensitivity magnitudes with subsets of 5–30 members, as well as when only the cases with the largest forecast spread are considered. Finally, this study presents considerations for extending this technique into fully realistic situations with regard to additional high-impact events.

Corresponding author address: Brian C. Ancell, Texas Tech University, Dept. of Geosciences, Box 41053, Lubbock, TX 79409. E-mail: brian.ancell@ttu.edu

1. Introduction

Mesoscale ensemble prediction is now becoming a reality for operational forecasting guidance. While the need to choose between larger ensembles or higher resolution was common in the recent past, modern computing resources now allow real-time integration of sizeable ensembles (perhaps 25–50 members) at convection-permitting horizontal grid spacing (a few km) over regional domains. One example of such a system is the Texas Tech University (TTU) Weather Research and Forecasting (WRF) Model ensemble Kalman filter (EnKF; Evensen 1994), which nests down to 4-km resolution with 42 ensemble members to provide 48-h forecasts twice daily (data assimilation occurs on a 6-h cycle). The constraint of running extended forecasts only every 12 h is primarily a result of the large computational expense of integrating all ensemble forecasts quickly enough for real-time use, and is generally representative of the operational capability today and likely in the foreseeable future.

The unfortunate need for long periods of time between extended forecast initializations ultimately means that high-impact forecasts wait equally as long to be updated with new observational information (this issue is summarized in Fig. 1). While it is computationally prohibitive to update the extended forecasts more frequently through data assimilation and model integration, substantial extended forecast value might be realized if the idle observations shown in Fig. 1 could be incorporated into the forecast without the need for full data assimilation and subsequent forecasts. Previous studies have explored the potential benefits of this type of approach, some using statistical relationships across time in the ensemble to update forecasts in lieu of performing traditional data assimilation and new model runs. Etherton (2007) used a barotropic vorticity model to show that preemptive forecasts, or updated 48-h forecasts based on observations at 24-h forecast time, were better than the original 48-h forecasts and nearly as good as forecasts run after assimilating the observations themselves at 24-h forecast time. Madaus and Hakim (2015) extended this technique (termed ensemble forecast adjustment) to a more operational framework within ensemble systems using both European Centre for Medium-Range Weather Forecasts (ECMWF) and Canadian Meteorological Centre (CMC) global models. By adjusting 12–30-h forecasts based on surface pressure observations at 6-h forecast time, they found statistically significant error reductions throughout the forecast window. Dong and Zhang (2016) found that subsets of ensemble forecasts with a closer fit to analyzed tropical cyclone tracks produced better predictions out to 5-day forecast time than the full ensemble. Each of these studies suggests that computationally cheap operational ensemble-based techniques can be developed that use observations in between extended forecast initializations to improve forecasts.

Fig. 1. Illustration of the typical timing constraints of a system that employs 6-h assimilation cycling with 36-h extended forecasts.

More generally, a number of observation sensitivity and targeting studies have shown that specific observations identified through some form of forecast sensitivity can provide substantial benefits with regard to forecast skill. Early studies by Baker and Daley (2000) and Langland and Baker (2004) showed that observation sensitivity, a quantity based on the adjoint of both the data assimilation and modeling system, can quantify which assimilated observations have the most impact on a chosen forecast metric. Later studies expanded this technique, and further accounted for nonlinearity associated with the adjoint formulation (Tremolet 2008; Gelaro and Zhu 2009; Cardinali 2009; Gelaro et al. 2010). Several targeting studies have been performed using singular vectors or adjoint sensitivity (Buizza and Montani 1999; Gelaro et al. 1999; Langland et al. 1999; Liu and Zou 2001) or ensemble techniques (Bishop et al. 2001; Liu and Kalnay 2008) that show varying degrees of forecast skill improvement attributed to the assimilation of targeted observations. The underlying theme of these studies is the ability to estimate where observations are most likely to benefit forecasts using dynamical forecast sensitivity, the properties of the data assimilation procedure, or both.

Here, an approach is introduced that takes advantage of the ability of cross-time ensemble covariances to highlight specific weather features that are most relevant to the predictability of high-impact aspects of extended forecasts. Specifically formulating the statistical relationships between high-impact extended forecast features (e.g., depth of a midlatitude cyclone at 36 h) and the atmospheric state at an earlier time (e.g., 500-hPa geopotential height at 6 h) is known as ensemble sensitivity analysis (ESA; Hakim and Torn 2008; Ancell and Hakim 2007; Torn and Hakim 2008a). ESA has been successfully applied to a variety of high-impact forecast problems including landfalling midlatitude cyclones (Ancell and McMurdie 2013; McMurdie and Ancell 2014), wind power applications (Zack et al. 2010a,b,c), tropical disturbances and extratropical transitions (Torn and Hakim 2009; Torn 2010), and convection (Hanley et al. 2013; Bednarczyk and Ancell 2015; Hill et al. 2013; Torn and Romine 2015). Since ESA has been shown to be successful in identifying features related to the predictability of such events, it is reasonable to expect ensemble members with the smallest errors in sensitive regions early in the forecast window to most accurately forecast the event. The goal of this study is to develop and test the use of ESA to choose ensemble members early in a forecast window that are most likely to provide the best forecasts of specific, high-impact events. Retaining the best ensemble members with this technique may be advantageous over adjusting the mean forecast (as in Madaus and Hakim 2015) in highly nonlinear situations (e.g., severe convection) where the mean can diverge from the attractor and become unrealistic (discussed in Ancell 2013). Operationally, errors could be calculated as observations become available at early forecast times, and sensitivity-weighted errors could provide ensemble subsets that may improve the predictability of severe weather over that of the original ensemble. Since this process can generally be achieved hours prior to the next extended forecast cycle that captures the event, it has the potential to add valuable lead time to forecasts of significant weather.

A number of practical considerations must be satisfied for this technique to be effective: 1) ensemble spread should exceed observational error variance to the degree where comparisons between observations and ensemble members are meaningful, 2) observations must exist with sufficient frequency in sensitive areas, 3) nonlinear ensemble perturbation evolution must not be so large that it renders ensemble sensitivity inaccurate (discussed in Ancell and Hakim 2007), and 4) model error must be small enough that it does not introduce substantial error into the ensemble sensitivity fields. While these considerations are obviously important to the operational application of the proposed technique, it is useless to address them unless sensitivity can be shown to effectively improve predictability within a simple framework without such complicating factors. In turn, this study first aims to demonstrate whether ensemble sensitivity can effectively be used within an idealized framework to produce ensemble subsets that provide improved forecasts of midlatitude landfalling cyclones over the full ensemble. Additional experiments are performed to account for the realities of observational error, a nature run that is independent of the ensemble, and sparse observations. This paper is organized as follows. Section 2 provides a background on ensemble sensitivity and the methodology behind the proposed technique, section 3 gives results and discussion, and the summary and conclusions are provided in section 4.

2. Background and methodology

a. Ensemble sensitivity background

Ensemble sensitivity was first described in Hakim and Torn (2008) and further explored in Ancell and Hakim (2007) and Torn and Hakim (2008a). Essentially, a chosen forecast aspect (referred to herein as a scalar response function R) is linearly regressed onto all model state variables within an ensemble of forecasts at either the forecast time of the response function or at any earlier forecast time (including the initial conditions). The slope of these linear regressions at a given time is the N × 1 ensemble sensitivity vector ∂R/∂x, where x represents the model state with state-space dimension N. While a univariate linear regression within a multivariate system would seemingly have its limitations, ensemble sensitivities possess a deep dynamical meaning. Ancell and Hakim (2007) show how ensemble sensitivity values represent the change in the response function from a perturbation at not only a single point, but at every point and model variable that covaries (covariances calculated within the ensemble) with that single point. This inclusion of covariances is the sole difference between the ensemble and adjoint sensitivities [adjoint sensitivity is described in Errico (1997)]. In turn, ensemble sensitivities effectively pick out coherent features in the flow (e.g., upper-level geopotential height troughs, or midlevel temperature gradients) that have dynamical relevance to the response function.
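
As a concrete illustration (not code from the paper), the ensemble sensitivity at each grid point is simply the regression slope cov(x_i, R)/var(x_i) computed across members. A minimal Python/NumPy sketch, with hypothetical array names, could look like the following:

```python
import numpy as np

def ensemble_sensitivity(state, response):
    """Regression slope of the response function onto each state variable.

    state    : (n_members, n_points) array of a forecast field (e.g., 0-h or 6-h SLP)
    response : (n_members,) array of the response function (e.g., 24-h cyclone central pressure)
    returns  : (n_points,) ensemble sensitivity dR/dx
    """
    x_pert = state - state.mean(axis=0)               # ensemble perturbations
    r_pert = response - response.mean()               # response perturbations
    cov_xr = (x_pert * r_pert[:, None]).mean(axis=0)  # cov(x_i, R) at each point
    return cov_xr / x_pert.var(axis=0)                # slope = cov/var
```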

The features shown to be most sensitive are those that must be accurate at early forecast times to well predict the chosen response function later in the forecast window. Figures 2–4 show an example of a 24-h forecast of a deepening cyclone (initialized 0000 UTC 19 November 2009) soon to make landfall on the west coast of North America and the ensemble sensitivity field associated with this cyclone. Such cyclones are high-impact phenomena for the Pacific Northwest region of the United States, potentially bringing strong winds and heavy precipitation to the area (Mass and Dotson 2010). At the initial time, a 500-hPa vorticity maximum (indicated by the black arrow in Fig. 2) propagated toward the southeast, moving in over a preexisting sea level pressure trough (shown by the dashed black line in Fig. 3) along a baroclinic zone. This sea level pressure trough extended southwest from an area of low pressure in the Gulf of Alaska, and subsequent cyclogenesis occurred in the form of a frontal wave over the next 24 h. Figures 2 and 3 depict this development, clearly showing an amplifying 500-hPa geopotential height trough (Fig. 2) just upstream of a surface cyclone that deepens about 15 hPa in this time. Figure 4 shows the ensemble sensitivity of 24-h forecast sea level pressure surrounding the ensemble mean cyclone (averaged over a 216 km × 216 km box surrounding the cyclone center) to 0-h sea level pressure. While we do not attempt an in-depth interpretation of sensitivity fields in this study, it is clear the depth of the incipient trough along 150°W, the depth of the trough extending southwest from the area of low sea level pressure near 50°N and 130°W, and the ridge of high pressure near 120°W south of 40°N are all relevant at 0 h to the prediction of the 24-h forecast landfalling cyclone shown in Fig. 3. Similar cyclones are tested in this study to determine whether sensitive regions like these can be used to identify the best-performing ensemble members later in the forecast period.

Fig. 2. The 0-, 12-, and 24-h forecasts of 500-hPa GPH (black contours; contour interval is 30 m), 500-hPa absolute vorticity (shaded), and 500-hPa winds (barbs) for the forecast initialized at 0000 UTC 19 Nov 2009.

Fig. 3. The 0-, 12-, and 24-h forecasts of SLP (black contours; contour interval is 2 hPa), 925-hPa temperature (shaded), and 10-m winds (barbs) for the forecast initialized at 0000 UTC 19 Nov 2009.

Fig. 4. Ensemble sensitivity of the 24-h cyclone central pressure with respect to 0-h SLP (shaded) and ensemble mean 0-h SLP (black contours; contour interval is 2 hPa) valid at 0000 UTC 19 Nov 2009.

Figure 5 illustrates how ensemble sensitivity, along with observations, may be valuable in revealing ensemble subsets that are more skillful than the full ensemble. For a single landfalling cyclone (details on the entire cyclone dataset are presented below in section 2d), the 6-h sea level pressure (SLP) differences between a randomly chosen ensemble member (used as truth) and every other ensemble member within an 80-member EnKF forecast were projected onto (multiplied by) the ensemble sensitivity of the 24-h cyclone central pressure (the response function) with respect to 6-h SLP. This calculation yields an ensemble-sensitivity-based estimate of the error in the response function through the following equation:
ΔR_i = (∂R/∂x_i)(x_i^m − x_i^t),  (1)

where ∂R/∂x_i is the ensemble sensitivity at location i, x_i^m is the 6-h value of ensemble member m at that location, and x_i^t is the corresponding 6-h value from the truth run.
Since the calculation in Eq. (1) utilizes the ensemble sensitivity field that intrinsically considers all state variables through ensemble covariances (described in Ancell and Hakim 2007), the resulting ΔR values are averaged over all locations where the projection was performed to become the x-axis values in Fig. 5 (the projected absolute response function errors). The actual response function error of each ensemble member at 24 h is also calculated, and represents the y-axis values.
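
As a sketch of this projection (hypothetical array names; the exact ordering of the absolute value and the spatial averaging is assumed here), each member's 6-h difference from truth is multiplied by the sensitivity field point by point as in Eq. (1), and the results are averaged over the projection points:

```python
import numpy as np

def projected_response_error(state_6h, truth_6h, sensitivity):
    """Sensitivity-based estimate of each member's 24-h response function error.

    state_6h    : (n_members, n_points) 6-h forecasts of the chosen variable
    truth_6h    : (n_points,) 6-h field of the truth run
    sensitivity : (n_points,) dR/dx with respect to the 6-h field
    returns     : (n_members,) projected absolute response function error
    """
    delta_r = (state_6h - truth_6h) * sensitivity   # Eq. (1) applied at every point
    return np.abs(delta_r.mean(axis=1))             # average over the projection points
```
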
Fig. 5. Scatterplot of sensitivity-estimated 24-h response function error (x axis) against the actual 24-h response function error (y axis).

Note two properties of the plot in Fig. 5: the origin is relatively close to the best-fit line through the data, and the slope is positive. These properties mean that ensemble sensitivity with observed errors at 6-h forecast time is able to provide a meaningful estimate of actual error at 24-h forecast time. In turn, a mean estimate based on a subset of 24-h projected errors (the x direction) that are closest to zero should yield the members that also have a mean close to zero in terms of the actual error (the y direction). Subsequently, the mean of the actual 24-h errors of the subset will likely be smaller in magnitude than the mean of the actual 24-h errors of all members. The method would be expected to be less successful if the mean 24-h error of the full ensemble is already near zero, or if very large scatter exists. If the characteristics of Fig. 5 were consistent over many cases, substantial improvements to 24-h forecasts of high-impact events might be possible using early forecast observations and sensitivity fields.

b. The modeling system

Here, we use the Advanced Research version of the WRF Model, version 3.0.1.1 (Skamarock et al. 2008), for all experiments. The physics options used are the Mellor–Yamada–Janjić (MYJ) planetary boundary layer scheme (Janjić 1990, 1996, 2002), the Kain–Fritsch cumulus parameterization (Kain and Fritsch 1990; Kain and Fritsch 1993), the Noah land surface model (Chen and Dudhia 2001), the WRF single-moment 3-class microphysics scheme (Hong et al. 2004), the Rapid Radiative Transfer Model (RRTM) longwave radiation scheme (Mlawer et al. 1997), and the Dudhia shortwave radiation scheme (Dudhia 1989). The modeling domain is that shown in Fig. 2 and is composed of 38 vertical levels at 36-km grid spacing.

c. The ensemble system

The University of Washington ensemble Kalman filter (Torn and Hakim 2008b) is used here to provide the ensemble within which the experiments are performed. This is an 80-member square root filter (Whitaker and Hamill 2002) that assimilates thousands of observations on a 6-h assimilation cycle. These observations include Aircraft Communications Addressing and Reporting System (ACARS) winds and temperatures; satellite cloud-track winds, radiosonde temperatures, winds, and relative humidities; and surface mesonet, marine, and METAR winds, temperatures, and pressures. A fractional additive inflation technique is used here with the same parameter values that were tuned in Torn and Hakim (2008b) on a very similar grid in order to produce appropriate spread (Anderson and Anderson 1999). Other elements from Torn and Hakim (2008b) were used here, including a Gaspari–Cohn horizontal localization radius (Gaspari and Cohn 1999) of 2000 km, boundary conditions generated about Global Forecast System (GFS) forecasts through the fixed covariance perturbation technique of Torn et al. (2006), and observation errors that are assumed to be uncorrelated.

d. Experimental setup

The experiments in this study all pertain to 24-h forecasts of landfalling cyclones over the west coast of North America during the 2009/10 winter season. These cyclones are the same as those used in McMurdie and Ancell (2014), and were identified using an algorithm that searched for minima in the offshore sea level pressure field and were visually inspected for accuracy. The forecasts over which the algorithm searched were 24-h mean forecasts produced from a 6-h cycling EnKF over the entire winter season. As noted in McMurdie and Ancell (2014), the same cyclones can be used more than once in this examination since they exist at multiple times in the coastal zone, although each still provides an independent forecast trajectory along which the proposed sensitivity technique can be tested.

Ensemble sensitivity of 24-h forecast cyclone central pressure (sea level pressure within a 7 × 7 gridpoint box surrounding the mean cyclone center) for each cyclone is calculated with respect to 6-h 500- and 850-hPa geopotential heights (GPHs), winds, and temperatures, as well as 6-h SLP, 2-m temperature, and 10-m winds. These variables are used here as they are typically observable (at least in the case of winds and temperatures) and constitute common meteorological variables at standard levels, making any extension to future applications based on the techniques proposed here more straightforward. The first experiment (ENS_TRUTH) uses a single member chosen at random from the 80 ensemble members to represent truth, and comparisons are made between each of the remaining ensemble members and the truth run with regard to all of the variables listed above for which sensitivity is calculated. An additional experiment (ENS_TRUTH_ERROR) is performed that is the same as ENS_TRUTH, but with random 6-h observation errors added to the observations drawn from the truth run. These errors were drawn from a normal distribution of mean 0 and standard deviations of 1 K, 1.5 m s−1, 10 m, and 1 hPa for the simulated temperature, wind, GPH, and SLP observations, respectively. Two additional experiments are run using a completely independent truth simulation produced with the WRF Model initialized from the GFS. These experiments used both perfect observations (GFSWRF_TRUTH) and observation errors as described above (GFSWRF_TRUTH_ERROR).
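
For reference, the simulated observation errors used in the ENS_TRUTH_ERROR and GFSWRF_TRUTH_ERROR experiments could be generated as in the short sketch below (hypothetical names; standard deviations as listed above):

```python
import numpy as np

rng = np.random.default_rng(42)          # seed chosen arbitrarily for the sketch

# Standard deviations of the simulated 6-h observation errors
OBS_ERROR_STD = {"temperature": 1.0,     # K
                 "wind":        1.5,     # m s-1
                 "gph":        10.0,     # m
                 "slp":         1.0}     # hPa

def perturb_truth(truth_field, variable):
    """Add zero-mean Gaussian observation error to values drawn from the truth run."""
    sigma = OBS_ERROR_STD[variable]
    return truth_field + rng.normal(0.0, sigma, size=truth_field.shape)
```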

Ensemble subsets are chosen by multiplying each ensemble member's 6-h difference from the truth run by the sensitivity field (which effectively weights the errors with sensitivity values and is hereafter referred to as the projection method), as well as by using RMS errors in sensitive regions (referred to as the RMS method) over the greatest 100%, 90%, 70%, 50%, 30%, 10%, and 1% of the ensemble sensitivity magnitudes. In other words, these calculations are performed respectively over the entire domain (greatest 100% of sensitivity values), and over sensitivity thresholds of 10%, 30%, 50%, 70%, 90%, and 99%. For reference, Fig. 6 depicts the successively smaller areas (shown in color) that are used to calculate subsets across increasing values of the sensitivity threshold. These experiments are designed to achieve an understanding of whether sensitive areas provide enhanced value over nonsensitive areas using the proposed techniques by increasingly utilizing areas of higher sensitivity. The subsets are chosen as the members that possess the smallest projected response function error (for the projection method) or the smallest RMS error in sensitive regions (for the RMS method) for the specified variable only (e.g., 500-hPa temperature). The ensemble subset size is also varied from 1 to 80 to reveal whether a specific subset size provides the greatest benefits. For all experiments, the subset mean is compared to the full ensemble mean for the cyclone cases to measure the skill of the techniques.
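
A sketch of this subsetting procedure (hypothetical names, building on the sensitivity and projection sketches above): points exceeding the chosen sensitivity-magnitude threshold define the region over which 6-h errors are scored, members are ranked by either the projection or the RMS score, and the smallest-scoring members form the subset whose mean is compared with the full ensemble mean.

```python
import numpy as np

def choose_subset(state_6h, truth_6h, sensitivity, subset_size=8,
                  threshold_pct=30.0, method="rms"):
    """Indices of the members with the smallest 6-h errors in sensitive regions.

    threshold_pct : sensitivity threshold (30 keeps the greatest 70% of
                    sensitivity magnitudes; 0 uses the entire domain)
    method        : "projection" (sensitivity-weighted errors) or "rms"
    """
    mag = np.abs(sensitivity)
    pts = mag >= np.percentile(mag, threshold_pct)   # points exceeding the threshold
    diff = state_6h[:, pts] - truth_6h[pts]          # 6-h error of each member there

    if method == "projection":
        score = np.abs((diff * sensitivity[pts]).mean(axis=1))
    else:                                            # RMS error in sensitive regions
        score = np.sqrt((diff ** 2).mean(axis=1))

    return np.argsort(score)[:subset_size]           # smallest-error members

# Usage sketch (hypothetical arrays):
# members      = choose_subset(t500_6h, t500_truth_6h, dRdx)
# subset_error = abs(r24h[members].mean() - r24h_truth)
# full_error   = abs(r24h.mean() - r24h_truth)
```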

Fig. 6. Ensemble sensitivity (shown in color) exceeding different sensitivity thresholds.

3. Results and discussion

a. Idealized experiments

The subset mean for nearly all runs based on GPH, temperature, and winds at 500 hPa, 850 hPa, or at the surface is improved over that of the full ensemble (results summarized in Table 1). Figures 7 (raw data) and 8 (histograms of differences between the subset and full ensemble mean) show results with regard to temperature for these runs, depicting the subset mean absolute response function error relative to the absolute response function error of the full ensemble mean for all cyclone cases for the ENS_TRUTH experiments. These experiments are fairly idealized since truth was drawn from the same distribution as the ensemble (effectively implying a perfectly reliable ensemble), and neither model nor observation errors are considered. Both the projection and RMS methods are shown (calculated over the greatest 70% of the sensitivity magnitudes or, equivalently, the 30% sensitivity threshold), and an eight-member subset is used. Averaged over all cyclones, the projection method of subsetting reduces the error relative to the full ensemble by 23% (2.25 vs 1.74 hPa) for 500-hPa temperature, by 12% (2.25 vs 1.98 hPa) for 850-hPa temperature, and by 16% (2.25 vs 1.89 hPa) with regard to 2-m temperature. For the RMS method of subsetting, error reductions over the full ensemble are 19% for 500-hPa temperature (2.25 vs 1.82 hPa), 17% for 850-hPa temperature (2.25 vs 1.86 hPa), and 20% for 2-m temperature (2.25 vs 1.79 hPa). All of these subset improvements are statistically significant at the 95% confidence level using a one-sided Student’s t test. As seen in Table 1, the results with regard to other variables are quite similar, all showing significant improvements (with the exception of 500-hPa GPH).

Table 1. Experimental results and significance for the projection and RMS techniques for the idealized runs for winds U and V, temperature T, and GPH at 500 hPa, 850 hPa, and the surface.

Fig. 7. Absolute 24-h response function error for all cyclone cases for the full ensemble mean (red) and the ensemble subset (blue) for 500-hPa, 850-hPa, and 2-m temperature for both the RMS and projection techniques.

Fig. 8. Histograms depicting the number of occurrences of the differences between the means of the ensemble subsets and the full ensemble shown by the raw data in Fig. 7 for the RMS and projection techniques with regard to 500-hPa, 850-hPa, and 2-m temperature.

The histograms in Fig. 8 all show that the maximum in occurrences is shifted toward negative values, indicating the higher frequency of subset mean improvements over degradations, leading to the improved average performance of the subsets discussed above. The success rate of the technique (defined as the percentage of cyclone cases for which the error of the subset mean is smaller than that of the full ensemble mean) was similar for 500-hPa, 850-hPa, and 2-m temperature and ranged from 56% to 67%. As anticipated through the discussion in section 2a, the average error of the full ensemble mean was substantially lower for failure cases (e.g., a mean response function error of 2.79 hPa versus 1.76 hPa for successes and failures, respectively, for 500-hPa temperature), indicating the higher likelihood of success when the full ensemble forecast error is larger. This reveals the difficulty of the technique in reducing error when the error is already relatively small. It is also true that large degradations are less likely than large improvements with this technique. For example, only 28 of the 81 unsuccessful cases for the RMS technique with regard to 2-m temperature degrade the error by more than 1 hPa, while 62 of the 117 successful cases improve the error by more than 1 hPa.

Since it was shown above that the proposed sensitivity-based techniques for choosing improved ensemble subsets are most successful when the mean 24-h forecast was least accurate, it might be possible to improve the success rates of this method by only applying them to cases that satisfy these criteria. Of course, it would be impossible to know how good or bad the ensemble mean forecast is prior to verification. However, in a probabilistic sense under Gaussian assumptions, one could expect the mean forecast error to be larger on average for cases with larger spread. This was shown to be true in Whitaker and Loughe (1998), with the best spread–skill relationships found for cases when the spread was substantially larger than its average value. Given those results and the availability of the ensemble spread of the 24-h SLP response function prior to the occurrence of the event in question, it may be possible to apply the proposed sensitivity technique more successfully if only cases that have the largest spread are considered. Here, we choose the half of the cyclone cases from the original dataset with the largest mean forecast errors, as well as the half of the cyclone cases with the largest spread to understand if the technique’s success increases with larger mean error and/or spread. Figure 9 is the same as Fig. 7 but for cases of high error and high spread for the projection method (Fig. 10 depicts the related histograms), and Fig. 11 shows high error and high spread results for the RMS method (Fig. 12 shows related histograms). For the cases with the highest error (spread), the success rate jumps to between 71% and 80% (58% and 71%), with error reductions of the subset mean over the ensemble mean reaching 33% (30%). The associated histograms show enhanced favoring of negative values, providing another perspective on the improved performance of the subset means for cases of high error and high spread. While results show that the application to cases of highest spread (which can be done a priori) does not quite achieve the benefits for the cases of highest error (which cannot be determined before forecasts are verified), the sensitivity-based subsets are still better for cases of the highest spread than for the entire set of cyclone cases.

Fig. 9. Absolute 24-h response function error for the cyclones with the largest ensemble mean error, as well as the largest ensemble spread, for the full ensemble mean (red) and the ensemble subset (blue) for 500-hPa, 850-hPa, and 2-m temperature for the projection technique.

Fig. 10. Histograms depicting the number of occurrences of the differences between the means of the ensemble subsets and the full ensemble shown by the raw data in Fig. 9 for cases of high error and high spread for the projection technique with regard to 500-hPa, 850-hPa, and 2-m temperature.

Fig. 11. As in Fig. 9, but for the RMS technique.

Fig. 12. As in Fig. 10, but for the RMS technique.

An important test to ensure the interpretation that the proposed subsetting techniques are successful based on their use of sensitivity information is to also examine whether the largest errors in sensitive regions degrade the forecast. This test was performed using the mean of the eight members with the largest errors in sensitive regions (as opposed to the eight members with the smallest errors in sensitive regions), and showed substantial degradations (as opposed to substantial improvements). For example, using the RMS technique with the 30% threshold for the 500-hPa temperature sensitivity field, the full ensemble mean error of 2.25 hPa was degraded by 60% to 3.60 hPa. In turn, the sensitivity field is able to discriminate between good and bad 24-h forecast members, or at least the best and worst members, at 6-h forecast time.

Two final analyses were performed with the proposed techniques: performing the technique using only sensitivity values that are tested to be significantly different than zero at 90% confidence (which might be expected to improve the method), and using sparser observations over which the comparison to ensemble members is performed in sensitive regions (which is more likely to be the case in practice). These analyses used the projection method, the 30% sensitivity threshold, an eight-member subset, and the sensitivity field with respect to temperature at all three levels. Significance testing of the sensitivity values led to mixed results, showing no change at 500 hPa, a degradation at 850 hPa (2.40 vs 1.98 hPa), and an improvement at the surface (1.69 vs 1.89 hPa). It is possible the degradation occurs from using only larger values of sensitivity, which are more likely to pass the significance test (discussed in Ancell and Hakim 2007). In any case, it seems significance testing will likely not produce overall improvements. Testing sparser observations by performing the projection estimates at every 2nd, 6th, and 10th grid point in sensitive regions resulted in little change compared with using all points in sensitive regions. Results varied from 1.80 to 1.82 hPa for 500-hPa temperature (using all grid points produced 1.74 hPa), 2.00 to 2.03 hPa for 850-hPa temperature (using all grid points produced 1.98 hPa), and 1.87 to 1.89 hPa for 2-m temperature (using all grid points produced 1.89 hPa). Thus, using sparser observations produces little to no degradation of the subset mean compared with using all points and still shows significant improvement over the full ensemble mean. This likely reveals the intrinsic covariances between neighboring points (using many points is probably redundant), and shows an advantageous characteristic of the proposed method in that similar benefits can be achieved with sparser observations, a scenario that is expected to be more likely in reality.

b. Observation error and independent truth runs

In an effort to understand whether the success of this technique within an idealized framework might extend to more realistic scenarios, additional experiments were performed that considered observational error (ENS_TRUTH_ERROR), an independent forecast for the truth run (GFSWRF_TRUTH), and both an independent truth run and observational error (GFSWRF_TRUTH_ERROR). Results were produced using the 30% sensitivity threshold, an eight-member subset, and the RMS technique. The ENS_TRUTH_ERROR experiment showed significant improvements (ranging from 24% to 36%) for the ensemble subset over that of the full ensemble mean for all variables tested (temperature, GPH, and winds at all three levels) at 95% confidence (results summarized in Table 2). Surprisingly, these improvements were larger than those associated with the ENS_TRUTH experiment that did not include observation error. The reason why the inclusion of the observation error improves the subset means is unclear and is counter to expectations. The use of observation error causes a looser fit to truth for all ensemble members generally, but results in a reordering of the fit to observations in sensitive areas such that ensemble subset means are improved. Nonetheless, this result is encouraging for the proposed technique since observation error is a characteristic the technique will always encounter with real-data cases.

Table 2. As in Table 1, but for only the RMS technique and the experiments that include observation error and use the GFSWRF run as truth.

Practically no improvements were found for either of the GFSWRF experiments (GFSWRF_TRUTH and GFSWRF_TRUTH_ERROR), which used a run independent from the ensemble as truth. Furthermore, the differences between these two runs were statistically insignificant. These runs incorporate some degree of model error since the initial conditions from the truth run are based on a different model (the GFS), but do not fully represent the model error since the truth run integration within WRF uses the same physics as the ensemble. The lack of success is probably because the full ensemble error averaged over all cases is lower (by about 17%) when using the independent GFSWRF run as truth (instead of a randomly chosen ensemble member), providing relatively small errors in the first place that are difficult for the subset techniques to further reduce. Even when considering only the cyclone cases with the highest spread, improvements were only a couple of percent and were not significant. These results show the potential limitations of the proposed technique when some degree of model error exists and full ensemble mean errors are relatively small.

c. Sensitivity threshold and ensemble subset size

Figure 13 compares, for 500-hPa, 850-hPa, and 2-m temperature, the average subset errors for all cyclones when using different thresholds of sensitivity magnitudes (0%, 10%, 30%, 50%, 70%, 90%, and 99%) over which the 6-h RMS errors and response function projections are calculated. The reasons for this comparison are to 1) test whether the sensitivity guidance improves subsets relative to subsets chosen without sensitivity guidance (equivalent to comparing errors to truth over the whole domain, or using a sensitivity threshold of 0%), and 2) determine whether an optimal threshold of sensitivity values exists for the proposed technique. Results are different for the RMS and projection methods. For the projection method, a general upward trend in subset mean error exists as increasingly larger sensitivity thresholds are considered (smaller and smaller calculation areas), particularly above the 50% threshold at 850 and 500 hPa, and across all thresholds at 2 m. At 850 hPa (500 hPa), there is no statistically significant improvement at thresholds of 50% (70%) and below. For all three levels, results are significantly better (at the 95% confidence level) using the whole field than by using only the highest sensitivity values (thresholds of 90% and greater). This indicates that the projection method, which involves weighting errors by sensitivity values, is increasingly more successful as more sensitivity values are considered. Such a result comes as no surprise since it might be expected that ignoring information regarding the evolution of error (portions of the sensitivity field) would degrade the forecast skill. Furthermore, this shows the inability of ensemble sensitivity (in comparison to adjoint sensitivity) to identify localized regions of so-called key analysis errors examined in Klinker et al. (1998) that almost completely and exclusively influence forecast skill. As discussed in Ancell and Hakim (2007), this likely is a result of the fundamental difference between the two types of sensitivity, regarding both their characteristic scales and whether they account for background error covariances.

Fig. 13. Ensemble subset mean errors averaged over all cyclones using different sensitivity thresholds for both the RMS and projection techniques.

Results using the RMS method show a different pattern of success for the different sensitivity thresholds. For all levels, the greatest success is achieved at the 30% sensitivity threshold, and the subset errors at this threshold are significantly smaller than either calculating errors over the entire field (the 0% threshold) or only the greatest 1% of sensitivity values (the 99% threshold). Most importantly, the reduction of error at the 30% threshold relative to the 0% threshold reveals the value of calculating errors in only sensitive regions (specified by some cutoff) relative to considering the smallest errors averaged across the entire domain. Unlike the projection method, the RMS method does not weight the error by sensitivity values and, thus, shows a benefit to using more sensitive regions. However, subset errors also get significantly larger when using only the smallest, most sensitive areas to calculate the 6-h RMS errors. Like the results associated with the projection method, this probably indicates the detrimental effects of ignoring successively larger areas that contain beneficial sensitivity information. In turn, this indicates a limitation for the proposed techniques: if fewer observations were available (such as in more realistic situations), even in the most sensitive regions, the improvements are likely to be smaller [similar to results found for the observation targeting in Bergot (1999)]. In any case, both the RMS and projection techniques demonstrate added value through sensitivity-based subsets given the appropriate threshold, and such a threshold would need to be determined for any specific choice of forecast aspect, as well as the ensemble and modeling configuration.

Subset size was also tested to reveal whether a specific number of subset members was optimal. Figure 14 shows the average ensemble subset errors over all cyclones for different subset sizes for both the RMS and projection methods. These results utilized the 30% sensitivity threshold and are with regard to 2-m temperature (results were similar for other variables and levels). Note that the 80-member subset is equivalent to the full ensemble and is the same for both methods (2.25 hPa). Both methods are nearly the same for a subset size of 1, which actually produces larger errors than the full ensemble mean. Otherwise, both methods show very similar behavior—an optimal range of subset sizes exists at member counts that are less than half of the full ensemble—results that are significantly better than those using 1 or 80 members. This optimal range is across slightly smaller subset sizes for the RMS method (roughly 5–20 members) than for the projection method (roughly 10–30 members).

Fig. 14. Ensemble subset mean errors for the projection (green) and RMS (red) techniques for different subset sizes.

4. Summary and conclusions

An ensemble-based sensitivity technique was tested in this study on a large number of high-impact synoptic-scale events—landfalling midlatitude cyclones—to determine whether ensemble subsets could improve predictability over the full ensemble. Since ensemble sensitivity identifies features at early forecast times that are relevant to the predictability of high-impact events later in the forecast, it was hypothesized that choosing the ensemble members early in a forecast window with the smallest errors in sensitive regions should produce improved forecasts of the chosen metric later in time. Since a number of observations generally become available hourly, this technique may be able to improve the predictability of high-impact events prior to the next data assimilation and extended forecast cycle, particularly within computationally expensive systems run at fine scales that only produce extended forecasts perhaps once or twice daily. The chosen response function for this study was the 24-h forecast sea level pressure surrounding the center of the ensemble mean landfalling cyclones on the west coast of North America. Errors at 6-h forecast time of all ensemble members were calculated to choose a subset of the full ensemble with the smallest errors over sensitive regions. Both sensitivity-weighted errors (the projection method) as well as RMS errors in sensitive regions (the RMS method) were tested in choosing the subsets. The technique was applied independently to 500- and 850-hPa GPH, winds, and temperatures, as well as surface temperatures, winds, and SLP.

Ensemble subsets based on both the projection and RMS techniques were able to improve forecasts over the full ensemble when the nature run was chosen from one of the ensemble members. These successes show that to some degree, the four criteria needed for the method’s success are being met in the experiments performed here (sufficient ensemble spread, lack of excessive nonlinearity in the evolution of ensemble perturbations, lack of excessive model error, and the existence of observations in sensitive areas). For the projection method, the largest error reduction was realized when using the entire sensitivity field, while the RMS method produced optimal results when using the greatest 70% of the sensitivity magnitudes. The ensemble subset size was also shown to influence the success of the two approaches, with 5–30 members (out of 80) yielding the smallest errors. To introduce some degree of realism, observation errors were added to the truth simulation when applying the techniques, which did not introduce any degradation to the subsetting results and, in fact, improved them. Both the projection and RMS approaches show that the use of ensemble sensitivity, given the appropriate choice of ensemble subset size and the use of a sensitivity threshold, has a fundamental ability to reduce the forecast error associated with specific, high-impact aspects of the atmospheric state.

The success of the technique was most pronounced for the cases of the largest error within the full ensemble. While such errors are impossible to predict directly prior to a high-impact event occurring, it may be possible to estimate when large errors are most likely through spread–skill relationships since ensemble spread is available a priori. Applying the subsetting techniques to the cases of highest forecast spread indeed showed more success than when the full set of cyclones was used (but not as much success as with the cases of the highest error). Nonetheless, this suggests the technique will be most successful if applied to cases exhibiting the largest forecast spread.

Improvements were not found using the proposed techniques when the truth run was initialized by an independent model. While this lack of improvement may simply be a result of the independent run being too close to the ensemble mean (both systems assimilated the same data), it may indicate that either model error, an ensemble that is not perfectly calibrated, or some combination of both may substantially reduce the usefulness of the sensitivity-based methods examined here. More generally, it would be expected that results will be limited in cases where the truth lies well outside the ensemble envelope, and the statistics within the ensemble poorly characterize the truth. In turn, the proposed techniques are likely to be most effective when run within a well-calibrated ensemble in cases of little model error. Consequently, future work testing the proposed techniques should focus on whether the success found within the idealized experiments can be reproduced in more realistic situations where model error and imperfectly calibrated ensemble systems can be difficult to avoid.

It should be noted that while the univariate approach within a multivariate system used here is a relatively simple way to estimate the best ensemble members in an operational environment, multivariate regression approaches may improve the method. For example, when a chosen response is dynamically influenced by a number of different variables somewhat independently, multivariate methods might improve the estimated forecast response dependence on early forecast variables, allowing more skillful subsets to be chosen. Hacker and Lei (2015) show the benefits of such a multivariate technique within a simplified model, and provide insights into how the univariate sensitivity-based method here might be expanded to a multivariate framework, providing potential improvements.

The purpose of this study was to demonstrate whether the proposed sensitivity-based subsetting techniques have a fundamental ability to be successful; this ability has indeed been shown. A logical next step is to extend the proposed technique to additional high-impact events, and explore their success within more realistic scenarios. It would be particularly interesting to determine whether subsets chosen based on ensemble sensitivity can add forecast value for cases of severe convection when nonlinearity is significant and forecast response functions may possess very non-Gaussian distributions. In cases such as these, surface mesonet data may be the critical source of observations to the subsetting techniques since data aloft may be sparse relative to the sensitivity field. It should be noted, however, that representativeness error may reduce the value of the techniques tested here, and this issue should be vetted in any future studies that apply these methods. Nonetheless, given the successful application of ensemble sensitivity to convective events in Bednarczyk and Ancell (2015) and Hill et al. (2016, manuscript submitted to Mon. Wea. Rev.), a subsequent study is planned to apply the proposed subsetting technique to a number of severe convective cases that exhibit bimodal behavior—a particularly difficult forecasting problem that may benefit from its use.

Acknowledgments

This work was supported by the NOAA CSTAR program under Grant NA14NWS-4680017. The author wishes to thank the staff of the Texas Tech High Performance Computing Center for the maintenance and upkeep of the large amount of data used in this work. The author also thanks the three anonymous reviewers, who provided a large number of helpful comments and suggestions that led to substantial improvements of the manuscript.

REFERENCES

• Ancell, B. C., 2013: Nonlinear characteristics of ensemble perturbation evolution and their application to forecasting high-impact events. Wea. Forecasting, 28, 1353–1365, doi:10.1175/WAF-D-12-00090.1.

• Ancell, B. C., and G. J. Hakim, 2007: Comparing ensemble and adjoint sensitivity analysis with applications to observation targeting. Mon. Wea. Rev., 135, 4117–4134, doi:10.1175/2007MWR1904.1.

• Ancell, B. C., and L. A. McMurdie, 2013: Ensemble adaptive data assimilation techniques applied to land-falling North American cyclones. Data Assimilation for Atmospheric, Oceanic, and Hydrologic Applications, S. K. Park and L. Xu, Eds., Vol. 2, Springer, 555–575.

• Anderson, J. L., and S. L. Anderson, 1999: A Monte Carlo implementation of the nonlinear filtering problem to produce ensemble assimilations and forecasts. Mon. Wea. Rev., 127, 2741–2758, doi:10.1175/1520-0493(1999)127<2741:AMCIOT>2.0.CO;2.

• Baker, N., and R. Daley, 2000: Observation and background sensitivity in the adaptive observation-targeting problem. Quart. J. Roy. Meteor. Soc., 126, 1431–1454, doi:10.1002/qj.49712656511.

• Bednarczyk, C. N., and B. C. Ancell, 2015: Ensemble sensitivity analysis applied to a southern plains convective event. Mon. Wea. Rev., 143, 230–249, doi:10.1175/MWR-D-13-00321.1.

• Bergot, T., 1999: Adaptive observations during FASTEX: A systematic survey of upstream flights. Quart. J. Roy. Meteor. Soc., 125, 3271–3298, doi:10.1002/qj.49712556108.

• Bishop, C. H., B. J. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Mon. Wea. Rev., 129, 420–436, doi:10.1175/1520-0493(2001)129<0420:ASWTET>2.0.CO;2.

• Buizza, R., and A. Montani, 1999: Targeting observations using singular vectors. J. Atmos. Sci., 56, 2965–2985, doi:10.1175/1520-0469(1999)056<2965:TOUSV>2.0.CO;2.

• Cardinali, C., 2009: Monitoring the observation impact on the short range forecast. Quart. J. Roy. Meteor. Soc., 135, 239–250, doi:10.1002/qj.366.

• Chen, F., and J. Dudhia, 2001: Coupling an advanced land surface–hydrology model with the Penn State–NCAR MM5 modeling system. Part I: Model description and implementation. Mon. Wea. Rev., 129, 569–585, doi:10.1175/1520-0493(2001)129<0569:CAALSH>2.0.CO;2.

• Dong, L., and F. Zhang, 2016: OBEST: An observation-based ensemble subsetting technique for tropical cyclone track prediction. Wea. Forecasting, 31, 57–70, doi:10.1175/WAF-D-15-0056.1.

• Dudhia, J., 1989: Numerical study of convection observed during the Winter Monsoon Experiment using a mesoscale two-dimensional model. J. Atmos. Sci., 46, 3077–3107, doi:10.1175/1520-0469(1989)046<3077:NSOCOD>2.0.CO;2.

• Errico, R. M., 1997: What is an adjoint model? Bull. Amer. Meteor. Soc., 78, 2577–2591, doi:10.1175/1520-0477(1997)078<2577:WIAAM>2.0.CO;2.

• Etherton, B. J., 2007: Preemptive forecasts using an ensemble Kalman filter. Mon. Wea. Rev., 135, 3484–3495, doi:10.1175/MWR3480.1.

• Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99, 10 143–10 162, doi:10.1029/94JC00572.

• Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three dimensions. Quart. J. Roy. Meteor. Soc., 125, 723–757, doi:10.1002/qj.49712555417.

• Gelaro, R., and Y. Zhu, 2009: Examination of observation impacts derived from observing system experiments (OSEs) and adjoint models. Tellus, 61A, 179–193, doi:10.1111/j.1600-0870.2008.00388.x.

• Gelaro, R., R. H. Langland, G. D. Rohaly, and T. E. Rosmond, 1999: An assessment of the singular-vector approach to targeted observing using the FASTEX dataset. Quart. J. Roy. Meteor. Soc., 125, 3299–3327, doi:10.1002/qj.49712556109.

• Gelaro, R., R. H. Langland, S. Pellerin, and R. Todling, 2010: The THORPEX observation impact intercomparison experiment. Mon. Wea. Rev., 138, 4009–4025, doi:10.1175/2010MWR3393.1.

• Hacker, J. P., and L. Lei, 2015: Multivariate ensemble sensitivity with localization. Mon. Wea. Rev., 143, 2013–2027, doi:10.1175/MWR-D-14-00309.1.

• Hakim, G. J., and R. D. Torn, 2008: Ensemble synoptic analysis. Synoptic–Dynamic Meteorology and Weather Analysis and Forecasting: A Tribute to Fred Sanders, Meteor. Monogr., No. 55, Amer. Meteor. Soc., 147–161.

• Hanley, K. E., D. J. Kirshbaum, N. M. Roberts, and G. Leoncini, 2013: Sensitivities of a squall line over central Europe in a convective-scale ensemble. Mon. Wea. Rev., 141, 112–133, doi:10.1175/MWR-D-12-00013.1.

• Hill, A. J., C. C. Weiss, and B. C. Ancell, 2013: Utilizing ensemble sensitivity for data denial experiments on the 4 April 2012 Dallas, Texas dryline-initiated convective outbreak using West Texas Mesonet observations and WRF-DART data assimilation. Proc. 15th Conf. on Mesoscale Processes, Portland, OR, Amer. Meteor. Soc., 11. [Available online at https://ams.confex.com/ams/15MESO/webprogram/Paper227902.html.]

• Hong, S.-Y., J. Dudhia, and S.-H. Chen, 2004: A revised approach to ice microphysical processes for the bulk parameterization of clouds and precipitation. Mon. Wea. Rev., 132, 103–120, doi:10.1175/1520-0493(2004)132<0103:ARATIM>2.0.CO;2.

• Janjić, Z. I., 1990: The step-mountain coordinate: Physical package. Mon. Wea. Rev., 118, 1429–1443, doi:10.1175/1520-0493(1990)118<1429:TSMCPP>2.0.CO;2.

• Janjić, Z. I., 1996: The surface layer in the NCEP Eta Model. Preprints, 11th Conf. on Numerical Weather Prediction, Norfolk, VA, Amer. Meteor. Soc., 354–355.

• Janjić, Z. I., 2002: Nonsingular implementation of the Mellor–Yamada level 2.5 scheme in the NCEP Meso model. NCEP Office Note 437, 61 pp. [Available online at http://www.emc.ncep.noaa.gov/officenotes/newernotes/on437.pdf.]

• Kain, J. S., and J. M. Fritsch, 1990: A one-dimensional entraining/detraining plume model and its application in convective parameterization. J. Atmos. Sci., 47, 2784–2802, doi:10.1175/1520-0469(1990)047<2784:AODEPM>2.0.CO;2.

• Kain, J. S., and J. M. Fritsch, 1993: Convective parameterization for mesoscale models: The Kain–Fritsch scheme. The Representation of Cumulus Convection in Numerical Models, Meteor. Monogr., No. 46, Amer. Meteor. Soc., 165–170.

• Klinker, E., F. Rabier, and R. Gelaro, 1998: Estimation of key analysis errors using the adjoint technique. Quart. J. Roy. Meteor. Soc., 124, 1909–1933, doi:10.1002/qj.49712455007.

• Langland, R. H., and N. L. Baker, 2004: Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus, 56A, 189–201, doi:10.1111/j.1600-0870.2004.00056.x.

• Langland, R. H., and Coauthors, 1999: The North Pacific Experiment (NORPEX-98): Targeted observations for improved North American weather forecasts. Bull. Amer. Meteor. Soc., 80, 1363–1384, doi:10.1175/1520-0477(1999)080<1363:TNPENT>2.0.CO;2.

• Liu, H., and X. Zou, 2001: The impact of NORPEX targeted dropsondes on the analysis and 2–3-day forecasts of a landfalling Pacific winter storm using NCEP 3DVAR and 4DVAR systems. Mon. Wea. Rev., 129, 1987–2004, doi:10.1175/1520-0493(2001)129<1987:TIONTD>2.0.CO;2.

• Liu, J., and E. Kalnay, 2008: Estimating observation impact without adjoint model in an ensemble Kalman filter. Quart. J. Roy. Meteor. Soc., 134, 1327–1335, doi:10.1002/qj.280.

• Madaus, L. E., and G. J. Hakim, 2015: Rapid, short-term ensemble forecast adjustment through offline data assimilation. Quart. J. Roy. Meteor. Soc., 141, 2630–2642, doi:10.1002/qj.2549.

• Mass, C. F., and B. Dotson, 2010: Major extratropical cyclones of the northwest United States: Historical review, climatology, and synoptic environment. Mon. Wea. Rev., 138, 2499–2527, doi:10.1175/2010MWR3213.1.

• McMurdie, L. A., and B. C. Ancell, 2014: Predictability characteristics of landfalling cyclones along the North American west coast. Mon. Wea. Rev., 142, 301–319, doi:10.1175/MWR-D-13-00141.1.

• Mlawer, E. J., S. J. Taubman, P. D. Brown, M. J. Iacono, and S. A. Clough, 1997: Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res., 102, 16 663–16 682, doi:10.1029/97JD00237.
  • Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp., doi:10.5065/D68S4MVH.

  • Torn, R. D., 2010: Ensemble-based sensitivity analysis applied to African easterly waves. Wea. Forecasting, 25, 6178, doi:10.1175/2009WAF2222255.1.

    • Search Google Scholar
    • Export Citation
  • Torn, R. D., and Hakim G. J. , 2008a: Ensemble-based sensitivity analysis. Mon. Wea. Rev., 136, 663677, doi:10.1175/2007MWR2132.1.

  • Torn, R. D., and Hakim G. J. , 2008b: Performance characteristics of a pseudo-operational ensemble Kalman filter. Mon. Wea. Rev., 136, 39473963, doi:10.1175/2008MWR2443.1.

    • Search Google Scholar
    • Export Citation
  • Torn, R. D., and Hakim G. J. , 2009: Initial condition sensitivity of western Pacific extratropical transitions determined using ensemble-based sensitivity analysis. Mon. Wea. Rev., 137, 33883406, doi:10.1175/2009MWR2879.1.

    • Search Google Scholar
    • Export Citation
  • Torn, R. D., and Romine G. S. , 2015: Sensitivity of central Oklahoma convection forecasts to upstream potential vorticity anomalies during two strongly forced cases during MPEX. Mon. Wea. Rev., 143, 40644087, doi:10.1175/MWR-D-15-0085.1.

    • Search Google Scholar
    • Export Citation
  • Torn, R. D., Hakim G. J. , and Snyder C. , 2006: Boundary conditions for limited-area ensemble Kalman filters. Mon. Wea. Rev., 134, 24902502, doi:10.1175/MWR3187.1.

    • Search Google Scholar
    • Export Citation
  • Tremolet, Y., 2008: Computation of observation sensitivity and observation impact in incremental variational data assimilation. Tellus, 60A, 964978, doi:10.1111/j.1600-0870.2008.00349.x.

    • Search Google Scholar
    • Export Citation
  • Whitaker, J. S., and Loughe A. F. , 1998: The relationship between ensemble spread and ensemble mean skill. Mon. Wea. Rev., 126, 32923302, doi:10.1175/1520-0493(1998)126<3292:TRBESA>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Whitaker, J. S., and Hamill T. M. , 2002: Ensemble data assimilation without perturbed observations. Mon. Wea. Rev., 130, 19131924, doi:10.1175/1520-0493(2002)130<1913:EDAWPO>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Zack, J., Natenberg E. , Young S. , Manobianco J. , and Kamath C. , 2010a: Application of ensemble sensitivity analysis to observation targeting for short-term wind speed forecasting. Lawrence Livermore National Laboratory Tech. Rep. LLNL-TR-42442, 32 pp. [Available online at http://computation.llnl.gov/projects/starsapphire-data-driven-modeling-analysis/LLNL-TR-424442.pdf.]

  • Zack, J., Natenberg E. , Young S. , Knowe G. V. , Waight K. , Manobianco J. , and Kamath C. , 2010b: Application of ensemble sensitivity analysis to observation targeting for short-term wind speed forecasting in the Tehachapi region winter season. Lawrence Livermore National Laboratory Tech. Rep. LLNL-TR-460956, 57 pp. [Available online at http://computation.llnl.gov/projects/starsapphire-data-driven-modeling-analysis/LLNL-TR-460956.pdf.]

  • Zack, J., Natenberg E. , Young S. , Knowe G. V. , Waight K. , Manobianco J. , and Kamath C. , 2010c: Application of ensemble sensitivity analysis to observation targeting for short-term wind speed forecasting in the Washington-Oregon region. Lawrence Livermore National Laboratory Tech. Rep. LLNL-TR-458086, 65 pp. [Available online at http://computation.llnl.gov/projects/starsapphire-data-driven-modeling-analysis/LLNL-TR-458086.pdf.]

• Fig. 1. Illustration of the typical timing constraints of a system that employs 6-h assimilation cycling with 36-h extended forecasts.
• Fig. 2. The 0-, 12-, and 24-h forecasts of 500-hPa GPH (black contours; contour interval is 30 m), 500-hPa absolute vorticity (shaded), and 500-hPa winds (barbs) for the forecast initialized at 0000 UTC 19 Nov 2009.
• Fig. 3. The 0-, 12-, and 24-h forecasts of SLP (black contours; contour interval is 2 hPa), 925-hPa temperature (shaded), and 10-m winds (barbs) for the forecast initialized at 0000 UTC 19 Nov 2009.
• Fig. 4. Ensemble sensitivity of the 24-h cyclone central pressure with respect to 0-h SLP (shaded) and ensemble mean 0-h SLP (black contours; contour interval is 2 hPa) valid at 0000 UTC 19 Nov 2009.
• Fig. 5. Scatterplot of sensitivity-estimated 24-h response function error (x axis) against the actual 24-h response function error (y axis).
• Fig. 6. Ensemble sensitivity (shown in color) exceeding different sensitivity thresholds.
• Fig. 7. Absolute 24-h response function error for all cyclone cases for the full ensemble mean (red) and the ensemble subset (blue) for 500-hPa, 850-hPa, and 2-m temperature for both the RMS and projection techniques (an illustrative sketch of both ranking approaches follows this figure list).
• Fig. 8. Histograms depicting the number of occurrences of the differences between the means of the ensemble subsets and the full ensemble shown by the raw data in Fig. 7 for the RMS and projection techniques with regard to 500-hPa, 850-hPa, and 2-m temperature.
• Fig. 9. Absolute 24-h response function error for the cyclones with the largest ensemble mean error, as well as the largest ensemble spread, for the full ensemble mean (red) and the ensemble subset (blue) for 500-hPa, 850-hPa, and 2-m temperature for the projection technique.
• Fig. 10. Histograms depicting the number of occurrences of the differences between the means of the ensemble subsets and the full ensemble shown by the raw data in Fig. 9 for cases of high error and high spread for the projection technique with regard to 500-hPa, 850-hPa, and 2-m temperature.
• Fig. 11. As in Fig. 9, but for the RMS technique.
• Fig. 12. As in Fig. 10, but for the RMS technique.
• Fig. 13. Ensemble subset mean errors averaged over all cyclones using different sensitivity thresholds for both the RMS and projection techniques.
• Fig. 14. Ensemble subset mean errors for the projection (green) and RMS (red) techniques for different subset sizes.
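The quantities plotted in Figs. 4, 6, 7, 13, and 14 combine two ingredients: a point-wise ensemble sensitivity field and a ranking of members by their 6-h error within the most sensitive region. The minimal Python sketch below is illustrative only: it computes ensemble sensitivity as the ensemble regression cov(J, x)/var(x) (as in Torn and Hakim 2008a) and then selects a subset using either an RMS score or a projection score over the grid points whose sensitivity magnitude falls in the top keep_frac of the field. The array layout, function names, quantile-based threshold, and exact scoring formulas are assumptions made for this sketch, not the implementation used in the study.

```python
# Minimal, illustrative sketch (not the paper's code) of sensitivity-based
# ensemble subsetting, assuming simple NumPy arrays.
import numpy as np

def ensemble_sensitivity(x0, J):
    """Point-wise ensemble sensitivity dJ/dx = cov(J, x)/var(x).

    x0: (n_members, n_points) early-time state (e.g., 0-h SLP).
    J:  (n_members,) response function (e.g., 24-h cyclone central SLP).
    """
    xp = x0 - x0.mean(axis=0)             # state perturbations
    Jp = J - J.mean()                     # response perturbations
    cov = xp.T @ Jp / (x0.shape[0] - 1)   # cov(J, x) at each grid point
    var = xp.var(axis=0, ddof=1)          # ensemble variance at each point
    safe = np.where(var > 0.0, var, 1.0)
    return np.where(var > 0.0, cov / safe, 0.0)

def select_subset(sens, err6, n_subset=10, keep_frac=0.7, method="projection"):
    """Return indices of the n_subset members with the smallest 6-h error
    score inside the most sensitive region (largest keep_frac of |sens|)."""
    thresh = np.quantile(np.abs(sens), 1.0 - keep_frac)
    mask = np.abs(sens) >= thresh         # most sensitive grid points
    if method == "rms":
        # Root-mean-square 6-h error over the sensitive region.
        score = np.sqrt(np.mean(err6[:, mask] ** 2, axis=1))
    else:
        # Magnitude of the error projected onto the sensitivity field,
        # i.e., each member's implied error in the response function.
        score = np.abs(err6[:, mask] @ sens[mask])
    return np.argsort(score)[:n_subset]

# Example with synthetic data (80 members, 5000 grid points):
# x0 = np.random.randn(80, 5000); J = x0[:, 0] + 0.1 * np.random.randn(80)
# err6 = np.random.randn(80, 5000)
# members = select_subset(ensemble_sensitivity(x0, J), err6, n_subset=10)
```

In practice, err6 would come from verifying each member's 6-h forecast against newly arrived observations (or a gridded analysis) in the sensitive region, and the mean of the selected subset would then stand in for the full-ensemble mean 24-h forecast of the response.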
