1. Introduction
As observing systems have been rapidly enhanced with satellite observations and various field campaigns, it has become important to be able to assess the impact of each observation. For estimating the observation impact on numerical weather prediction (NWP), observing system experiments (OSEs), in which data assimilation and forecast experiments with and without the specified data are performed in parallel and compared, have been a widely used approach (e.g., Bouttier and Kelly 2001; English et al. 2004; Lord et al. 2004; Kelly et al. 2007). These data denial experiments have played a central role at operational NWP centers in evaluating the usefulness of additional data (e.g., Cardinali et al. 2003; Goldberg et al. 2003).
In 2008, The Observing System Research and Predictability Experiment (THORPEX) Pacific Asian Regional Campaign (T-PARC; Parsons et al. 2008) was conducted in the western North Pacific in cooperation with the Tropical Cyclone Structure 2008 (TCS08) field experiment (Elsberry and Harr 2008). During the experimental period, intensive observations of tropical cyclones (TCs) were performed, including dropsondes deployed by research reconnaissance flights [e.g., the U.S. Air Force WC-130J, the Naval Research Laboratory (NRL) P-3, and the Falcon 20 aircraft of the Deutsches Zentrum für Luft- und Raumfahrt (DLR)] and by the Astra jet of the Dropwindsonde Observations for Typhoon Surveillance near the Taiwan Region (DOTSTAR; Wu et al. 2005), as well as upper-air soundings by vessels and Meteorological Satellite-2 (MTSAT-2) rapid-scan atmospheric motion vectors produced by the Japan Meteorological Agency (JMA). These intensive observations around TCs, mainly over the ocean, are extremely valuable for improving TC forecasts and studying TC structures (e.g., Wu et al. 2007; Chou and Wu 2008). Previous studies performed data-denial experiments to quantify the impact of the special soundings during T-PARC and generally found improved typhoon track forecasts as a result of the T-PARC observations (Harnisch and Weissmann 2010; Chou et al. 2011; Weissmann et al. 2011).
The T-PARC project aimed to investigate the potential of the targeted observation strategy, in which sensitivity analysis methods suggest regions where additional observations would be most effective. During T-PARC operations, several sensitivity analysis products were monitored in real time when the reconnaissance flight tracks were planned (Komori et al. 2010; Reynolds et al. 2010), and some flight tracks were designed to better cover the indicated sensitive regions. Two classes of sensitivity analysis methods can be used for this purpose: one based on singular vectors (SVs; e.g., Palmer et al. 1998; Buizza and Montani 1999; Gelaro et al. 1999; Langland et al. 1999; Aberson 2003; Wu et al. 2005; Majumdar et al. 2006; Reynolds et al. 2007; Wu et al. 2009), and the other based on the ensemble spread (e.g., Ancell and Hakim 2007). This study does not focus on these “targeting” sensitivity methods, which identify sensitive areas for targeted observations in geographical space; instead, it focuses on “forecast sensitivity to observations” (i.e., methods for estimating the forecast impact of observations in observation space). Although the observation-impact estimates cannot identify targeted regions in real time, they are very useful for monitoring the actual impact of observations offline, as well as for scientific studies identifying the most effective observations for forecasting specific weather phenomena.
Langland and Baker (2004) developed the adjoint sensitivity method for estimating the impact of observations on a selected measure of the short-range forecast error without data-denial experiments. This method allows the observation impact to be partitioned for any subset of observations, such as by instrument type, observed variable, and location. They tested the method with the Navy Operational Global Atmospheric Prediction System (NOGAPS) and showed the significant advantage that observation impact could be estimated efficiently without the need to selectively add or remove data. In addition, they pointed out that the method could be used as a diagnostic tool to monitor the quality of observations, which could contribute to the design of better observing networks. With the adjoint-based method, Langland (2005) investigated the observational sensitivities of all regular satellite and in situ data, as well as the targeted dropsonde profiles provided by the THORPEX regional campaign field program in 2003. He found that the targeted dropsonde data had a high impact per observation, although satellite observations provided the largest total contribution during the experimental period. Zhu and Gelaro (2008) examined the adjoint sensitivity method with the National Centers for Environmental Prediction (NCEP) Global Data Assimilation System (GDAS), showing that it provided accurate estimates of the impact of wind, temperature, and moisture observations as well as satellite radiance observations, as in Baker and Daley (2000). More recently, Gelaro et al. (2010) compared the observation impact on short-range forecasts in different assimilation and forecast systems using the adjoint-based technique. The different systems showed good agreement on the global impact of the major observation types, although the regional details showed substantial differences. Interestingly, although Atmospheric Infrared Sounder (AIRS) radiances had a large overall positive impact, individual observations were almost as likely to have a negative as a positive impact.
Liu and Kalnay (2008) proposed an ensemble-based method for estimating the observation impact on short-range forecasts with an ensemble Kalman filter, and Li et al. (2010) corrected a small error in the derivation. The ensemble sensitivity method accomplishes the same goal as the adjoint-based method, but without using an adjoint model. Liu and Kalnay (2008) tested this approach using the Lorenz 40-variable model (Lorenz and Emanuel 1998) with synthetic observations and showed that the observation impact estimated from the ensemble sensitivity method was similar to that from the adjoint-based method. Ancell and Hakim (2007) performed ensemble sensitivity experiments with real observations, but using a different approach from the method of Liu and Kalnay (2008).
In this study, following Liu and Kalnay (2008), the ensemble-based method for estimating the impact of observations is implemented with the Weather Research and Forecasting (WRF) model (Skamarock et al. 2005) and the local ensemble transform Kalman filter (LETKF; Ott et al. 2004; Hunt et al. 2007) and is tested with real observations using the WRF-LETKF system (Miyoshi and Kunii 2012). An additional enhancement is developed to introduce a targeted area where the impact is evaluated. The impact of observations including dropsondes is investigated for the case of Typhoon Sinlaku (2008). The ensemble-based method with the additional enhancement introduced in this study is described in section 2. The experimental design is described in section 3, and the results are presented in section 4. Summary and conclusions are discussed in section 5.
2. Ensemble sensitivity method
a. Observation sensitivity
Following Liu and Kalnay (2008), with the correction derived by Li et al. (2010), the impact of the observations assimilated at time 0 is measured by the change of the forecast error at a verification time t:

\Delta J = \frac{1}{2}\left(\mathbf{e}_{t|0}^{\mathrm{T}}\mathbf{C}\,\mathbf{e}_{t|0} - \mathbf{e}_{t|-6}^{\mathrm{T}}\mathbf{C}\,\mathbf{e}_{t|-6}\right), \quad (1)

where \mathbf{e}_{t|0} = \overline{\mathbf{x}}_{t|0}^{f} - \overline{\mathbf{x}}_{t}^{a} is the error of the ensemble mean forecast started from the analysis at time 0, \mathbf{e}_{t|-6} = \overline{\mathbf{x}}_{t|-6}^{f} - \overline{\mathbf{x}}_{t}^{a} is the error of the forecast started from the analysis 6 h earlier, and \mathbf{C} defines the error norm (here a kinetic energy norm). Equation (1) can be rewritten as

\Delta J = \frac{1}{2}\left(\mathbf{e}_{t|0} - \mathbf{e}_{t|-6}\right)^{\mathrm{T}}\mathbf{C}\left(\mathbf{e}_{t|0} + \mathbf{e}_{t|-6}\right). \quad (2)

Since the two forecasts differ only in the assimilation of the observations \mathbf{y}^{o} at time 0,

\mathbf{e}_{t|0} - \mathbf{e}_{t|-6} = \mathbf{M}\left(\overline{\mathbf{x}}_{0}^{a} - \overline{\mathbf{x}}_{0|-6}^{f}\right) = \mathbf{M}\mathbf{K}\mathbf{v}_{0}, \quad (3)

where \mathbf{M} is the tangent linear forecast model, \mathbf{K} is the Kalman gain, and \mathbf{v}_{0} = \mathbf{y}^{o} - H\overline{\mathbf{x}}_{0|-6}^{f} is the innovation (observation minus first guess). In the ensemble Kalman filter, the Kalman gain is expressed with the analysis ensemble perturbations \mathbf{X}_{0}^{a} as

\mathbf{K} = \frac{1}{K-1}\mathbf{X}_{0}^{a}\left(\mathbf{H}\mathbf{X}_{0}^{a}\right)^{\mathrm{T}}\mathbf{R}^{-1}, \quad (4)

where K is the ensemble size, and the tangent linear propagation is approximated by the ensemble forecast perturbations, \mathbf{M}\mathbf{X}_{0}^{a} \approx \mathbf{X}_{t|0}^{f}. Substituting Eqs. (3) and (4) into Eq. (2) yields the ensemble-based estimate

\Delta J \approx \frac{1}{2(K-1)}\mathbf{v}_{0}^{\mathrm{T}}\mathbf{R}^{-1}\mathbf{H}\mathbf{X}_{0}^{a}\left(\mathbf{X}_{t|0}^{f}\right)^{\mathrm{T}}\mathbf{C}\left(\mathbf{e}_{t|0} + \mathbf{e}_{t|-6}\right). \quad (5)

Since \mathbf{R} is (block) diagonal, Eq. (5) can be partitioned into the contribution of each observation or any subset of observations, such as by instrument type, observed variable, and location. In practice, the covariance localization of the LETKF is applied when evaluating Eq. (5), so that each observation influences only the grid points within its localization radius.
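The ensemble-based impact estimate of Liu and Kalnay (2008) reduces to a few lines of linear algebra. The following is a minimal illustrative sketch, not the WRF-LETKF implementation; all array names and shapes are assumptions, and the observation error covariance R and the norm operator C are taken to be diagonal for simplicity.

```python
import numpy as np

def ensemble_obs_impact(Xf_t0, HXa0, R_diag, v0, e_t0, e_t6, C_diag):
    """Sketch of the Liu and Kalnay (2008) ensemble observation impact,
    with the Li et al. (2010) correction.  Shapes (illustrative):
      Xf_t0  : (n, K) forecast ensemble perturbations valid at time t
      HXa0   : (p, K) analysis ensemble perturbations in observation space
      R_diag : (p,)   observation error variances (diagonal R assumed)
      v0     : (p,)   innovations (observation minus first guess)
      e_t0   : (n,)   error of the forecast started from the analysis
      e_t6   : (n,)   error of the forecast started 6 h earlier
      C_diag : (n,)   diagonal weights of the (kinetic) energy norm
    Returns a length-p array partitioning Delta-J among the observations;
    a negative value corresponds to forecast error reduction."""
    K = Xf_t0.shape[1]                      # ensemble size
    # project the norm-weighted error sum onto the forecast perturbations
    g = Xf_t0.T @ (C_diag * (e_t0 + e_t6))  # shape (K,)
    # per-observation contribution through the ensemble-estimated Kalman gain
    return 0.5 / (K - 1) * (v0 / R_diag) * (HXa0 @ g)
```

Summing the returned array over any subset of observations (an instrument type, a single dropsonde profile) gives that subset's estimated impact, which is the partitioning property exploited throughout section 4.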
b. Additional enhancement: Targeted verifications
To evaluate the observation impact on the forecast of a specific phenomenon such as a TC, the norm operator \mathbf{C} in Eq. (1) is restricted to a targeted area. Namely, the kinetic energy (KE) norm is defined as

\mathbf{e}^{\mathrm{T}}\mathbf{C}\,\mathbf{e} = \frac{1}{2}\sum_{(i,j,k)\in G}\left(u'^{2} + v'^{2}\right), \quad (6)

where G denotes the set of grid points inside the targeted region and below a chosen top level, and u' and v' are the horizontal wind components of the forecast error. Grid points outside the targeted region are given zero weight, so the computation also becomes faster as the target becomes smaller.
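The targeted norm used later for Sinlaku (a 10° by 10° box around the storm, from the lowest model level up to about 150 hPa) amounts to zeroing the norm weights outside the target. The sketch below shows one simple way such a mask could be built; the grid arrays and function names are hypothetical, not from the WRF-LETKF code.

```python
import numpy as np

def targeted_norm_mask(lat2d, lon2d, p_levels, center_lat, center_lon,
                       half_width_deg=5.0, top_hpa=150.0):
    """Build 0/1 weights for a targeted kinetic energy norm.
    lat2d, lon2d : (ny, nx) grid latitudes/longitudes (deg)
    p_levels     : (nz,)    nominal pressure of each model level (hPa)
    Returns a (nz, ny, nx) mask: 1 inside the box and below top_hpa."""
    in_box = ((np.abs(lat2d - center_lat) <= half_width_deg) &
              (np.abs(lon2d - center_lon) <= half_width_deg))   # (ny, nx)
    below_top = p_levels >= top_hpa                              # (nz,)
    return below_top[:, None, None] * in_box[None, :, :].astype(float)

def targeted_ke(u_err, v_err, mask):
    """Kinetic energy norm of the forecast error restricted to the target."""
    return 0.5 * np.sum(mask * (u_err**2 + v_err**2))
```

The same mask, applied as the diagonal of C, localizes the impact estimates of Eq. (5) around the storm.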
3. Experimental design
In this study, the WRF-LETKF system of Miyoshi and Kunii (2012) is employed with the same parameters except for increasing the ensemble size from 27 to 40. Namely, the Advanced Research WRF (ARW) model version 3.2 is set up in the northwestern Pacific region bounded by the equator, 55°N, 100°E, and 180° on a Mercator projection with 60-km grid spacing (137 × 109 horizontal grid points) and 40 vertical levels up to 10 hPa. As for the physical processes, the Kain–Fritsch (Kain and Fritsch 1993) convective parameterization scheme, the WRF Single-Moment 3-class Microphysics scheme (WSM3; Hong et al. 2004), the Yonsei University (YSU) boundary layer scheme (Hong et al. 2006), the Rapid Radiative Transfer Model (RRTM) longwave radiation scheme, and the Dudhia shortwave radiation scheme are employed. The LETKF analyzes all prognostic variables: three-dimensional wind components (u, υ, w), temperature (T), pressure (p), geopotential height (gh), humidity (qv), and water/ice microphysics variables. The localization parameters of the LETKF are chosen to be 400 km in the horizontal, 0.4 lnp in the vertical, and 3 h in time; the equivalent radii of influence are about 1460 km and 1.46 lnp. For covariance inflation, the adaptive inflation scheme (Miyoshi 2011), which estimates the inflation factors adaptively at each grid point, is employed. Miyoshi and Kunii (2012) showed that adaptive inflation outperformed fixed multiplicative inflation with WRF-LETKF using real observations. The data assimilation cycle interval is set to 6 h, in which observations separated into hourly bins are assimilated using the four-dimensional (4D) LETKF (Hunt et al. 2004). Miyoshi and Kunii (2012) obtained reasonable results with these parameters in the case of Sinlaku (2008), and sensitivities to the parameter choices are beyond the scope of this study.
The lateral and lower boundary conditions are supplied from the NCEP GDAS analysis fields, also known as the NCEP final analysis (FNL). Although it would be desirable to use perturbed boundary conditions in the LETKF cycle (e.g., Saito et al. 2011), Miyoshi and Kunii (2012) showed that the lateral boundary perturbations did not have a significant impact in the inner domain in the Typhoon Sinlaku (2008) case. As for the observation data, the real observations used in the NCEP GDAS are assimilated, including upper-air sounding data from radiosondes and dropsondes (ADPUPA), surface stations (ADPSFC), ships and buoys (SFCSHP), aircraft (AIRCFT), wind profilers (PROFLR), velocity azimuth display winds (VADWND), and satellite-based winds (SATWND, SPSSMI, QKSWND). These observation data in the PREPBUFR format (Keyser 2010) are obtained from the University Corporation for Atmospheric Research (UCAR) data server. Satellite radiances are not used in this study.
The ensemble-based sensitivity method was applied to the case of Typhoon Sinlaku in September 2008. Sinlaku formed at 1800 UTC 8 September off the east coast of the Philippines and moved northward with steady intensification. Its minimum central pressure reached 935 hPa at 1200 UTC 10 September after a relatively rapid intensification, with the minimum pressure dropping by 40 hPa in the previous 24 h. Sinlaku made landfall on Taiwan at around 1900 UTC 13 September with a central pressure of 960 hPa. After passing over Taiwan, it recurved east of China and passed off the southern coast of Japan after a slight reintensification. The best track of the Regional Specialized Meteorological Center (RSMC) Tokyo Typhoon Center for the typhoon is shown in Fig. 1.
Fig. 1. The best track of Typhoon Sinlaku from 0000 UTC 8 Sep (0800) to 0000 UTC 20 Sep (2000) 2008 (solid line) and positions of dropsondes deployed from the WC-130J (closed circle), NRL P-3 (closed triangle), DLR Falcon (open triangle), and DOTSTAR (open square) during the intensification and mature stages of Typhoon Sinlaku.
Citation: Monthly Weather Review 140, 6; 10.1175/MWR-D-11-00205.1
The WRF-LETKF data assimilation cycle is initiated at 1200 UTC 3 September 2008 with ensemble initial conditions from randomly chosen dates during August 2007 and 2008. The ensemble sensitivity calculations were performed during the intensification and mature stages of Sinlaku from 1200 UTC 8 September to 0000 UTC 13 September 2008. First, the observation impact was computed based on the 12-h forecast error reduction measured by the KE norm; that is, the cost function [Eq. (1)] was defined as the forecast error difference between a 12-h forecast started at time 0 and an 18-h forecast started at time −6. Additionally, experiments with a targeted norm were conducted to examine its effect on the ensemble sensitivity calculations.
To validate the observation sensitivity impact estimates, data-denial experiments were also carried out, in which the influence of the T-PARC special soundings on the subsequent model forecast was investigated. A 36-h control forecast (CTRL) from the LETKF analysis with all observations was performed using the nested WRF-ARW with a horizontal resolution of 12 km. Two additional experiments were performed, in which either the negative-impact or the positive-impact observations were excluded based on the 12-h observational sensitivity (DN12 and DP12, respectively), to validate the original observation sensitivity estimates. Moreover, in order to investigate the sensitivity to the verification forecast length, an additional experiment was performed by denying negative-impact observations based on the 6-h observational sensitivity (DN06). Note that the 12-h observational sensitivity is evaluated based on the 12-h forecast errors e12|0 and e12|−6, and similarly, the 6-h observational sensitivity is based on the 6-h forecast errors e6|0 and e6|−6.
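The logic for choosing which observations to withhold in the DN- and DP-type experiments can be sketched as follows. This is a hypothetical helper illustrating the selection step only, using the paper's convention that a positive impact value means forecast error reduction; the actual denial sets were chosen from the estimated dropsonde sensitivities.

```python
def choose_denial_sets(impacts, n_deny=5):
    """Given per-dropsonde impact estimates (positive = beneficial,
    negative = detrimental), return the indices to withhold in a
    DN-type experiment (most detrimental observations) and a
    DP-type experiment (most beneficial observations)."""
    order = sorted(range(len(impacts)), key=lambda i: impacts[i])
    dn = order[:n_deny]       # most negative impacts: candidates for DN
    dp = order[-n_deny:]      # most positive impacts: candidates for DP
    return dn, dp
```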
4. Results
a. Observation impact for the entire domain
First, the observation impact was computed for the entire domain. The observation impact of each instrument type from 1200 UTC 8 September to 0000 UTC 13 September 2008 is shown in Fig. 2. Overall, all types of observations consistently contribute to reducing forecast errors. ADPUPA has the largest total contribution, followed by SATWND, ADPSFC, and QKSWND. The impact of ADPUPA accounts for as much as 63% of the total forecast error reduction, although ADPUPA comprises only about 37% of all observations. This indicates that in situ observations such as ADPUPA and ADPSFC are major contributors. Figure 2 also shows the impact per observation (i.e., normalized by the number of observations) for each instrument type. AIRCFT shows the largest impact per observation among the instrument types, and ADPUPA and SFCSHP also show relatively large per-observation contributions to the forecast error reduction. These results indicate, not surprisingly, that isolated in situ observations tend to have a larger influence on the forecast than densely distributed data such as satellite observations.
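The bookkeeping behind such a per-instrument summary (total impact, observation count, and impact per observation for each report type) is straightforward to sketch; the function and variable names below are illustrative, not from the WRF-LETKF code.

```python
from collections import defaultdict

def summarize_by_instrument(report_types, impacts):
    """Aggregate per-observation impact estimates by PREPBUFR report type
    (ADPUPA, SATWND, ...).  Inputs are parallel sequences of type strings
    and impact values.  Returns {type: (total, count, per_obs)}."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for rtype, dj in zip(report_types, impacts):
        totals[rtype] += dj
        counts[rtype] += 1
    return {t: (totals[t], counts[t], totals[t] / counts[t]) for t in totals}
```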
Fig. 2. Observation impacts (black bars, kinetic energy, kJ kg−1) and the number of observations (gray bars) of each instrument type summed from 1200 UTC 8 Sep to 0000 UTC 13 Sep 2008. White bars indicate the impact per observation.
Figure 3 shows the geographical distributions of the observation impacts of ADPUPA, SATWND, and SPSSMI, along with the horizontal map of 6-h accumulated precipitation and mean sea level pressure at 0000 UTC 11 September 2008. The impact from individual ADPUPA profiles is larger than from other observations, as indicated by the 10-times-larger color scale of the figure. In particular, the largest positive impact is found southeast of Taiwan, where Sinlaku was located in its mature stage and where the dropsondes were deployed from the DOTSTAR reconnaissance flight; the special soundings around Sinlaku significantly contributed to reducing the forecast errors. In contrast, rawinsondes over China show mixed impacts, probably because they are near the western boundary of the model domain and are generally denser than in other regions. It is noted that the evaluated observation impacts correspond well to the general pattern of disturbances, such as Typhoon Sinlaku near Taiwan and the strong precipitation area off the eastern shore of Japan (Fig. 3d). This is reasonable, since the forecasts would differ most in the presence of disturbances.
Fig. 3. Horizontal distributions of observation impacts (kinetic energy, J kg−1) of (a) ADPUPA, (b) SATWND, and (c) SPSSMI along with (d) the horizontal map of 6-h accumulated precipitation (mm) and mean sea level pressure (hPa) at 0000 UTC 11 Sep 2008. Note that the scale for ADPUPA impact is 10 times larger than for satellite observations.
Figure 3 shows almost as many observations degrading the forecasts as improving them. This is consistent with Gelaro et al. (2010), who showed that only slightly more than half of the observations reduced the forecast error, while the rest increased it. This may appear to contradict the expectation that a large majority of observations should contribute to improving the analysis and subsequent forecasts. However, if the observation errors have a Gaussian probability distribution, a few errors can be arbitrarily large, and observations with such occasional large errors would degrade the forecasts. In addition, observation errors could interact with the dynamical instabilities of the chaotic system. Although these considerations may partially explain why so many observations show negative impacts, this issue is not yet well understood, and this and other studies on forecast sensitivity do not yet provide a sufficient understanding of why this happens and how to avoid the negative impacts. Yet, as shown in Fig. 2, on average all types of observations actually improve the forecasts. Further investigation remains an important subject of future research.
b. Results with a targeted norm
To evaluate the observation impact on the forecast of Sinlaku, a targeted norm was defined by the kinetic energy norm [Eq. (6)] between the lowest model level and a level near 150 hPa over a square area (10° by 10°) around Sinlaku (Fig. 4b). With the targeted norm, the impacts of observations are localized around the targeted region shown by the small square shaded in yellow (Fig. 4b), in contrast to the case without the targeted norm (Fig. 4a). The observation localization plays an essential role in eliminating spurious impact patterns at distant locations, corresponding to the 1460-km radius of influence from the edges of the targeted region, approximately shown by the dashed circle. Although the impacts of observations far from the targeted area become weaker, there is a clear correspondence between Figs. 4a and 4b near the targeted domain. Interestingly, with the targeted norm, the large observation impacts appear mainly on the western (i.e., upstream) side of the targeted region. This agrees with the common understanding that observations have a significant influence on their immediate downstream neighborhood in a short-range forecast. Since the targeted norm also speeds up the computation, it is used in the following results whenever focusing on Sinlaku.
Fig. 4. Observation impacts (kinetic energy, J kg−1) of all instrument types (a) without the targeted norm and (b) with the targeted norm over the square area (10° by 10°) around Sinlaku (dashed box with yellow shade). The influence area is approximately depicted with a dashed circle.
c. Observation impact of T-PARC dropsondes
During the intensification and mature stages of Sinlaku, specifically from 0000 UTC 9 September to 0000 UTC 13 September 2008, a relatively large number of dropsondes were deployed from the WC-130J, NRL P-3, DLR Falcon, and DOTSTAR reconnaissance flights in the vicinity of Sinlaku. The locations of these dropsondes are shown in Fig. 1 along with the best track of Sinlaku. The WC-130J penetrated Sinlaku near the 700-hPa level and released dropsondes at relatively low levels. The DOTSTAR and DLR Falcon flights sampled the environment around the TC, following flight strategies designed to cover the sensitive regions estimated by the multiple sensitivity analysis products available at that time.
Recent studies (e.g., Harnisch and Weissmann 2010; Chou et al. 2011; Weissmann et al. 2011) showed, using data-denial experiments, that dropsondes in the vicinity of TCs had a positive impact on their track forecasts. Motivated by the general interest in these special soundings for TCs, we investigated their impacts on the TC forecast using the ensemble-based sensitivity method. Figure 5 shows the impacts of the dropsondes deployed from the WC-130J, the NRL P-3, and the DLR Falcon during the intensification and mature stages. Overall, most dropsondes show a positive impact, contributing to reducing the forecast errors.
Fig. 5. Observation impacts on 12-h forecast error change (kinetic energy, J kg−1) near the TC center of dropsondes deployed from (a) the WC-130J, (b) the NRL P-3, and (c) the DLR Falcon. These data were assimilated at (a) 0600 UTC 10 Sep, (b) 0000 UTC 11 Sep, and (c) 0600 UTC 11 Sep 2008. Lower-level dropsondes (lower than 600 hPa) are denoted by squares, and missing observations are indicated by closed triangles. Red numbers index the dropsondes.
The dropsondes from the NRL P-3 in the eastern vicinity of Sinlaku show a strong positive impact (Fig. 5b). By contrast, Fig. 5a shows negative impacts of two dropsondes (numbers 18 and 19) from the WC-130J at similar locations in the northeastern quadrant, and a similar feature is observed in the result for the DLR Falcon shown in Fig. 5c. These inconsistencies may be due to differences between the first guess and the true fields, and probably to the different stages of the TC life cycle, since the NRL P-3 and the DLR Falcon flew about 18 and 24 h later than the WC-130J, respectively. Figure 6 depicts the minimum sea level pressure (MSLP) of Sinlaku in the WRF-LETKF analysis along with the best-track data. Although the MSLP of the LETKF analysis is weaker than that of the best-track data because of the insufficient model resolution, the tendency of the MSLP is well captured by the WRF-LETKF with a relatively small time lag. At 0600 UTC 10 September, when the dropsondes were deployed from the WC-130J, the simulated TC was in the developing stage; 18 h later, at 0000 UTC 11 September, when the dropsondes were released from the NRL P-3, Sinlaku was already in its mature stage.
Fig. 6. MSLP of Sinlaku in the WRF-LETKF analysis and the best-track data.
As addressed by Aberson (2008) and Weissmann et al. (2011), assimilating dropsonde data near the TC inner core may lead to forecast degradation, possibly because of the model's inability to resolve the inner-core structure. However, Fig. 5a shows positive-impact dropsonde data in the inner core (the fourteenth dropsonde). This dropsonde contains only a temperature profile, since the NCEP PREPBUFR data do not include other components such as winds, probably because of quality control in the NCEP GDAS. This suggests that temperature observations in the TC inner-core region could lead to forecast improvement, even though wind and pressure observations there may degrade the forecast.
As for the DOTSTAR dropsondes around Sinlaku (Fig. 7a), the southern four observations (numbers 6, 7, 8, and 12) show a significant negative impact. The DOTSTAR flight path was designed to provide better coverage south of the TC, following the guidance from the sensitivity analysis products for targeted observations in geographical space (not in observation space) available at that time, which suggested sensitive areas in the south (Fig. 8). Both sensitivity products of JMA and NRL, based on the SV method, indicate high sensitivity to the east as well as to the south of Sinlaku. This would suggest that the special soundings over the sensitive areas should have a positive impact on the short-range forecast, so one might suspect that those negative-impact estimates are erroneous. However, when we replaced the first guess with the 6-h forecast from the NCEP FNL, while keeping the ensemble perturbations from the WRF-LETKF, these southern four observations turned out to have a positive impact (Fig. 7c). This may raise doubts about the robustness of the method, but by definition the impact estimates do depend on the first guess, the observation error settings, and the choice of data assimilation system. For example, replacing the first guess changes not only v0 (observation minus first guess) and et|−6 (forecast minus analysis at time t) in Eq. (5), but also the analysis and hence the forecast error et|0.
Fig. 7. As in Fig. 5, but for the dropsondes deployed from the DOTSTAR and assimilated at 0000 UTC 11 Sep 2008. These impacts were evaluated based on (a) 12-h forecast error change, (b) 6-h forecast error change, and (c) 12-h forecast error change with the NCEP global analysis.
Fig. 8. Sensitivity analysis products used in T-PARC operations from (a) JMA and (b) NRL, valid at 0000 UTC 11 Sep 2008 (Komori et al. 2010, courtesy of JMA and NRL). Both sensitivity products are based on the SV method using the total energy norm.
To investigate whether the negative- and positive-impact estimates actually correspond to degradation and improvement of the subsequent model forecasts, the data-denial experiments (DN12, DP12, and DN06) were performed for the dropsondes deployed from the DOTSTAR around 0000 UTC 11 September 2008, in addition to the CTRL experiment that uses all observations. In the DN12 and DP12 experiments, each forecast was initialized from the LETKF analysis fields computed without the sixth, seventh, eighth, twelfth, and fifteenth DOTSTAR dropsondes (Fig. 9b) or without the third, fourth, tenth, sixteenth, and seventeenth DOTSTAR dropsondes (Fig. 9c), respectively, based on the 12-h forecast error reduction valid at 1200 UTC 11 September (Fig. 7a). Since DN12 denied the five negative-impact observations, the five observations with the largest positive impacts were excluded in DP12. In the DN06 experiment, the seventh, eighth, twelfth, and fifteenth observations (Fig. 9a) were denied according to the estimated sensitivity of the DOTSTAR dropsondes based on the 6-h forecast error reduction valid at 0600 UTC 11 September (Fig. 7b). The DOTSTAR impact estimates based on the 12- and 6-h forecast error reductions (Figs. 7a,b) are very similar (i.e., negative impacts tend to be located south of the TC). The most notable difference is found in the sixth dropsonde, whose impact changed sign between the 12- and 6-h forecast sensitivities.

Denied DOTSTAR dropsonde observations (×) in the (a) DN06, (b) DN12, and (c) DP12 experiments, and simulated typhoon track initialized from the analysis fields in the (d) DN06, (e) DN12, and (f) DP12 experiments along with the track in the CTRL experiment and the best-track data. The closed triangle of the thirteenth dropsonde indicates missing data.
The 36-h typhoon track forecasts initialized from the analysis fields computed by denying the selected data are shown in Figs. 9d–f. DN06 shows an improved 6-h TC position forecast as expected, leading to improved track forecasts at later forecast hours (Fig. 9d). Similarly, DN12 shows an improved 12-h TC position forecast, which leads to consistent improvements throughout the 36-h period (Fig. 9e). When the five positive-impact observations were denied (DP12), the 12-h TC position forecast was slightly degraded as expected, leading to an overall degradation of the track forecast (Fig. 9f). To investigate the differences caused by these data denials, Fig. 10 shows the differences of the analysis fields between CTRL and each experiment at 0000 UTC 11 September. The wind direction differs by almost 90° between DN06 and DN12 around the location of the sixth DOTSTAR dropsonde (indicated by the red circles; denied in DN12, used in DN06). The DN12 experiment shows a larger northeastward flow component than CTRL, whereas a southeastward component is found in the DN06 experiment. The northeastward component in the DN12 experiment appears to shift the TC track to the northeast, which may contribute to the larger reduction of the track error relative to DN06. The geopotential height increments are similar, although the height is slightly lower around the TC center in DN12. In the DP12 experiment, distinct positive and negative geopotential increments are analyzed south and north of the storm, respectively, enhancing the northward movement of the TC. Although the track differences become more pronounced after the 18-h forecast, the causes are not straightforward to diagnose; they may involve model errors and other complicating factors at longer leads, and further investigation of the behavior beyond 18 h is beyond the scope of this study.

Difference of the analyzed geopotential height (shade, m) and wind (vector) fields at the 500-hPa level at 0000 UTC 11 Sep 2008 between CTRL and (a) DN06, (b) DN12, and (c) DP12. Contours indicate geopotential height in each experiment and the red circles are the location of the sixth DOTSTAR dropsondes assimilated in DN06, but removed in DN12.
With the data-denial experiments, the actual forecast error change can be evaluated quantitatively by computing the targeted KE norm of the difference from the CTRL experiment, and compared with the change estimated by the ensemble sensitivity method to verify the estimates. Figure 11 shows that DN12 provides the largest error reduction of the 12-h forecast, DN06 the largest error reduction of the 6-h forecast, and DP12 a significant error increase of the 12-h forecast. These results validate the ensemble sensitivity method. Figure 11 also shows that the ensemble sensitivity method underestimates the error change in all three cases. This is similar to the idealized cases of Liu and Kalnay (2008) and Li et al. (2010), and may be caused by localization and by sampling error due to the limited ensemble size. Generally, stronger localization corresponds to greater underestimation, and vice versa. However, localization is an integral part of the LETKF algorithm, and the choice of localization parameters is tightly connected to the analysis accuracy. The relationship between the localization parameters and the observation-impact estimates is therefore quite complex; a careful investigation of the sensitivity to localization parameters is important but beyond the scope of this study.
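The verification metric above, the kinetic energy norm of the difference between a denial experiment and CTRL over a targeted area, can be sketched as follows; the function and field names are illustrative, not from the paper's code:

```python
import numpy as np

def ke_norm_difference(u_exp, v_exp, u_ctrl, v_ctrl, mask=None):
    """Kinetic energy norm (J kg^-1) of the difference between an experiment
    and the CTRL forecast, averaged over an optional targeted area mask."""
    du = u_exp - u_ctrl
    dv = v_exp - v_ctrl
    ke = 0.5 * (du**2 + dv**2)   # per-gridpoint KE of the wind difference
    if mask is not None:
        ke = ke[mask]            # restrict to the targeted verification area
    return float(ke.mean())

# Synthetic example: a uniform (2, -1) m/s wind difference gives
# 0.5 * (2^2 + 1^2) = 2.5 J kg^-1 at every point.
ny, nx = 4, 5
u_ctrl = np.zeros((ny, nx)); v_ctrl = np.zeros((ny, nx))
print(ke_norm_difference(u_ctrl + 2.0, v_ctrl - 1.0, u_ctrl, v_ctrl))  # 2.5
```

A negative change of this norm relative to the CTRL forecast error (as in DN06 and DN12) indicates that denying the selected observations reduced the targeted forecast error.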

The 6- and 12-h forecast error changes (kinetic energy, J kg−1) by denying selected observations at 1200 UTC 11 Sep 2008, and the estimated impact of denied observations from the ensemble sensitivity method. Negative values indicate the forecast error reduction. The estimated impact in DN06 is based on the 6-h forecast error reduction, and the impacts in DN12 and DP12 are based on the 12-h forecast error reduction.
5. Summary and discussion
This study applied the ensemble sensitivity method (Liu and Kalnay 2008) to estimate the impact of observations on short-range forecasts with the WRF-LETKF system. This method has the same goals as the adjoint-based method (Langland and Baker 2004), but does not require an adjoint model. The ensemble sensitivity method was applied to the case of Typhoon Sinlaku (2008), with the additional enhancement of a targeted area, which is essential for high-dimensional real applications that address questions such as "which observations resulted in the failure of a 3-day forecast over this area?" The results showed that all types of observations had an overall positive impact on the subsequent model forecasts over the intensification and mature stages of Sinlaku, and that the upper soundings (ADPUPA) made the largest contribution. The estimated observation impacts were validated quantitatively by performing observing system experiments in which observations with positive or negative impacts were selectively excluded from the LETKF data assimilation. The results indicated that the ensemble sensitivity method captured the actual error reduction, although with significant underestimation, probably due to localization and sampling error.
The ensemble-based method is a useful and efficient tool for estimating observation impacts on forecast fields. However, it should be noted that observations with an estimated negative impact in a particular experiment do not always degrade NWP, because the estimate depends on the first-guess fields, the model dynamics, the data assimilation system, and the norm chosen to measure forecast error reduction. This is because the observation sensitivity is a function of the innovation vector, the Kalman gain matrix, and the forecast ensemble perturbations [Eq. (4)]. Indeed, Figs. 7a,c showed different observation-impact estimates resulting from the use of different first-guess fields.
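The dependence on innovations, gain, and forecast perturbations can be made concrete with a minimal sketch of the ensemble impact estimate following Liu and Kalnay (2008) and Li et al. (2010). All arrays below are synthetic stand-ins; only the shapes and the matrix products are meaningful, and the variable names are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
K, p, n = 20, 5, 50                  # ensemble size, number of obs, state dimension

# Synthetic stand-ins for the quantities entering the estimate:
dyb   = rng.normal(size=p)           # innovation vector, y_o - H(x^b)
Rinv  = np.eye(p)                    # inverse observation-error covariance
Ya    = rng.normal(size=(p, K))      # analysis ensemble perturbations in obs space
Xf    = rng.normal(size=(n, K))      # forecast ensemble perturbations valid at t
e_t0  = rng.normal(size=n)           # error of the forecast started from the analysis
e_tm6 = rng.normal(size=n)           # error of the forecast started from the background
C     = np.eye(n)                    # error-norm operator (e.g., targeted KE weights)

# Per-observation contribution to the forecast error change; the Kalman gain is
# represented through the ensemble perturbations Ya and Xf.
impact_per_ob = dyb * (Rinv @ Ya @ Xf.T @ C @ (e_t0 + e_tm6)) / (K - 1)

# Total estimated impact: negative means the observations reduced the
# targeted forecast error.
total_impact = impact_per_ob.sum()
```

Because the estimate multiplies the innovations by quantities derived from a particular background and forecast ensemble, the same observation can receive a different (even opposite-signed) impact under a different first-guess field or norm, which is the caveat noted above.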
In this study, the impact of the T-PARC dropsonde observations around Sinlaku was estimated for a single case at a relatively coarse 60-km resolution. It would be interesting to investigate more TC cases to identify general tendencies in which observations (in terms of horizontal and vertical location and variable) are most important for improving TC forecasts at each stage of the TC life cycle. The estimated impacts would presumably depend on the location relative to the TC center and its asymmetries, as well as on the TC life cycle evolution. Such results may suggest better reconnaissance flight strategies, such as general flight patterns and levels chosen on a non-real-time, statistical basis, which would complement real-time adaptive observation strategies. For TC forecasts, the model's resolving capability would be crucial, particularly for observations within the TC inner core; estimating the impact of those inner-core observations with a higher-resolution model would be an important subject of future research.
Another important area of interest to apply the ensemble sensitivity method is the assessment of satellite observations. Recent studies have suggested that satellite radiance and GPS radio occultation data have a significant impact on NWP (Zhu and Gelaro 2008; Cardinali 2009; Gelaro and Zhu 2009). Although these satellite observations have not been used in the WRF-LETKF system thus far, it would be very interesting to apply the ensemble sensitivity method to investigate the impact of such satellite data.
Acknowledgments
The authors thank the members of the UMD Weather-Chaos Group, Hong Li of Shanghai Typhoon Institute, and Peter Black and Craig Bishop of Naval Research Laboratory (NRL) for fruitful discussions. The authors also thank Carolyn Reynolds of NRL and Takuya Komori of JMA for providing the figures of sensitivity analysis. The NCEP PREPBUFR observation data were obtained from the UCAR data server, while several missing files were kindly provided by Daryl Kleist of NCEP. The authors are grateful to the reviewers for their valuable comments, which significantly improved the manuscript. This study was supported by the Office of Naval Research (ONR) Grant N000141010149 under the National Oceanographic Partnership Program (NOPP).
REFERENCES
Aberson, S. D., 2003: Targeted observations to improve operational tropical cyclone track forecast guidance. Mon. Wea. Rev., 131, 1613–1628.
Aberson, S. D., 2008: Large forecast degradations due to synoptic surveillance during the 2004 and 2005 hurricane seasons. Mon. Wea. Rev., 136, 3138–3150.
Ancell, B., and G. J. Hakim, 2007: Comparing adjoint- and ensemble-sensitivity analysis with applications to observation targeting. Mon. Wea. Rev., 135, 4117–4134.
Baker, N., and R. Daley, 2000: Observation and background adjoint sensitivity in the adaptive observation-targeting problem. Quart. J. Roy. Meteor. Soc., 126, 1431–1454.
Bouttier, F., and G. Kelly, 2001: Observing-system experiments in the ECMWF 4D-Var data assimilation system. Quart. J. Roy. Meteor. Soc., 127, 1469–1488.
Buizza, R., and A. Montani, 1999: Targeting observations using singular vectors. J. Atmos. Sci., 56, 2965–2985.
Cardinali, C., 2009: Monitoring the observation impact on the short-range forecast. Quart. J. Roy. Meteor. Soc., 135, 239–250.
Cardinali, C., L. Isaksen, and E. Andersson, 2003: Use and impact of automated aircraft data in a global 4DVAR data assimilation system. Mon. Wea. Rev., 131, 1865–1877.
Chou, K. H., and C. C. Wu, 2008: Typhoon initialization in a mesoscale model—Combination of the bogused vortex and the dropwindsonde data in DOTSTAR. Mon. Wea. Rev., 136, 865–879.
Chou, K. H., C. C. Wu, P. H. Lin, S. D. Aberson, M. Weissmann, F. Harnisch, and T. Nakazawa, 2011: The impact of dropwindsonde observations on typhoon track forecasts in DOTSTAR and T-PARC. Mon. Wea. Rev., 139, 1728–1743.
Elsberry, R. L., and P. A. Harr, 2008: Tropical Cyclone Structure (TCS08) field experiment science basis, observational platforms, and strategy. Asia-Pac. J. Atmos. Sci., 44, 209–231.
English, S., R. Saunders, B. Candy, M. Forsythe, and A. Collard, 2004: Met Office satellite data OSEs. Proc. Third WMO Workshop on the Impact of Various Observing Systems on Numerical Weather Prediction, WMO/Tech. Doc. 1228, Alpbach, Austria, WMO, 146–156.
Gelaro, R., and Y. Zhu, 2009: Examination of observation impacts derived from observing system experiments (OSEs) and adjoint models. Tellus, 61A, 179–193.
Gelaro, R., R. H. Langland, G. D. Rohaly, and T. E. Rosmond, 1999: An assessment of the singular-vector approach to targeted observing using the FASTEX dataset. Quart. J. Roy. Meteor. Soc., 125, 3299–3327.
Gelaro, R., R. H. Langland, S. Pellerin, and R. Todling, 2010: The THORPEX Observation Impact Intercomparison Experiment. Mon. Wea. Rev., 138, 4009–4025.
Goldberg, M. D., Y. Qu, L. M. McMillin, W. Wolf, L. Zhou, and M. Divakarla, 2003: AIRS near-real-time products and algorithms in support of operational numerical weather prediction. IEEE Trans. Geosci. Remote Sens., 41, 379–389.
Harnisch, F., and M. Weissmann, 2010: Sensitivity of typhoon forecasts to different subsets of targeted dropsonde observations. Mon. Wea. Rev., 138, 2664–2680.
Hong, S.-Y., J. Dudhia, and S.-H. Chen, 2004: A revised approach to ice microphysical processes for the bulk parameterization of clouds and precipitation. Mon. Wea. Rev., 132, 103–120.
Hong, S.-Y., Y. Noh, and J. Dudhia, 2006: A new vertical diffusion package with an explicit treatment of entrainment processes. Mon. Wea. Rev., 134, 2318–2341.
Hunt, B. R., and Coauthors, 2004: Four-dimensional ensemble Kalman filtering. Tellus, 56A, 273–277.
Hunt, B. R., E. J. Kostelich, and I. Szunyogh, 2007: Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter. Physica D, 230, 112–126.
Kain, J., and J. Fritsch, 1993: Convective parameterization for mesoscale models: The Kain–Fritsch scheme. The Representation of Cumulus Convection in Numerical Models, Meteor. Monogr., No. 46, Amer. Meteor. Soc., 165–170.
Kelly, G., J.-N. Thépaut, R. Buizza, and C. Cardinali, 2007: The value of observations. I: Data denial experiments for the Atlantic and Pacific. Quart. J. Roy. Meteor. Soc., 133, 1803–1815.
Keyser, D., cited 2010: PREPBUFR processing at NCEP. [Available online at http://www.emc.ncep.noaa.gov/mmb/data_processing/prepbufr.doc/document.htm.]
Komori, T., R. Sakai, H. Yonehara, T. Kadowaki, K. Sato, T. Miyoshi, and M. Yamaguchi, 2010: Total energy singular vector guidance developed at JMA for T-PARC. RSMC Tokyo-Typhoon Center Technical Review, Vol. 12, RSMC Tokyo-Typhoon Center, Tokyo, Japan, 13–27. [Available online at http://www.jma.go.jp/jma/jma-eng/jma-center/rsmc-hp-pub-eg/techrev/text12-1-3.pdf.]
Langland, R. H., 2005: Observation impact during the North Atlantic TReC—2003. Mon. Wea. Rev., 133, 2297–2309.
Langland, R. H., and N. Baker, 2004: Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus, 56A, 189–201.
Langland, R. H., R. Gelaro, G. D. Rohaly, and M. A. Shapiro, 1999: Targeted observations in FASTEX: Adjoint-based targeting procedures and data impact experiments in IOP17 and IOP18. Quart. J. Roy. Meteor. Soc., 125, 3241–3270.
Li, H., J. Liu, and E. Kalnay, 2010: Correction of ‘Estimating observation impact without adjoint model in an ensemble Kalman filter.’ Quart. J. Roy. Meteor. Soc., 136, 1652–1654.
Liu, J., and E. Kalnay, 2008: Estimating observation impact without adjoint model in an ensemble Kalman filter. Quart. J. Roy. Meteor. Soc., 134, 1327–1335.
Lord, S., T. Zapotocny, and J. Jung, 2004: Observing system experiments with NCEP’s global forecast system. Third WMO Workshop on the Impact of Various Observing Systems on Numerical Weather Prediction, WMO/Tech. Doc. 1228, Alpbach, Austria, WMO, 56–62.
Lorenz, E. N., and K. A. Emanuel, 1998: Optimal sites for supplementary weather observations: Simulation with a small model. J. Atmos. Sci., 55, 399–414.
Majumdar, S. J., S. D. Aberson, C. H. Bishop, R. Buizza, M. S. Peng, and C. A. Reynolds, 2006: A comparison of adaptive observing guidance for Atlantic tropical cyclones. Mon. Wea. Rev., 134, 2354–2372.
Miyoshi, T., 2011: The Gaussian approach to adaptive covariance inflation and its implementation with the local ensemble transform Kalman filter. Mon. Wea. Rev., 139, 1519–1535.
Miyoshi, T., and M. Kunii, 2012: The local ensemble transform Kalman filter with the Weather Research and Forecasting model: Experiments with real observations. Pure Appl. Geophys., doi:10.1007/s00024-011-0373-4, in press.
Ott, E., and Coauthors, 2004: A local ensemble Kalman filter for atmospheric data assimilation. Tellus, 56A, 415–428.
Palmer, T. N., R. Gelaro, J. Barkmeijer, and R. Buizza, 1998: Singular vectors, metrics, and adaptive observations. J. Atmos. Sci., 55, 633–653.
Parsons, D., P. Harr, T. Nakazawa, S. Jones, and M. Weissmann, 2008: An overview of the THORPEX-Pacific Asian Regional Campaign (T-PARC) during August–September 2008. Preprints, 28th Conf. on Hurricanes and Tropical Meteorology, Orlando, FL, Amer. Meteor. Soc., 7C.7.
Reynolds, C. A., M. S. Peng, S. J. Majumdar, S. D. Aberson, C. H. Bishop, and R. Buizza, 2007: Interpretation of adaptive observing guidance for Atlantic tropical cyclones. Mon. Wea. Rev., 135, 4006–4029.
Reynolds, C. A., J. D. Doyle, R. M. Hodur, and H. Jin, 2010: Naval Research Laboratory multiscale targeting guidance for T-PARC and TCS-08. Wea. Forecasting, 25, 526–544.
Saito, K., M. Hara, M. Kunii, H. Seko, and M. Yamaguchi, 2011: Comparison of initial perturbation methods for the mesoscale ensemble prediction system of the Meteorological Research Institute for the WWRP Beijing 2008 Olympics Research and Development Project (B08RDP). Tellus, 63A, 445–467.
Skamarock, W. C., J. B. Klemp, J. Dudhia, D. O. Gill, D. M. Barker, W. Wang, and J. G. Powers, 2005: A description of the advanced research WRF version 2. NCAR Tech. Note NCAR/TN-468+STR, 88 pp.
Weissmann, M., and Coauthors, 2011: The influence of assimilating dropsonde data on typhoon track and midlatitude forecasts. Mon. Wea. Rev., 139, 908–920.
Wu, C. C., and Coauthors, 2005: Dropwindsonde observations for typhoon surveillance near the Taiwan region (DOTSTAR): An overview. Bull. Amer. Meteor. Soc., 86, 787–790.
Wu, C. C., K. H. Chou, P. H. Lin, S. D. Aberson, M. S. Peng, and T. Nakazawa, 2007: The impact of dropwindsonde data on typhoon track forecasts in DOTSTAR. Wea. Forecasting, 22, 1157–1176.
Wu, C. C., and Coauthors, 2009: Intercomparison of targeted observation guidance for tropical cyclones in the northwestern Pacific. Mon. Wea. Rev., 137, 2471–2492.
Zhu, Y., and R. Gelaro, 2008: Observation sensitivity calculations using the adjoint of the Gridpoint Statistical Interpolation (GSI) analysis system. Mon. Wea. Rev., 136, 335–351.