## 1. Introduction

Rapidly developing cyclones that form toward the end of the Atlantic and Pacific storm tracks are sometimes difficult to forecast. The sparsity of observational data over the oceans can result in analysis errors, which may grow rapidly in the ensuing forecast. Following the first ideas discussed at a workshop in 1995 (Snyder 1996), several field experiments have been carried out to observe atmospheric circulations in traditionally data-sparse regions and to assess whether the assimilation of extra observations in a target area can improve forecast quality in a downstream verification area. Field experiments include the Fronts and Atlantic Storm Track Experiment (FASTEX; Thorpe and Shapiro 1995), the North Pacific Experiment (NORPEX; Langland et al. 1999), the California Landfalling Jets Experiment (CALJET; Emanuel et al. 1995; Ralph et al. 1998), and the 1999 and 2000 Winter Storm Reconnaissance Experiments (WSR99 and WSR00; Szunyogh et al. 2000, 2001, respectively). Results based on the 18 cases from the Winter Storm Reconnaissance programs (Szunyogh et al. 1999; Toth et al. 2002), for example, indicated forecast improvement in 60%–70% of the cases, with the surface pressure root-mean-square (rms) errors inside preselected verification areas decreasing by 10%. Similarly, results based on four FASTEX cases (Montani et al. 1999) reported a 15% average decrease of the rms forecast error for the 500- and 1000-hPa geopotential height fields.

One of the key problems is that it is not obvious where best to deploy the dropsonde data. Several approaches to identifying the sensitive regions have been proposed and used in targeting campaigns: the sensitivity vectors (Rabier et al. 1996; Langland et al. 1996, 1999; Gelaro et al. 1998), the ensemble transform technique (ETT; Bishop and Toth 1999), and the singular vector (SV) technique (Buizza and Montani 1999; Gelaro et al. 1999). The reader is referred to published literature (e.g., Palmer et al. 1998) for a discussion of similarities and differences among these techniques. Targeting techniques also include the quasi-inverse linear method (Pu et al. 1997, 1999) and the ensemble transform Kalman filter (ETKF; Bishop et al. 2001), which was used operationally as targeting guidance during the 2000, 2001, and 2002 WSR missions.

This study explores the forecast impact results from the assimilation of targeting dropsondes during NORPEX, one of the first experiments designed to investigate the possible benefits of real-time targeting. NORPEX took place in mid-January and February 1998 with dropsondes deployed by National Oceanic and Atmospheric Administration (NOAA) and U.S. Air Force aircraft to improve 1–3-day forecasts over the West Coast of the United States. Two targeting techniques were used during NORPEX: the first one based on the ETKF implemented at the National Centers for Environmental Prediction (NCEP) and the second one based on SVs computed at the Naval Research Laboratory (NRL). In the ETKF an index of analysis sensitivity computed from an ensemble of forecasts determines the target locations, while in the SV technique, target locations are defined by an index based on the weighted average of the initial-time SVs computed to maximize total energy inside the verification area.

The comparison between the two types of forecast, one starting from analyses generated with targeted dropsondes and one without, indicates that targeting was successful in only 7 out of 10 NORPEX cases (see section 3). The fact that targeting does not always reduce the forecast error may have several possible explanations: a wrong definition of the target area; an inconsistency between the assimilation procedure and the definition of the target area (one of the weaknesses of the “energy norm” SV targeting technique is that it does not take into account the characteristics of the data assimilation system used to assimilate targeted observations); a nonoptimal assimilation of the targeted observations; and model errors.

This paper reports results from data assimilation and forecast experiments designed to investigate possible reasons for the small positive impact of targeted dropsonde data on the forecast error in the selected NORPEX cases. Singular vectors are used only as a diagnostic tool to investigate the impact of targeted dropsonde data on the forecast error; thus, the strengths and weaknesses of a targeting technique based on SVs are not discussed, nor is the efficiency of the ETKF and SV targeting techniques compared. After this introduction, section 2 describes the NORPEX campaign and the SV-based diagnostic technique. Results from analysis and forecast experiments are discussed in sections 3 and 4. Conclusions are drawn in section 5.

## 2. Targeting in data-sparse midlatitude regions

### a. The NORPEX campaign

In winter 1997/98, heavy precipitation occurred over parts of California, probably associated with the maximum intensity of El Niño toward the end of January. During this period, the atmospheric circulation was dominated by a strong jet-level flow, with storms releasing large amounts of rain over the California coast. One of the primary goals of the NORPEX campaign was to improve the short-range forecast in a specified forecast verification area (FVA) off the western American coast. During the 27 days of the NORPEX experiment, three NOAA and two U.S. Air Force aircraft released almost 700 dropsondes over the eastern Pacific, with a horizontal separation of 100–250 km. The dropsondes provided vertical profiles of temperature, wind, humidity, and pressure from the aircraft level (300–400 hPa) to the surface. These observations were mainly released at 0000 UTC and were distributed in real time via the Global Telecommunication System (GTS) network to meteorological centers.

Some inconsistencies were found in the humidity values, discouraging their use in the analyses (Jaubert et al. 1999), and thus only wind and temperature measurements have been assimilated. In the assimilation, the radiosonde observation error is assigned to the dropsondes. Targeted observations were released in areas identified by NRL and NCEP using two different techniques: NRL targets were defined using the first four SVs computed with the Navy Operational Global Atmospheric Prediction System (NOGAPS) model with a FVA (30°–60°N, 100°–130°W) and with a 2-day optimization time interval (Langland et al. 1999), while NCEP targets were defined by applying the ETKF technique to NCEP and European Centre for Medium-Range Weather Forecasts (ECMWF) global ensemble forecasts, with a flow-dependent verification area and with variable lead times (1–2 days; Toth et al. 1999; Szunyogh et al. 2000). At ECMWF, data for 10 NORPEX cases, with initial states at 0000 UTC 7, 9, 11, 15, 18, 20, 22, 25, 26, and 27 February 1998, were received. These 10 cases, numbered chronologically from 1 to 10, are investigated in this work.

### b. Singular-vector-based diagnostics

Singular vectors identify the perturbations with the fastest growth during a finite time interval, called the optimization time interval. Singular vectors can grow either from the analysis time *t* = 0 over the optimization time interval (“analysis SVs”) or from an initial forecast time to a final forecast time (“forecast SVs”). Either type of SV forms an orthogonal basis at the initial and final times with respect to the chosen metric. The appendix briefly summarizes the mathematical definition of the SVs.

At ECMWF, analysis SVs have been used routinely to define the initial perturbations of the Ensemble Prediction System (EPS; Buizza and Palmer 1995; Molteni et al. 1996). The rationale of this choice was that the analysis error component along the leading SVs dominates the forecast error. At the time of writing (September 2002), the SVs used to define the perturbed analysis of the ECMWF EPS are computed with a T42L40 resolution (spectral triangular truncation T42 with 40 vertical levels) and with a 48-h optimization time interval.

The use of the SVs in targeting applications is the natural extension of published results (Buizza et al. 1997; Gelaro et al. 1998) showing that the correction of the initial conditions with a perturbation defined by the leading SVs can significantly improve the forecast skill inside a chosen FVA. More specifically, a linear combination of the leading SVs can be used to define the pseudoinverse initial perturbation, which can correct most of the forecast error inside the FVA. The pseudoinverse is computed from the projection of the forecast error onto the evolved SVs inside the FVA (appendix). The forecast error reduction induced by the pseudoinverse initial perturbation provides an upper bound on the reduction that can be achieved by adding a small perturbation to the initial conditions (Buizza and Montani 1999). For this reason, the pseudoinverse initial perturbation is used as a reference initial perturbation in this study.
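A minimal sketch of the pseudoinverse construction, assuming the initial and evolved SVs are available as orthonormal rows and ignoring the restriction to the FVA (all names are illustrative, not part of any ECMWF code):

```python
import numpy as np

def pseudoinverse_perturbation(fc_error, evolved_svs, initial_svs, sigmas):
    """Sketch of a pseudoinverse initial perturbation.

    fc_error    : forecast error as a 1D state vector
    evolved_svs : rows are the evolved (final-time) SVs, orthonormal
    initial_svs : rows are the corresponding initial-time SVs, orthonormal
    sigmas      : singular values (amplification factors)
    """
    # Project the forecast error onto the evolved SVs ...
    coeffs = evolved_svs @ fc_error
    # ... and pull each component back to initial time, dividing by the
    # amplification factor; the minus sign makes the perturbation, once
    # added to the analysis, cancel the projected part of the error.
    return -(initial_svs.T @ (coeffs / sigmas))
```

In a toy linear model whose propagator is exactly described by the SVs, adding this perturbation to the initial conditions removes the projected error entirely; in practice only the part of the error inside the SV subspace is corrected.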

It should be pointed out that during real-time targeting experiments forecast SVs growing from a 24- or 36-h forecast are used instead of analysis SVs to allow a sufficient lead time for flight preparation. By contrast, analysis SVs are used in this study as a diagnostic tool for an a posteriori assessment of the impact of targeted observations in 10 NORPEX cases. As shown by Gelaro et al. (1999), analysis SVs are more appropriate than forecast SVs for diagnosing forecast behavior and investigating possible reasons of success and failure of targeting experiments. The reader is also referred to Buizza and Montani (1999) for a discussion on the similarities among analysis and forecast SVs computed with different lead times.

The analysis SVs (hereafter simply called SVs) have been computed with a T63L31 model version with simplified dry (i.e., without any moist process included in the tangent and adjoint model version) physics (Buizza 1994), with a 48-h optimization time period and with the final-time total energy optimized inside a fixed FVA defined by the coordinates 30°–60°N and 100°–130°W (i.e., the area used by NRL during the real-time experiment). Note that the same resolution T63L31 is used in the four-dimensional variational data assimilation (4DVAR) experiments to compute the analysis increments (see section 3).

Computer resources limit to a few tens the number of SVs that can be routinely computed in a reasonable amount of time. Figure 1 shows the percentage of forecast error that projects onto different numbers of leading SVs. Results show that this percentage increases rapidly, reaching 44% (on average) for the first 10 vectors, while it increases by only another 3% (on average) when the next 10 leading SVs are added. This suggests that using 10 SVs is a good compromise between a realistic representation of the fast-growing error component and the high computational cost of the SVs. It is worth mentioning that during the NORPEX campaign, only four SVs were used to identify sensitive regions (Langland et al. 1999).
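The kind of projection diagnostic shown in Fig. 1 can be illustrated as follows (a toy sketch using a Euclidean norm in place of the total energy norm, not the operational computation):

```python
import numpy as np

def explained_percentage(error, evolved_svs, n):
    """Percentage of the error norm captured by projecting onto the
    leading n evolved SVs (rows of evolved_svs, assumed orthonormal)."""
    coeffs = evolved_svs[:n] @ error          # projection coefficients
    projected_norm = np.linalg.norm(coeffs)   # norm of the projected part
    return 100.0 * projected_norm / np.linalg.norm(error)
```

Scanning `n` from 1 upward reproduces the kind of saturation curve described above: the percentage rises quickly for the first few vectors and then flattens.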

Singular vector growth is measured using a total energy norm, which is the most commonly used metric in predictability studies (Palmer et al. 1998). As a consequence, the SVs (and the pseudoinverse) are computed without any knowledge of the characteristics of the data assimilation system used to generate the analysis (e.g., observation and background covariance matrices). Thus, there is no guarantee that the pseudoinverse perturbation is similar to the modification induced by the assimilation of the extra targeted observations. In the adaptive observation problem, information about the analysis error distribution can have a significant impact on targeting location and on the optimal sampling of the observations. Ehrendorfer and Tribbia (1997) suggested a way to link SV structures and data assimilation systems by using as the initial norm an estimate of the analysis error covariance matrix (the Hessian). Barkmeijer et al. (1999) computed Hessian SVs and compared their characteristics with the routinely computed total-energy SVs. Their results did not show any significant change in the percentage of forecast error explained by the leading SVs; the structures, however, can be different (Leutbecher et al. 2002).

In this study, the leading 10 SVs define the subspace where initial conditions are expected to be modified by the assimilation of dropsonde observations. The net effect of assimilating dropsonde data is represented by the difference between the analyses computed with and without them. Hereafter, this difference is named the dropsonde-induced analysis difference. In order to understand the role of targeted observations on the forecast error, the relationships between the dropsonde-induced analysis difference, the SV subspace, and the pseudoinverse have been investigated.

## 3. Methodology: SV-based diagnostics and forecast error

Two data assimilation experiments have been performed: a control experiment (experiment C) with all the observations operationally used at ECMWF but with no dropsonde data assimilated, and an experiment with the NORPEX dropsonde observations also included (experiment D). Both experiments used the ECMWF 4DVAR system (Rabier et al. 2000; Mahfouf and Rabier 2000; Klinker et al. 2000) in a configuration with T319L31 (T319 spectral triangular truncation with 31 vertical levels) high-resolution model integrations with full physical parameterizations and T63L31 low-resolution minimizations with simplified physics (Mahfouf 1999). During the assimilation, the high-resolution 6-h forecast is compared with all available observations over a 6-h period, while the analysis increments are computed at T63L31 resolution. The C and D analyses have been generated via continuous data assimilation, and 2-day forecasts have been performed for the 10 NORPEX cases.

Let *f*^{c} and *f*^{d} be the 48-h forecasts started from the initial analyses *a*^{c}_{0} and *a*^{d}_{0}, and let *a*^{c} and *a*^{d} be the C and D analyses verifying the 48-h forecasts (*t* = 0 is the targeting time). The C and D forecast errors are given by

*e*^{c} = *f*^{c} − *a*^{c} and *e*^{d} = *f*^{d} − *a*^{d},

and the dropsonde-induced analysis difference by *da* = *a*^{d}_{0} − *a*^{c}_{0}.

Define *δa* as the T63 truncation of the analysis difference *da,* restricted to an area *T* centered on the region where the dropsondes were released (Pacific, 20°–60°N, 140°–240°E) and expressed in terms of upper-air vorticity, divergence, and temperature, and surface pressure components. The T63 spectral truncation and the exclusion of the specific humidity component make *δa* comparable to the SVs and the pseudoinverse. The geographical restriction to the area *T* guarantees that, for each case, the dropsonde-induced analysis perturbation *δa* is mostly determined by the dropsondes released on that precise day. Results discussed in the following sections will indicate that approximating *da* with *δa* has a negligible impact on forecast error evolution inside the FVA in 9 out of the 10 cases.
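The construction of *δa* from *da* (variable selection aside) combines a geographical restriction to the area *T* and a spectral truncation. This can be mimicked on a toy periodic grid (a rough sketch only; the actual computation uses spherical harmonics at T63, not the 1D FFT below, and all names are illustrative):

```python
import numpy as np

def localize_and_truncate(da, mask, keep_wavenumber):
    """Restrict an analysis-difference field to a target area (mask)
    and low-pass it to wavenumbers <= keep_wavenumber (toy 1D analog
    of the T63 spectral truncation)."""
    field = da * mask                       # geographical restriction
    spec = np.fft.rfft(field)
    spec[keep_wavenumber + 1:] = 0.0        # drop the small scales
    return np.fft.irfft(spec, n=field.size)
```

Applied to a field containing a large-scale and a small-scale wave, only the large-scale part inside the masked area survives, which is the intended effect of the *δa* construction.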

Initial perturbations have been constructed from *δa* in two different ways to allow the investigation of the forecast-error impact of the dropsonde-induced analysis difference and of its relationship with the leading SVs and the pseudoinverse initial perturbation. The first initial perturbation has been defined by decomposing *δa* as

*δa* = *δa*_{‖} + *δa*_{⊥},

where *δa*_{‖} and *δa*_{⊥} are the components parallel and orthogonal to the phase-space direction defined by the pseudoinverse initial perturbation *δp* (see appendix); *δa*_{‖} defines the first initial perturbation.

The second and third initial perturbations have been defined by decomposing *δa* as

*δa* = *δa*_{SV} + *δa*_{⊥SV},

where *δa*_{SV} is the projection of *δa* onto the subspace defined by the leading 10 SVs, and *δa*_{⊥SV} is the complement of *δa*_{SV}. Given the pseudoinverse *δp,* the three initial perturbations *δa*_{‖}, *δa*_{SV}, and *δa*_{⊥SV}, and the full perturbation *δa,* five perturbed initial conditions have been defined by adding each of them to the C analysis:

*a*^{1}_{0} = *a*^{c}_{0} + *δp,* *a*^{2}_{0} = *a*^{c}_{0} + *δa*_{‖}, *a*^{3}_{0} = *a*^{c}_{0} + *δa*_{SV}, *a*^{3⊥}_{0} = *a*^{c}_{0} + *δa*_{⊥SV}, and *a*^{4}_{0} = *a*^{c}_{0} + *δa.*

The corresponding 48-h forecasts *f*^{1}, *f*^{2}, *f*^{3}, *f*^{3⊥}, and *f*^{4} are compared with the control forecast as follows:

- forecasts *f*^{1} and *f*^{c} are compared to assess the impact of the pseudoinverse perturbation;
- forecasts *f*^{2} and *f*^{c} are compared to assess the impact of the *δa* component along the pseudoinverse;
- forecasts *f*^{3}, *f*^{3⊥}, and *f*^{c} are compared to assess the impact of the *δa* component that belongs to the subspace defined by the leading 10 SVs and of its complement;
- forecasts *f*^{4} and *f*^{c} are compared to assess the impact of the dropsonde-induced perturbation *δa.*

For each forecast *f*^{j}, the forecast error norm ‖*e*^{j}‖ is measured by the square root of the total energy norm inside the FVA (vertically integrated). The relative forecast error RE(·) is the change (in percentage) of the forecast error with respect to the control forecast,

RE(*f*^{j}) = 100 × (‖*e*^{j}‖ − ‖*e*^{c}‖)/‖*e*^{c}‖,

so that a negative RE(*f*^{j}) indicates that *f*^{j} has a smaller error than the control forecast.
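This relative-error diagnostic can be written compactly as follows (illustrative sketch; the Euclidean norm below stands in for the vertically integrated total energy norm inside the FVA):

```python
import numpy as np

def relative_error(err_j, err_c):
    """RE(f^j): percentage change of the forecast error norm of f^j
    with respect to the control forecast; negative means the perturbed
    forecast beats the control."""
    norm_j = np.linalg.norm(err_j)
    norm_c = np.linalg.norm(err_c)
    return 100.0 * (norm_j - norm_c) / norm_c
```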

### a. Impact on the forecast error of the initial perturbation δa and role of specific humidity

First, the impact of approximating *da* by *δa* is investigated. Figure 2 shows the relative forecast error of the forecast *f*^{d} started from the *D* analysis (*a*^{d}_{0} = *a*^{c}_{0} + *da*) and of the forecast *f*^{4} started from the control analysis plus the truncated and localized dropsonde-induced analysis perturbation (*a*^{4}_{0} = *a*^{c}_{0} + *δa*). Note that the difference between RE(*f*^{4}) and RE(*f*^{d}) is very small (smaller than 0.02 for five cases and between 0.02 and 0.04 for four cases) for all but one case, case 3, for which the difference is 0.15. Figure 2 also indicates that the T63 upper-air vorticity, divergence, temperature, and surface pressure components of the dropsonde-induced analysis difference dominate its time evolution with respect to the smaller scales (beyond T63) and to the specific humidity component. This is not surprising, since in this study the analysis increments are computed at T63 resolution, while the higher T319 resolution is used only when the model trajectory and the observations are compared at the observation points.

Another perturbed analysis, *a*^{q}_{0}, has been generated to verify whether the lack of a humidity component in *δa* is the main reason for the difference between RE(*f*^{4}) and RE(*f*^{d}) in case 3. The analysis *a*^{q}_{0} coincides with *a*^{4}_{0} except for the humidity field, which is taken from *a*^{d}_{0} (i.e., *a*^{q}_{0} is defined by adding to the control analysis both *δa* and the humidity analysis perturbation induced by the assimilation of the dropsonde data). Figure 2 shows that RE(*f*^{q}) is very similar to RE(*f*^{d}), with differences smaller than 2% for all cases including case 3, suggesting that the difference between RE(*f*^{4}) and RE(*f*^{d}) for this case is indeed due to the lack of a humidity component in *δa.* The fact that the assimilation of temperature and wind profiles from targeted observations can induce changes in the specific humidity field is not surprising. In fact, although dropsonde specific humidity is not directly assimilated at ECMWF, mass and wind observations can generate humidity increments because of the dynamical link between temperature and humidity induced by the virtual temperature and by the action of the simplified linearized physics used in the minimization.

### b. Impact on the forecast error of the initial perturbations δa_{SV} and δa_{⊥SV}

The relative amplitude of the two components *δa*_{SV} and *δa*_{⊥SV} has been measured by the *ψ* index, defined as the percentage ratio between the norm of *δa*_{SV} and the norm of *δa*:

*ψ* = 100 × ‖*δa*_{SV}‖/‖*δa*‖.

Table 1 shows that on average *ψ* ≅ 6% with peak value of *ψ* = 9% for cases 1 and 6. This indicates that the projection of the dropsonde-induced difference onto the subspace defined by the leading 10 SVs is small (less than one-tenth of the total analysis difference).
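Assuming the SVs form an orthonormal basis in the chosen inner product, the split of *δa* into its SV-subspace component and the complement, together with a *ψ*-like ratio, can be sketched as (illustrative names, Euclidean inner product standing in for the total energy inner product):

```python
import numpy as np

def sv_decomposition(delta_a, svs):
    """Split delta_a into its component inside the space spanned by the
    rows of svs (assumed orthonormal) and the orthogonal complement;
    return both components and the psi index (percent)."""
    coeffs = svs @ delta_a
    da_sv = svs.T @ coeffs                # projection onto SV subspace
    da_perp = delta_a - da_sv             # orthogonal complement
    psi = 100.0 * np.linalg.norm(da_sv) / np.linalg.norm(delta_a)
    return da_sv, da_perp, psi
```

A small *ψ*, as found here, means most of the dropsonde-induced difference lies in the complement `da_perp`.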

Figure 3 shows the relative forecast errors RE(*f*^{4}) (started from *a*^{4}_{0} = *a*^{c}_{0} + *δa*), RE(*f*^{3}) (started from *a*^{3}_{0} = *a*^{c}_{0} + *δa*_{SV}), and RE(*f*^{3⊥}) (started from *a*^{3⊥}_{0} = *a*^{c}_{0} + *δa*_{⊥SV}) for the 10 cases. For five cases (2, 4, 5, 8, and 10), RE(*f*^{4}) ∼ RE(*f*^{3⊥}), indicating that the component *δa*_{⊥SV} determines the impact of *δa* on the forecast error. For three other cases (3, 6, and 9), RE(*f*^{4}) ∼ RE(*f*^{3})—that is, the impact of *δa* on the forecast error is determined by *δa*_{SV}—while for the last two cases (1 and 7) both components have a comparable contribution.

Overall, these results indicate that the component of the dropsonde-induced perturbation along the leading 10 SVs, *δa*_{SV}, dominates the forecast evolution in only three cases, while the complement perturbation, *δa*_{⊥SV}, dominates it in five. It is worth noting that the amplitude of the forecast impact [|RE(*f*^{3})|] is not much related to the amplitude *ψ*; one would have expected larger |RE(*f*^{3})| for larger *ψ,* but only case 6, with maximum amplitude, shows the largest forecast impact over the 10 cases.

### c. Impact on the forecast error of the pseudoinverse δp and δa_{‖}

Figure 4 shows the relative forecast errors RE(*f*^{4}) (started from *a*^{4}_{0} = *a*^{c}_{0} + *δa*), RE(*f*^{1}) (started from *a*^{1}_{0} = *a*^{c}_{0} + *δp*), and RE(*f*^{2}) (started from *a*^{2}_{0} = *a*^{c}_{0} + *δa*_{‖}) for the 10 cases. RE(*f*^{1}) < 0 indicates that the pseudoinverse *δp* always induces a forecast error reduction, and RE(*f*^{1}) < RE(*f*^{4}) indicates that the pseudoinverse *δp* always corrects the forecast error more than *δa.* The fact that RE(*f*^{1}) < 0 is qualitatively in agreement with the result expected if the pseudoinverse time evolution were linear, but it should be pointed out that there is a disagreement between the average forecast error reduction 〈RE(*f*^{1})〉 = 10% (Fig. 4) and the forecast error projection onto the leading 10 SVs, which is on average 44% (Fig. 1). This discrepancy indicates that nonlinear processes have an important impact on the time evolution of the pseudoinverse. Other possible reasons for this disagreement lie in the simplified physical processes described in the tangent and adjoint model versions used to compute the SVs (e.g., moist processes and radiation are not included; Buizza and Montani 1999; Gilmour et al. 2001).

To further quantify the similarity between the pseudoinverse *δp* and the analysis difference *δa,* two other indices have been defined. The first index *ρ* is the (signed) ratio between *δa*_{‖} and the pseudoinverse,

*ρ* = 〈*δa*; *δp*〉/〈*δp*; *δp*〉 (so that *δa*_{‖} = *ρ* *δp*),

where positive (negative) *ρ* values indicate that *δa*_{‖} points along the same (opposite) direction as the pseudoinverse. If |*ρ*| = 1, then *δa*_{‖} has the same amplitude as the pseudoinverse perturbation. The second index is the angle *α* between the two vectors *δp* and *δa*:

cos *α* = 〈*δa*; *δp*〉/(‖*δa*‖ ‖*δp*‖),

where 〈·;·〉 denotes the total energy inner product.

Table 2 shows that |*ρ*| is smaller than 0.1 and *α* is close to 90° for all but four cases (3, 4, 6, and 7): cases 6 and 7, which have |*ρ*| > 0.5, and cases 3 and 4, which have 0.1 < |*ρ*| < 0.5. Table 2 thus indicates that, in most cases, the dropsonde-induced analysis difference *δa* has only a small component along the pseudoinverse and points in an almost perpendicular direction. In other words, the dropsonde-induced difference and the pseudoinverse perturbation are similar in two cases (cases 6 and 7) and different or very different in the other eight cases.

Figure 4 shows the impact on the forecast error of *δa*_{‖} and *δp.* Consider first the four cases with |*ρ*| > 0.1 and smaller *α* (cases 3, 4, 6, and 7). Results show that for cases 6 and 7, characterized by the largest positive *ρ* (*ρ* = 0.60 and 0.58, respectively), RE(*f*^{2}) ∼ RE(*f*^{1}). For case 4 (*ρ* = 0.14), RE(*f*^{2}) ∼ 0.2 RE(*f*^{1}), while for case 3 (*ρ* = −0.28), RE(*f*^{2}) is about 3 times smaller and has the opposite sign of RE(*f*^{1}). For the other six cases characterized by |*ρ*| < 0.1 there is no correspondence between *ρ* and the forecast error impact of *δa*_{‖} and *δp.*

Despite the fact that clear-cut conclusions cannot be drawn from this set of results, the indication is that *δa* and *δp* have a similar impact on the forecast error when a large enough fraction of the dropsonde-induced analysis difference *δa* projects onto the pseudoinverse *δp,* say when *ρ* > 0.58 (2 out of 10 cases). Results also show that there is still a certain degree of agreement when 0.14 < |*ρ*| < 0.58 (2 out of 10 cases), but that no relationship can be found when |*ρ*| < 0.1.

### d. Dropsonde location

The results discussed in the previous sections have indicated that the dropsonde-induced analysis difference has a small component on the subspace spanned by the leading SVs and that, on average, there is very little agreement between the dropsonde-induced analysis difference and the pseudoinverse. One possible reason for these disagreements could be that the dropsondes were released in areas that did not coincide with the areas of maximum concentration of the (analysis) SVs used in this study to define both *δa*_{SV} and *δp.* The analysis SVs, which sample a very similar area to the forecast SVs (cf. Fig. 10 in this paper with Fig. 5 of Majumdar et al. 2002), can be used to map the general location of maximum sensitivity of the real-time leading SVs.

The agreement between the locations of maximum SV concentration and the dropsondes has been quantified by the dropsonde location efficiency (DLE) index, defined as the sum of the SV energy (weighted mean of total energy) at the observation locations divided by the sum of the SV energy over the area *T.* Large DLE values indicate that grid points with high average SV concentration are sampled (DLE = 1 if the dropsondes sampled the whole area identified by the leading SVs, and DLE = 0 if the dropsondes were released outside the area sampled by the SVs). Figure 5 shows a scatterplot of the moduli of RE(*f*^{3}) as a function of DLE. The moduli of RE(*f*^{3}) are strongly correlated with DLE: despite the small sample size, the correlation found (0.81) is significantly different from zero (*p* value less than 0.01), and the regression line has a significant positive slope, 0.63, while the intercept is not significant (*p* value = 0.14). The data thus show a clear relationship. On average, the dropsondes sample 2.3% of the area of maximum concentration identified by the analysis SVs, and in the most successful case (case 6; Fig. 5) DLE reaches its maximum value of 8%. The scatterplot shows that the impact of *δa*_{SV} is large for cases with large DLE.
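A DLE-like ratio can be sketched as follows, assuming the weighted SV energy is given on the grid points of the area *T* and the dropsonde locations as grid indices (all names are hypothetical, not taken from any operational code):

```python
import numpy as np

def dropsonde_location_efficiency(sv_energy, sonde_indices):
    """DLE: fraction of the (weighted) SV energy over the target area
    that is sampled at the dropsonde grid points.

    sv_energy     : SV energy at each grid point of the area T
    sonde_indices : indices of the grid points sampled by dropsondes
    """
    sampled = sv_energy[np.asarray(sonde_indices)].sum()
    return sampled / sv_energy.sum()
```

By construction the index is 1 when every grid point of the SV-identified area is sampled and 0 when no sampled point carries SV energy, matching the two limiting situations described above.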

## 4. Case studies

A detailed discussion of two cases is reported hereafter to give the reader a visual and more complete picture of the relationship between the dropsonde-induced perturbation *δa*; the pseudoinverse *δp*; and the three defined initial perturbations *δa*_{SV}, *δa*_{⊥SV}, and *δa*_{‖}. Cases 5 and 6 have been selected because for both of them *δa* has a positive impact on the forecast error [i.e., it reduces the forecast error, RE(*f*^{4}) < 0; see Fig. 2], but the impact depends on the evolution of different components. For case 5, the forecast error reduction (see Fig. 3) is due mainly to the evolution of the dropsonde-induced analysis component *δa*_{⊥SV}, while for case 6 it is due mainly to *δa*_{SV}. Moreover, case 5 can be considered a typical case with a negligible projection of *δa* onto the pseudoinverse (*ρ* = −0.05; see section 3c), and case 6 a typical case with a large positive projection (*ρ* = 0.6). All maps and their corresponding discussion refer to the 500-hPa geopotential height field.

### a. Case 5 (18 February)

On 18 February, 17 dropsondes were released in a flight mission from Honolulu. Figure 6a shows the dropsonde-induced analysis difference *δa,* and Fig. 6c shows the component *δa*_{SV} in the subspace spanned by the leading SVs. This comparison shows that *δa* and *δa*_{SV} are different (*ψ* = 6.4%; see Table 1), with *δa* characterized by a 5-times-deeper structure. It is interesting to compare the errors of the 48-h forecasts started from the perturbed initial conditions and valid on 20 February at 0000 UTC. Figure 6 shows the forecast absolute-error differences between |*e*^{4}| and |*e*^{c}| (Fig. 6b), |*e*^{3}| and |*e*^{c}| (Fig. 6d), and |*e*^{3⊥}| and |*e*^{c}| (Fig. 6e). The first thing to note is that the pattern and the intensity of the absolute-error differences shown in Figs. 6b and 6e are very similar and both rather different from Fig. 6d. This confirms the results shown in Fig. 3 that for this case the evolutions of *δa* and *δa*_{⊥SV} reduce the forecast error, while *δa*_{SV} acts to increase it. The impact on the forecast error inside the FVA has been quantified by computing the normalized rms error (rmse) difference [i.e., (rmse^{j} − rmse^{c})/rmse^{c}]: it is 3% for *j* = 3 (Fig. 6d) and −12% for *j* = 3⊥ (Fig. 6e), with a comparable error reduction for *j* = 4 (Fig. 6b).

Figure 7a shows *δp* and Fig. 7c shows *δa*_{‖}. These two initial perturbations are (by construction) identical in shape but have an opposite sign and a very different magnitude, *δa*_{‖} being about 15 times smaller (*ρ* = −0.05; see Table 2). Note that the pseudoinverse (Fig. 7a) is very different from the dropsonde-induced analysis difference *δa* (Fig. 6a) and from *δa*_{SV} (Fig. 6c). Figure 7 shows the absolute-error difference between |*e*^{1}| and |*e*^{c}|, and |*e*^{2}| and |*e*^{c}|. Figure 7b shows that the pseudoinverse reduces the forecast error over the whole FVA (gray shaded contours), while *δa*_{‖} slightly increases the forecast error (Fig. 7d), in agreement with the fact that *ρ* is negative and with the RE(·) results shown in Fig. 4. The normalized difference between the rmse is −35% for *f*^{1} (Fig. 7b) and 2% for *f*^{2} (Fig. 7d).

Figure 8a shows the area of maximum SV concentration, defined as the average of the SV total energy weighted by the amplification factor, and the dropsondes' locations. It can be seen that the dropsondes sample only a small region of the downstream part of the area of maximum SV concentration. For this case, DLE = 2.3% (Fig. 5).

### b. Case 6 (20 February)

On 20 February, 40 dropsondes were released from Hawaii and west of Cape Mendocino. The western flight track was selected by NRL and the eastern track by NCEP. Sondes were deployed on the anticyclonic-shear side of the upper-level jet, with a good definition of gradients across the lower-tropospheric baroclinic zone (R. H. Langland 2001, personal communication). Figures 9 and 10 are the equivalent of Figs. 6 and 7 but for this case. Figure 9a shows that *δa* is characterized by an elongated pattern in the subtropical steering flow, with a first positive maximum centered on the date line, a dipole structure around 150°W, and a final maximum close to the eastern border of the target area *T.* Figure 9c shows that *δa*_{SV} is smaller in amplitude than *δa* (contours are 10 times smaller than in Fig. 9a), with one maximum east of the date line in correspondence with the first *δa* maximum and an elongated dipole structure close to the east border of the target area *T.* Consider now the 48-h forecast valid on 22 February at 0000 UTC. The differences between the absolute errors |*e*^{4}| − |*e*^{c}| (Fig. 9b) and |*e*^{3}| − |*e*^{c}| (Fig. 9d) are very similar, and both are dissimilar to |*e*^{3⊥}| − |*e*^{c}| (Fig. 9e). The normalized rmse differences inside the FVA are −31% for *f*^{4} (Fig. 9b), −22% for *f*^{3} (Fig. 9d), and −8% for *f*^{3⊥} (Fig. 9e).

Figure 10a shows the pseudoinverse, and Fig. 10c shows the analysis component along the pseudoinverse. The two patterns are identical in shape and sign but have different amplitudes, *δa*_{‖} being about 2 times smaller (*ρ* = 0.60; see Table 2). Note that for this case the pseudoinverse *δp* (Fig. 10a) and the dropsonde-induced analysis difference *δa* (Fig. 9a) both have a maximum east of the date line, and that *δp* (Fig. 10a) and *δa*_{SV} (Fig. 9c) are very similar in shape. Figure 10b shows the difference between |*e*^{1}| and |*e*^{c}|, and Fig. 10d shows the difference between |*e*^{2}| and |*e*^{c}|. These forecast-error differences are very similar in shape, with normalized rmse differences inside the FVA of −27% for *f*^{1} (Fig. 10b) and −17% for *f*^{2} (Fig. 10d).

Figure 8b shows the area of maximum SV concentration and the dropsondes' locations. The dropsondes sample one of the two maxima of the SV location. Compared to case 5 (Fig. 8a), there is better agreement between the dropsondes' locations and the area of maximum SV location (DLE = 8%; Fig. 5).

## 5. Conclusions

Targeted observations are designed to reduce initial uncertainties in the target region *T* and, consequently, the forecast error inside the forecast verification area (FVA). However, mixed forecast results have been obtained from the assimilation of targeted observations during 10 cases of the NORPEX field experiment. Results have in fact indicated that, on average, the assimilation of targeted data led to a ∼2% reduction of the forecast error measured in terms of integrated total energy, with a peak reduction of 9% (for 2 of the 10 cases). These results cannot be directly compared to the 10% average value obtained by Szunyogh et al. (2000) or to the 15% obtained by Montani et al. (1999), because those were based on single-level fields and not on vertically integrated measures as here. Moreover, the Szunyogh et al. (2000) and Montani et al. (1999) results refer to 1999, while this study refers to 1998, and the 2 yr are known to be characterized by very different circulation regimes. In 1998 there was an El Niño, characterized by a predominantly zonal flow with a very strong upper-level jet, whereas in 1999 there was a La Niña, characterized by a blocked circulation over the western Pacific, a deep trough over Japan, and a more pronounced ridge centered on the Pacific.

This paper has investigated possible reasons for the small or negative impact of the targeted observations using a SV-based diagnostic technique. Singular vectors identify the phase–space directions along which perturbation growth is maximized during a finite-time interval, and can be used to define a set of diagnostic tools and concepts. For each case, the leading 10 analysis SVs—that is, SVs evolving from the analysis time and growing during a 48-h time interval to maximize the total energy norm inside the FVA—have been computed with a T63L31 resolution model (spectral triangular truncation T63 and 31 vertical levels). The choice of a T63L31 resolution is a compromise between the need to resolve small scales and the limitation of computer processing time. The FVA has been set to 30°–60°N, 100°–130°W.

In the first part of this work, the percentage of forecast error explained by a variable number of leading singular vectors has been computed. Results have shown that 44% of the forecast error inside the FVA can be explained by using the 10 leading SVs, and that the use of an additional 10 SVs only adds a further 3% to this percentage. Following this result, only the leading 10 SVs have been used to define the pseudoinverse initial perturbation, which can correct most of the forecast error inside the FVA. The fact that the leading 10 SVs define dynamically important directions has been confirmed, since the pseudoinverse initial perturbation, when added to the control analysis, has always reduced the forecast error (on average by 10% when forecast error is measured in terms of vertically integrated total energy; see also Buizza et al. 1997 and Gelaro et al. 1998). The pseudoinverse initial perturbation has been used as a reference in this study.

To investigate the relationship between the dropsonde-induced analysis difference *δa*, the leading 10 SVs, and the pseudoinverse *δp*, three initial perturbations have been defined: the dropsonde-induced analysis difference component that belongs to the subspace defined by the 10 leading SVs (*δa*_{SV}), its complement (*δa*_{⊥SV}), and the *δa* component along the pseudoinverse (*δa*_{‖}). Three indices have been defined to measure the similarity between the five initial perturbations *δa*, *δa*_{SV}, *δa*_{⊥SV}, *δa*_{‖}, and *δp*. All these initial perturbations have been defined in terms of the model's vorticity, divergence, temperature, and surface pressure fields, thus excluding the humidity field. Changes in the humidity field due to the assimilation of wind and temperature from dropsondes have been shown not to affect the forecast error in all but case 3, for which humidity increments were shown to have increased the forecast error by 15%. Once the initial perturbations had been defined, 48-h forecasts were run from the perturbed initial conditions and the forecasts were compared.
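The three similarity indices can be illustrated with a small numerical sketch. This is not the authors' code: the energy weights, the SV basis, and the two perturbation vectors below are random stand-ins, and the index definitions simply follow the way *ψ*, *ρ*, and *α* are used in the text (fraction projecting onto the SV subspace, relative amplitude of the component along the pseudoinverse, and angle between the two vectors).

```python
# Sketch of the three similarity indices: psi (fraction of the perturbation
# lying in the leading-SV subspace), rho (relative amplitude of the component
# along the pseudoinverse), and alpha (angle between the two vectors).  All
# quantities are measured in a total-energy inner product <x; Ey> with a
# diagonal weight matrix E; everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 10                              # state dimension, leading SVs
E = np.diag(rng.uniform(0.5, 2.0, n))      # illustrative energy weights

def e_dot(x, y):                           # total-energy inner product
    return x @ E @ y

def e_norm(x):
    return np.sqrt(e_dot(x, x))

# Build an E-orthonormal basis for the leading-SV subspace (Gram-Schmidt).
V = rng.standard_normal((n, m))
for i in range(m):
    for j in range(i):
        V[:, i] -= e_dot(V[:, j], V[:, i]) * V[:, j]
    V[:, i] /= e_norm(V[:, i])

da = rng.standard_normal(n)                # stand-in for delta-a
dp = rng.standard_normal(n)                # stand-in for delta-p

# psi: fraction of da projecting onto the SV subspace.
da_sv = V @ np.array([e_dot(V[:, j], da) for j in range(m)])
psi = e_norm(da_sv) / e_norm(da)

# rho: amplitude of the da component along dp, relative to dp itself.
rho = e_dot(da, dp) / e_dot(dp, dp)

# alpha: angle (deg) between da and dp in the energy metric.
alpha = np.degrees(np.arccos(e_dot(da, dp) / (e_norm(da) * e_norm(dp))))
```

For unrelated random vectors in a moderately high-dimensional space, *α* tends to lie close to 90°, which is consistent with small projection indices going hand in hand with near-orthogonality.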

Results have shown that on average only 6% of the dropsonde-induced perturbation *δa* projects onto the subspace spanned by the leading 10 SVs (*ψ* ∼ 6%); that is, most of *δa* lies in the subspace orthogonal to the 10 leading SVs. Considering the impact on the forecast error, it has been shown that *δa*_{SV} is dominant in 3 of the 10 cases and *δa*_{⊥SV} in the remaining cases.

Results have also indicated that in 6 out of 10 cases less than 8% of the dropsonde-induced perturbation projects onto the pseudoinverse (*ρ* < 0.08; see Table 2); in two cases the projection was ∼25%, and in two other cases it was ∼60% (0.14 < *ρ* < 0.60; see Table 2). Consistently, the two vectors have been almost orthogonal (*α* ∼ 90°) in 6 out of 10 cases. In the two cases with the largest projections (*ρ* = 0.58 and *ρ* = 0.60), the forecast-error reductions induced by the pseudoinverse and by the dropsonde-induced perturbation have been very similar.

One of the reasons for the small projection of the dropsonde-induced analysis perturbation onto the leading 10 SVs is the limited degree of overlap between the region spanned by the dropsondes and the region of maximum SV concentration. Only case 6, which is characterized by the best agreement between the SV and the dropsonde locations (Figs. 5 and 8), shows a close agreement between the forecast-error reductions obtained by correcting the initial conditions with the pseudoinverse and with the dropsonde-induced analysis perturbation (Fig. 4).

In four cases, the pseudoinverse and the analysis component along it had opposite signs. In particular, case 3, with a quite large DLE (4%) and *ρ* (−28%), had an opposite forecast impact because of the opposite sign of the two perturbations.

This can be due to the fact that the effect of the observations on the analysis depends on properties of the assimilation system that are not considered when computing the leading total-energy SVs (e.g., the analysis error covariance matrix, which defines the weights the background and the observations have in the analysis). A way to include properties of the data assimilation system in the SV computation was suggested by Barkmeijer et al. (1999), who proposed using an analysis error covariance matrix in a generalized SV computation. Gelaro et al. (2002) indeed showed that using this norm increases the similarity between the phase-space directions spanned by the data assimilation and by the leading SVs during targeting cases, but no conclusions were drawn on the impact on the forecast error.

A promising new way to use analysis-error information to define target areas, based on the forecast sensitivity to the observations, has been proposed by Baker and Daley (2000) and Doerenbecher and Bergot (2001). Such a technique determines whether the forecast is sensitive to the background field, to the observations, or to both, avoiding mis-sampling and inefficient use of extra observations. Work along this line should be encouraged.

## Acknowledgments

We are very grateful to two anonymous referees for their very helpful comments. We thank Erik Andersson for improving the manuscript. The experimentation was made possible thanks to the technical support of Jan Haseler. The figures were skillfully improved by Rob Hine.

## REFERENCES

Baker, N. L., and R. Daley, 2000: The observation-targeting problem. *Quart. J. Roy. Meteor. Soc.*, **126**, 1431–1454.

Barkmeijer, J., R. Buizza, and T. N. Palmer, 1999: 3D-Var Hessian singular vectors and their potential use in the ECMWF Ensemble Prediction System. *Quart. J. Roy. Meteor. Soc.*, **125**, 2333–2351.

Bishop, C., and Z. Toth, 1999: Ensemble transformation and adaptive observations. *J. Atmos. Sci.*, **56**, 1748–1765.

Bishop, C., B. J. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. *Mon. Wea. Rev.*, **129**, 420–436.

Buizza, R., 1994: Sensitivity of optimal unstable structures. *Quart. J. Roy. Meteor. Soc.*, **120**, 429–451.

Buizza, R., and T. N. Palmer, 1995: The singular vector structure of the atmospheric global circulation. *J. Atmos. Sci.*, **52**, 1434–1456.

Buizza, R., and A. Montani, 1999: Targeting observations using singular vectors. *J. Atmos. Sci.*, **56**, 2965–2985.

Buizza, R., R. Gelaro, F. Molteni, and T. N. Palmer, 1997: The impact of increased resolution on predictability studies with singular vectors. *Quart. J. Roy. Meteor. Soc.*, **123**, 1007–1033.

Doerenbecher, A., and T. Bergot, 2001: Sensitivity to observations applied to FASTEX cases. *Nonlinear Processes Geophys.*, **8**, 467–481.

Ehrendorfer, M., and J. J. Tribbia, 1997: Optimal prediction of forecast error covariances using singular vectors. *J. Atmos. Sci.*, **54**, 286–313.

Emanuel, K., and Coauthors, 1995: Report of the first prospectus development team of the U.S. Weather Research Program to NOAA and the NSF. *Bull. Amer. Meteor. Soc.*, **76**, 1194–1208.

Gelaro, R., R. Buizza, T. N. Palmer, and E. Klinker, 1998: Sensitivity analysis of forecast error and the construction of optimal perturbations using singular vectors. *J. Atmos. Sci.*, **55**, 1012–1037.

Gelaro, R., R. H. Langland, G. D. Rohaly, and T. E. Rosmond, 1999: An assessment of the singular-vector approach to targeted observing using the FASTEX data set. *Quart. J. Roy. Meteor. Soc.*, **125**, 3299–3327.

Gelaro, R., T. E. Rosmond, and R. Daley, 2002: Singular vector calculations with an analysis error variance metric. *Mon. Wea. Rev.*, **130**, 1166–1186.

Gilmour, I., L. A. Smith, and R. Buizza, 2001: Linear regime duration: Is 24 hours a long time in synoptic weather forecasting? *J. Atmos. Sci.*, **58**, 3525–3539.

Jaubert, G., C. Piriou, S. M. Loehrer, A. Petitpa, and J. M. Moore, 1999: Development and quality control of the FASTEX data archive. *Quart. J. Roy. Meteor. Soc.*, **125**, 3165–3188.

Klinker, E., F. Rabier, G. Kelly, and J.-F. Mahfouf, 2000: The ECMWF operational implementation of four-dimensional variational assimilation. III: Experimental results and diagnostics with operational configuration. *Quart. J. Roy. Meteor. Soc.*, **126**, 1191–1218.

Langland, R. H., and G. D. Rohaly, 1996: Adjoint-based targeting of observations for FASTEX cyclones. Preprints, *Seventh Conf. on Mesoscale Processes*, Reading, United Kingdom, Amer. Meteor. Soc., 369–371.

Langland, R. H., R. Gelaro, G. D. Rohaly, and M. A. Shapiro, 1999: Targeted observations in FASTEX: Adjoint-based targeting procedures and data impact experiments in IOP17 and IOP18. *Quart. J. Roy. Meteor. Soc.*, **125**, 3241–3270.

Leutbecher, M., J. Barkmeijer, T. N. Palmer, and A. J. Thorpe, 2002: Potential improvement to forecasts of two severe storms using targeted observations. *Quart. J. Roy. Meteor. Soc.*, **128**, 1641–1670.

Mahfouf, J.-F., 1999: Influence of physical processes on the tangent linear approximation. *Tellus*, **51A**, 147–166.

Mahfouf, J.-F., and F. Rabier, 2000: The ECMWF operational implementation of four-dimensional variational assimilation. Part II: Experimental results with improved physics. *Quart. J. Roy. Meteor. Soc.*, **126**, 1171–1190.

Majumdar, S. J., C. Bishop, R. Buizza, and R. Gelaro, 2002: A comparison of ETKF targeting guidance with ECMWF and NRL TE-SV targeting guidance. *Quart. J. Roy. Meteor. Soc.*, **128**, 2527–2550.

Molteni, F., R. Buizza, T. N. Palmer, and T. Petroliagis, 1996: The ECMWF Ensemble Prediction System: Methodology and validation. *Quart. J. Roy. Meteor. Soc.*, **122**, 73–119.

Montani, A., A. J. Thorpe, R. Buizza, and P. Unden, 1999: Forecast skill of the ECMWF model using targeted observations during FASTEX. *Quart. J. Roy. Meteor. Soc.*, **125**, 3219–3240.

Palmer, T. N., R. Gelaro, J. Barkmeijer, and R. Buizza, 1998: Singular vectors, metrics, and adaptive observations. *J. Atmos. Sci.*, **55**, 633–653.

Pu, Z.-X., and E. Kalnay, 1999: Targeting observations with the quasi-inverse linear and adjoint NCEP global models: Performance during FASTEX. *Quart. J. Roy. Meteor. Soc.*, **125**, 3329–3337.

Pu, Z.-X., E. Kalnay, J. Sela, and I. Szunyogh, 1997: Sensitivity of forecast errors to initial conditions with a quasi-inverse linear method. *Mon. Wea. Rev.*, **125**, 2479–2503.

Rabier, F., E. Klinker, P. Courtier, and A. Hollingsworth, 1996: Sensitivity of forecast errors to initial conditions. *Quart. J. Roy. Meteor. Soc.*, **122**, 121–150.

Rabier, F., H. Järvinen, E. Klinker, J.-F. Mahfouf, and A. Simmons, 2000: The ECMWF operational implementation of four-dimensional variational assimilation. Part I: Experimental results with simplified physics. *Quart. J. Roy. Meteor. Soc.*, **126**, 1143–1170.

Ralph, F. M., and Coauthors, 1998: The use of tropospheric profiling in CALJET. Preprints, *Fourth Int. Symp. on Tropospheric Profiling: Needs and Technologies*, Snowmass, CO, University of Colorado, 258–260.

Snyder, C., 1996: Summary of an informal workshop on adaptive observations and FASTEX. *Bull. Amer. Meteor. Soc.*, **77**, 953–961.

Szunyogh, I., Z. Toth, S. J. Majumdar, R. Morss, C. Bishop, and S. Lord, 1999: Ensemble-based targeted observations during NORPEX. Preprints, *Third Symp. on Integrated Observing Systems*, Dallas, TX, Amer. Meteor. Soc., 74–78.

Szunyogh, I., Z. Toth, R. E. Morss, S. J. Majumdar, B. J. Etherton, and C. H. Bishop, 2000: The effect of targeted dropsonde observations during the 1999 Winter Storm Reconnaissance Program. *Mon. Wea. Rev.*, **128**, 3520–3537.

Szunyogh, I., Z. Toth, A. V. Zimin, S. J. Majumdar, and A. Persson, 2002: Propagation of the effect of targeted observations: The 2000 Winter Storm Reconnaissance Program. *Mon. Wea. Rev.*, **130**, 1144–1165.

Thorpe, A. J., and M. A. Shapiro, 1995: FASTEX: Fronts and Atlantic Storm Track Experiment. The science plan. Centre National de Recherches Meteorologiques FASTEX Project Office Tech. Rep., 25 pp.

Toth, Z., I. Szunyogh, S. Majumdar, R. Morss, B. Etherton, C. Bishop, and S. Lord, 1999: The 1999 Winter Storm Reconnaissance Program. Preprints, *13th Conf. on Numerical Weather Prediction*, Denver, CO, Amer. Meteor. Soc., 27–32.

Toth, Z., and Coauthors, 2002: Adaptive observations at NCEP: Past, present and future. Preprints, *Symp. on Observations, Data Assimilation, and Probabilistic Prediction*, Orlando, FL, Amer. Meteor. Soc., 185–190.

## APPENDIX

### Mathematical Definitions

#### Scalar product and energy norm

Consider the *N*-dimensional phase space of vectors **x** whose elements *x*_{j} are the upper-level vorticity, divergence, temperature, and logarithm of surface pressure at different latitude, longitude, and vertical coordinates. The total energy norm is defined as

‖**x**‖²_{E} = ⟨**x**; **Ex**⟩ = Σ_{j} *E*_{j} *x*²_{j},

where **E** = diag(*E*_{j}) is a total energy weight matrix (Buizza and Palmer 1995) and *x*_{j} is the *j*th component of the state vector **x**.
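As a minimal numerical illustration (the weights and the state vector below are illustrative, not the actual Buizza–Palmer energy weights), the squared norm is a weighted sum of squared state-vector components:

```python
# Sketch: total-energy norm ||x||_E^2 = sum_j E_j * x_j^2 with a diagonal
# weight matrix E = diag(E_j).  The weights E_j and the state vector x are
# illustrative stand-ins.
import numpy as np

E_diag = np.array([2.0, 1.0, 0.5, 1.5])   # illustrative weights E_j
x = np.array([1.0, -2.0, 4.0, 0.0])       # illustrative state vector

energy_sq = np.sum(E_diag * x**2)         # <x; E x>
# Equivalently, via the full diagonal matrix:
energy_sq_mat = x @ np.diag(E_diag) @ x
```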

#### Local projection operator

The local projection operator *W* is defined as

[*W*(*λ*, *φ*)**x**]_{j} = *w*(*λ*) *w*(*φ*) *x*_{j},

where *λ* and *φ* are the latitude and longitude coordinates, **x** is a state vector, and *w*(*τ*) is the following weight function:

*w*(*τ*) = 1 for *τ*_{1} ≤ *τ* ≤ *τ*_{2}, and *w*(*τ*) = 0 otherwise.

The two pairs of coordinates (*λ*_{1} = 20°N, *λ*_{2} = 60°N) and (*φ*_{1} = 140°E, *φ*_{2} = 240°E), and the two pairs of coordinates (*λ*_{1} = 30°N, *λ*_{2} = 60°N) and (*φ*_{1} = 230°E, *φ*_{2} = 260°E), define the two geographical domains used in this study; the latter coincides with the FVA (see text).
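The operator can be sketched on a toy grid (the grid spacing and the uniform field below are illustrative; the box edges are the FVA coordinates quoted in the text):

```python
# Sketch: a local projection operator W(lat, lon) that keeps grid-point
# values inside a lat-lon box and zeroes them outside.  The grid and the
# uniform field are illustrative.
import numpy as np

def w(tau, t1, t2):
    """Weight function: 1 for t1 <= tau <= t2, 0 otherwise."""
    return np.where((tau >= t1) & (tau <= t2), 1.0, 0.0)

lats = np.linspace(0.0, 90.0, 10)          # illustrative latitude grid
lons = np.linspace(0.0, 350.0, 36)         # illustrative longitude grid
field = np.ones((lats.size, lons.size))    # illustrative state field

# FVA box from the text: 30-60N, 230-260E
mask = np.outer(w(lats, 30.0, 60.0), w(lons, 230.0, 260.0))
projected = mask * field                   # [W(lat, lon) x]_j = w(lat) w(lon) x_j
```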

#### Singular vector definition

Let **x**_{0} be a vector representing a model initial state, and **x** its 48-h linear evolution:

**x** = *L***x**_{0},

where *L* is the tangent model forward propagator. Using the local projection operator *W*, the total energy norm can be computed inside a specific area (local energy):

‖*W***x**‖²_{E} = ⟨*W***x**; **E***W***x**⟩ = ⟨*WL***x**_{0}; **E***WL***x**_{0}⟩.

The singular vectors are the phase-space directions along which this local energy is maximized. The first *m* evolved singular vectors are *υ*_{i} = *L υ*^{0}_{i}, where the initial-time singular vectors *υ*^{0}_{i} are ordered with decreasing singular value *σ*_{i}:

*σ*_{1} ≥ *σ*_{2} ≥ … ≥ *σ*_{m}.

In this study, *m* = 10 singular vectors are computed using a simplified linear scheme simulating surface drag and vertical diffusion at T63 resolution and 31 model levels.
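On a toy system, the same construction reduces to the SVD of the operator **E**^{1/2}*WL***E**^{−1/2}: a sketch, assuming diagonal **E** and a small random propagator in place of the T63L31 tangent model:

```python
# Sketch of the singular-vector computation on a toy system: the SVD of
# E^{1/2} W L E^{-1/2} yields initial perturbations of unit total-energy
# norm whose linear evolution maximizes the local energy inside the
# projection domain.  L, W, and E are small illustrative matrices.
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
L = rng.standard_normal((n, n))                         # toy tangent propagator
W = np.diag([1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])  # toy local projection
E_half = np.diag(rng.uniform(0.5, 2.0, n))              # E^{1/2} (diagonal)
E = E_half @ E_half

A = E_half @ W @ L @ np.linalg.inv(E_half)
U, s, Vt = np.linalg.svd(A)

v0 = np.linalg.inv(E_half) @ Vt.T[:, :m]   # initial-time SVs, unit E-norm
sigma = s[:m]                              # decreasing singular values
v = L @ v0                                 # evolved singular vectors

# The local energy of each evolved SV equals sigma_i squared:
local_energy = np.array([(W @ v[:, i]) @ E @ (W @ v[:, i]) for i in range(m)])
```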

#### Pseudoinverse initial perturbation

The pseudoinverse initial perturbation *δp* is defined as the initial perturbation whose 48-h linear evolution is −*δe*^{c}, where *δe*^{c} is the projection of the forecast error *e*^{c} onto the first 10 evolved singular vectors. This perturbation can be written in terms of the initial-time singular vectors as follows:

*δp* = −Σ^{10}_{i=1} *σ*^{−2}_{i} ⟨*e*^{c}; **E***υ*_{i}⟩ *υ*^{0}_{i}.
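The construction can be checked on a toy propagator. In this sketch (illustrative matrices, **E** taken as the identity for brevity), *δp* is built from the leading singular vectors so that its linear evolution cancels the component of a "forecast error" lying in the evolved-SV subspace:

```python
# Sketch of the pseudoinverse construction: dp is built from the leading SVs
# of a toy propagator so that L dp = -de, the negative of the forecast-error
# projection onto the evolved-SV subspace.  E is the identity here; all
# quantities are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 3
L = rng.standard_normal((n, n))     # toy tangent propagator
U, s, Vt = np.linalg.svd(L)
v0 = Vt.T[:, :m]                    # initial-time SVs, unit norm
v = L @ v0                          # evolved SVs, with norm s_i

e = rng.standard_normal(n)          # stand-in for the forecast error e^c

# de: projection of e onto the first m evolved SVs (orthonormal v_i / s_i).
de = sum((e @ (v[:, i] / s[i])) * (v[:, i] / s[i]) for i in range(m))

# Pseudoinverse: dp = -sum_i sigma_i^{-2} <e; v_i> v0_i, so that L dp = -de.
dp = -sum((e @ v[:, i]) / s[i] ** 2 * v0[:, i] for i in range(m))
```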

Normalized forecast error RE(·), with error measured in terms of a vertically integrated total-energy norm, normalized by the control forecast error, and averaged over the FVA [Eq. (6)], for the 10 NORPEX cases. Black columns [RE(*f*^{d})], white columns [RE(*f*^{4})], and gray columns [RE(*f*^{q})] show the impact of different components of the dropsonde-induced analysis increments, where the superscript *d* denotes the full dropsonde-induced analysis increments, 4 denotes the dropsonde-induced analysis increments inside *T* at T63 resolution, and *q* is as 4 but with the humidity field of the D analysis

Citation: Journal of the Atmospheric Sciences 60, 16; 10.1175/1520-0469(2003)060<1927:FSOTOA>2.0.CO;2

Normalized forecast error RE(·), with error measured in terms of a vertically integrated total-energy norm, normalized by the control forecast error and averaged over the FVA [Eq. (6)]. Black columns [RE(*f*^{4})], white columns [RE(*f*^{3})], and gray columns [RE(*f*^{3⊥})] show the impact of different components of the dropsonde-induced analysis increments, where the superscript 4 denotes the dropsonde-induced analysis increments inside *T* at T63 resolution, 3 denotes the dropsonde-induced analysis increments projecting onto the SV subspace, and 3⊥ denotes the complement of 3

Normalized forecast error RE(·), with error measured in terms of a vertically integrated total-energy norm, normalized by the control forecast error, and averaged over the FVA [Eq. (6)]. Black columns [RE(*f*^{4})], white columns [RE(*f*^{1})], and gray columns [RE(*f*^{2})] show the impact of different components of the dropsonde-induced analysis increments, where the superscript 4 denotes the dropsonde-induced analysis increments inside *T* at T63 resolution, 1 denotes the pseudoinverse perturbation, and 2 denotes the dropsonde-induced analysis increments projecting onto the pseudoinverse

Modulus of the relative forecast error |RE(*f*^{3})| vs DLE. Labels indicate the NORPEX campaign number, and the regression line *Y* = 0.63*X* + 0.008 is shown solid

Case 5, initial state 0000 UTC 18 Feb 1998, 500-hPa geopotential height fields. (a) Perturbation *δa*, contours every 2 m. (b) Difference between the 2-day forecast absolute error of *f*^{4} (started from *a*^{4}_{0}) and *f*^{c}. (c) Perturbation *δa*_{SV}, contours every 0.4 m. (d) Difference between the 2-day forecast absolute error of *f*^{3} and *f*^{c}, contours every 6 m. (e) Difference between the 2-day forecast absolute error of *f*^{3⊥} and *f*^{c}, contours every 6 m. Shaded contours are negative

Case 5, initial state 0000 UTC 18 Feb 1998, 500-hPa geopotential height fields. (a) Perturbation *δp*, contours every 1.2 m. (b) Difference between the 2-day forecast absolute error of *f*^{1} (started from *a*^{1}_{0}) and *f*^{c}. (c) Perturbation *δa*_{‖}, contours every 0.08 m. (d) Difference between the 2-day forecast absolute error of *f*^{2} and *f*^{c}, contours every 1 m. Shaded contours are negative

Singular vector location (defined as the average total energy weighted by the amplification factor) and dropsondes' locations for (a) case 5 (18 Feb) and (b) case 6 (20 Feb)

Case 6, initial state 0000 UTC 20 Feb 1998. (a) Perturbation *δa*, contours every 5 m. (b) Difference between the 2-day forecast absolute error of *f*^{4} (started from *a*^{4}_{0}) and *f*^{c}. (c) Perturbation *δa*_{SV}, contours every 0.5 m. (d) Difference between the 2-day forecast absolute error of *f*^{3} and *f*^{c}, contours every 4 m. (e) Difference between the 2-day forecast absolute error of *f*^{3⊥} and *f*^{c}, contours every 4 m. Shaded contours are negative

Case 6, initial state 0000 UTC 20 Feb 1998. (a) Perturbation *δp*, contours every 1 m. (b) Difference between the 2-day forecast absolute error of *f*^{1} (started from *a*^{1}_{0}) and *f*^{c}. (c) Perturbation *δa*_{‖}, contours every 0.5 m. (d) Difference between the 2-day forecast absolute error of *f*^{2} and *f*^{c}, contours every 3 m. Shaded contours are negative

Amplitude of *δa*_{SV} relative to the dropsonde-induced perturbation. [Cases 1 and 6 (boldfaced) show a peak value of *ψ* = 9%]

Amplitude of, and angle between, the vectors *δa* and *δp*. [Only cases 6 and 7 (boldfaced) show similarity between the dropsonde-induced difference and the pseudoinverse perturbation]