## 1. Introduction

Numerical weather prediction (NWP) is the process by which a future state of the atmosphere is estimated by numerically integrating a nonlinear dynamical model forward in time from an estimate of the current state of the atmosphere. Many uncertainties may be associated with this process. First, the numerical model is imperfect due to insufficient knowledge or imperfect parameterization of the physical processes involved and imperfect numerical solution of the governing partial differential equations. Second, the initial condition of the dynamical model, usually called an analysis, is imperfect. This initial condition is produced by a data assimilation algorithm, which combines observations and prior information (typically a previous numerical forecast), taking into account the uncertainties associated with each. Understanding the characteristics of the uncertainties involved in NWP has been recognized as a fundamental aspect of producing good forecasts.

One way to reduce the forecast error is to improve the initial analysis by enhancing the type and number of observations. The goal of adaptive (or targeted) observations is to decrease the forecast error by placing observations in regions where additional observations are expected to improve a forecast of interest. These regions may be considered “sensitive” in the sense that changes to the initial conditions in these regions are expected to have a larger effect on a particular measure of forecast skill than changes in other regions. To identify sensitive regions for adaptive observations, information about both the uncertainty in the initial conditions and the dynamics of the flow has been used. Current strategies fall into three groups: those based only on uncertainty information (uncertainty-based adaptive observation strategies), those based only on dynamics information (dynamics-based adaptive observation strategies), and those using both (joint uncertainty–dynamics adaptive observation strategies). The ideal adaptive observation strategy would include information about both uncertainty and dynamics.

Uncertainty-based targeting strategies use estimates of analysis and forecast errors to identify regions for adaptive observations. The analysis and forecast errors can be estimated using, for example, ensemble techniques (e.g., Lorenz and Emanuel 1998; Hansen and Smith 2000; Morss et al. 2001). Berliner et al. (1999) provide the theory that enables the calculation of the effect of adaptive observations on analysis and forecast error statistics under the assumption of a normal probability distribution and linear evolution of forecast errors. Using that statistical theory, Hamill and Snyder (2002) demonstrate that the error covariance information from the ensemble Kalman filter can be used to design adaptive observing networks. The adaptive observation strategy used in Hamill and Snyder (2002) is an example of a joint uncertainty–dynamics adaptive observation strategy. The ensemble transform (Bishop and Toth 1999) or ensemble transform Kalman filter (Bishop et al. 2001) are also joint uncertainty–dynamics adaptive observation strategies, since they consider both uncertainty information and dynamics.

For dynamics-based adaptive observation strategies, potential vorticity (PV), gradient sensitivity, and singular vectors (SVs) are the tools that have been proposed. Potential vorticity analysis for adaptive observations identifies sensitive regions subjectively by following the precursors of developing atmospheric systems, which are typically located near large quasi-horizontal PV gradients (e.g., Snyder 1996); however, this adaptive strategy has not been tested. The gradient sensitivity is the gradient of some forecast measure with respect to the model control variables, that is, initial conditions, boundary conditions, and model parameters (e.g., Errico 1997), or with respect to the observations (Baker and Daley 2000). Gradient sensitivity strategies for adaptive observations have been tested, with some success (e.g., Bergot 1999; Bergot et al. 1999; Pu and Kalnay 1999).

Singular vectors are the perturbations that amplify most rapidly over a specified time period, given an initial norm, a final norm, a basic state, and a forecast model. Because of this rapid-growth property, SVs have been used to identify regions of large sensitivity to small perturbations for adaptive observations (e.g., Palmer et al. 1998). The sensitive regions indicated by the SVs depend on the norm chosen: lower to midtroposphere for the energy and streamfunction variance norms (e.g., Mukougawa and Ikeda 1994; Buizza and Palmer 1995; Hartmann et al. 1995; Hoskins et al. 2000; Morgan 2001), upper and lower boundaries for the potential enstrophy norm (e.g., Kim and Morgan 2002), and near the tropopause for the analysis error covariance metric (AECM; e.g., Barkmeijer et al. 1998; Hamill et al. 2003). The most appropriate norm at the initial time to calculate SVs for adaptive observations is the AECM, since in a tangent linear framework, the AECM SVs evolve into the eigenvectors of the forecast error covariance matrix at the later time (e.g., Ehrendorfer and Tribbia 1997). The AECM SVs have been used to construct initial perturbations for ensemble forecasts (e.g., Barkmeijer et al. 1998, 1999; Hamill et al. 2003) but have not been used for adaptive observations. Using an approach taken by Lorenz and Emanuel (1998), Hansen and Smith (2000) used the analysis error variance metric to calculate SVs for targeting. Singular vectors using metrics that do not incorporate analysis error information, such as an energy norm, have been tested for adaptive observations, producing moderately encouraging results (e.g., Bergot 1999; Bergot et al. 1999; Buizza and Montani 1999; Gelaro et al. 1999; Langland et al. 1999; Montani et al. 1999).

Because both gradient sensitivities and SVs are calculated using the adjoint of the forward tangent propagator of the numerical model, gradient sensitivities and SVs are called *adjoint-based sensitivities.* Most of the studies that have used adjoint-based sensitivities to identify sensitive regions for targeted observations have assumed that the actual error that projects onto the adjoint-based sensitivities contributes a significant fraction of the forecast error (e.g., Gelaro et al. 1999). Since the structure and evolution of the actual analysis error are not known in real situations, it is not clear how valid this assumption is.

In this paper, the structure and evolution of both analysis error and adjoint-based sensitivities are compared following a typical synoptic event under the perfect model assumption. The results show that the projection of the evolving SV^{1} onto the forecast error increases during the SV's evolution. The evolved SV is thus very similar to the forecast error, which suggests the possibility of using the evolved SV for targeting observations. Unlike the initial SV strategy, which as currently implemented is primarily a dynamics-based adaptive strategy, the evolved SV identifies regions with forecast error, and thus incorporates uncertainty information. Furthermore, since the evolved SV is generated from the initial SV, the evolved SV may be implemented as a joint uncertainty–dynamics strategy. Previous work has shown that the magnitude of the analysis error is crucial in limiting the effectiveness of dynamics-based strategies (especially the initial SV strategy; e.g., Hansen and Smith 2000); since the evolved SV strategy includes information about the possible analysis error, the evolved SV strategy may be more effective, particularly when analysis error is small. In order to test the feasibility of the evolved SV strategy for adaptive observations, the evolved SV strategy is implemented along with several other dynamics-based and uncertainty-based strategies in observation system simulation experiments (OSSEs) for several simulated situations. The average reduction in forecast error produced by each of the strategies is evaluated and compared.

The OSSEs are run using the National Center for Atmospheric Research (NCAR) quasigeostrophic (QG) channel model, under the perfect model assumption. This QG model has been used in various atmospheric dynamics, predictability, and data assimilation studies (e.g., Rotunno and Bao 1996; Hamill et al. 2000; Hamill and Snyder 2000, 2002; Hamill et al. 2002; Morss et al. 2001; Morss and Emanuel 2002; Snyder et al. 2003; Snyder and Hamill 2003). Given that the current approximation of AECM is suboptimal and that the sensitive regions indicated by the AECM SV tend to correspond to those regions indicated by the potential enstrophy SV (i.e., Kim and Morgan 2002; Barkmeijer et al. 1998; Hamill et al. 2003), the potential enstrophy norm is chosen to calculate SVs.^{2} The adjoint-based sensitivities are calculated using the adjoint model developed for the NCAR QG channel model (Kim 2002). The three-dimensional variational (3DVAR) data assimilation system developed for the NCAR QG model (Morss et al. 2001) is used to assimilate observations.

Section 2 describes the NCAR QG channel model, the model's adjoint, and the 3DVAR data assimilation system, and provides an overview of the generation of experimental states and the experimental design. Further detail on the design of the specific experiments in sections 4 and 5 is presented at the beginning of those sections. The mathematical formulations of the adjoint-based sensitivities and of the relationships between the error and the adjoint-based sensitivities are presented in section 3. Section 4 compares the evolution of the analysis error and the adjoint-based sensitivities for a typical synoptic situation. In section 5, the impact of the nonadaptive and adaptive strategies on forecast error is presented. Section 6 contains a summary and discussion.

## 2. Experimental framework

### a. Quasigeostrophic channel model

The model is a zonally periodic, QG gridpoint channel model on a beta plane developed at NCAR. The model variables are PV in the interior and potential temperature at the upper and lower boundaries. The main forcing is a relaxation to a specific zonal mean reference state, the Hoskins–West jet (Hoskins and West 1979). Fourth-order horizontal diffusion is applied to the entire model domain, and Ekman pumping is applied at the lower boundary. There is no orography or seasonal cycle. Stratification is constant and the tropopause is rigid with varying temperature. The specific formulations of the model can be found in Kim (2002). For the calculations to follow, the model was discretized into five levels in a troposphere of 9-km depth (Table 1). The horizontal resolution is 250 km in a domain that is 16 000 km in circumference and 8000 km in width, giving 65 × 33 horizontal grid points. The maximum zonal wind of the Hoskins–West jet is 60 m s^{−1} at the tropopause, and the relaxation time is 20 days. The Brunt–Väisälä frequency is 1.1293 × 10^{−2} s^{−1}, the Coriolis parameter 10^{−4} s^{−1}, and the meridional gradient of the Coriolis parameter 1.6 × 10^{−11} m^{−1} s^{−1}. The horizontal diffusion coefficient is 9.26 × 10^{15} m^{4} s^{−1} and vertical eddy diffusion coefficient is 5 m^{2} s^{−1}.
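For reference, the model configuration described above can be collected into a single structure; the sketch below does so in Python. The dictionary layout and field names are illustrative choices, not taken from the NCAR model code.

```python
# Configuration of the NCAR QG channel model as specified in section 2a.
# The dictionary layout and names are illustrative, not from the original code.
QG_CONFIG = {
    "num_levels": 5,                 # vertical levels in a 9-km troposphere
    "depth_km": 9.0,
    "dx_km": 250.0,                  # horizontal resolution
    "circumference_km": 16000.0,     # zonal extent (periodic)
    "width_km": 8000.0,              # meridional extent
    "jet_max_wind_ms": 60.0,         # Hoskins-West jet maximum at tropopause
    "relaxation_days": 20.0,
    "brunt_vaisala_s": 1.1293e-2,    # Brunt-Vaisala frequency
    "coriolis_s": 1.0e-4,            # Coriolis parameter
    "beta_m_s": 1.6e-11,             # meridional gradient of Coriolis parameter
    "horiz_diffusion_m4s": 9.26e15,  # fourth-order diffusion coefficient
    "vert_eddy_diffusion_m2s": 5.0,
}

def grid_points(cfg):
    """Horizontal grid dimensions implied by the stated resolution and domain."""
    nx = int(cfg["circumference_km"] / cfg["dx_km"]) + 1
    ny = int(cfg["width_km"] / cfg["dx_km"]) + 1
    return nx, ny
```

With the stated 250-km resolution, `grid_points` recovers the 65 × 33 horizontal grid quoted above.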

### b. Adjoint of quasigeostrophic channel model

The adjoint model (𝗠^{T}, where T indicates the transpose) of the forward tangent propagator (𝗠) of the NCAR QG channel model is developed and used to calculate the adjoint-based sensitivities. The adjoint model is developed not only for the linearized version of the nonlinear advection of the interior PV and the boundary potential temperatures, but also for the horizontal diffusion, Ekman pumping, and relaxation, which are linear. The nonlinear basic state is updated at every time step of the tangent linear and adjoint model integrations. Detailed explanations of the adjoint coding, the tangent linear check, and the adjointness check for particular perturbations and basic states can be found in Kim (2002).
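The adjointness check mentioned above verifies that 〈𝗠**x**, **y**〉 = 〈**x**, 𝗠^{T}**y**〉 for arbitrary perturbations **x** and **y**. A minimal sketch of such a check, using a random matrix as a stand-in for the forward tangent propagator (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40                                              # toy state dimension
M = np.eye(n) + 0.2 * rng.standard_normal((n, n))   # stand-in tangent propagator

x = rng.standard_normal(n)  # arbitrary perturbation
y = rng.standard_normal(n)  # arbitrary adjoint-side input

# Adjointness check: <Mx, y> must equal <x, M^T y> for every x and y.
lhs = np.dot(M @ x, y)
rhs = np.dot(x, M.T @ y)
assert abs(lhs - rhs) < 1e-10 * max(1.0, abs(lhs))
```

In practice the same identity is tested with the actual tangent linear and adjoint codes in place of the matrix products, for several perturbations and basic states.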

### c. Data assimilation system

A 3DVAR data assimilation system is used to assimilate the simulated rawinsonde observations into the model state. The 3DVAR generates the analysis by minimizing a cost function that combines the forecast and observation deviations from a desired analysis, weighted by the inverses of the corresponding background and observation error covariance matrices. The background error covariances are assumed to be constant in time, to be diagonal in spectral space, and to have separable horizontal and vertical structures as in Parrish and Derber (1992). The simulated rawinsonde observations are wind and temperature at all model levels with random observation errors added. The variances of the observation error covariance are taken from Parrish and Derber (1992) and the vertical correlations between observation errors are obtained from Bergman (1979). Further detail on the 3DVAR data assimilation algorithm, simulated rawinsonde observations, and error covariance matrices can be found in Morss (1999).
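A minimal sketch of the 3DVAR analysis step described above, assuming a linear observation operator and Gaussian statistics; for a linear operator, minimizing the quadratic cost function has a closed-form solution, which is used here in place of an iterative minimization. Function and variable names are hypothetical:

```python
import numpy as np

def threedvar_analysis(xb, y, H, B, R):
    """Minimize J(x) = (x - xb)^T B^-1 (x - xb) + (y - Hx)^T R^-1 (y - Hx).

    xb : background state (prior forecast)
    y  : observations
    H  : linear observation operator
    B, R : background and observation error covariance matrices

    For linear H the minimizer has the closed form
    xa = xb + B H^T (H B H^T + R)^-1 (y - H xb).
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
    return xb + K @ (y - H @ xb)
```

With B = R = I and H = I, the analysis is the average of the background and the observations, as expected when both information sources are weighted equally.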

### d. Generation of experimental states and overview of the experimental design

A case is selected from a set of states generated from nonlinear integrations of the QG model. We identify this arbitrary state as the *true state* for our experiments. A *model state* is generated by modifying the true state with random noise and subsequently assimilating simulated rawinsonde observations every 12 h with the 3DVAR data assimilation scheme during a spinup period. The resulting model states are then used in the experiments. A number of different model states can be realized by varying 1) the spinup time of the nonlinear model, 2) the configuration and number of observation locations, and 3) the observation errors used to generate the observations assimilated at each assimilation interval. Table 2 shows an overview of the different experimental configurations in sections 4 and 5. For the experiment in section 4, 25 model states are generated by varying the fixed observation locations and the fixed observation errors. For the adaptive observation experiments in sections 5b and 5c, the spinup time is different from that in section 4 in order to investigate the validity of each adaptive strategy for independent cases.

Once the truth and model states are generated at the beginning time of each experiment, these states at subsequent times are generated by integrating both states forward for 48 h using the nonlinear QG model. The *analysis error* in sections 4 and 5 is calculated by averaging the differences between the true state and the individual model states in Table 2 at the beginning time of the experiments. The *forecast error* in sections 4 and 5 is calculated by averaging the differences between the true state and the individual model states in Table 2 at subsequent times. A more detailed description of the experimental procedures for each experiment in sections 4 and 5 can be found at the beginning of those sections.

## 3. Mathematical formalism

### a. Gradient sensitivity of forecast error to initial condition

The forecast error measure at the verification time *t*_{f} is written as^{3}

*J* = (1/2)〈**e**(*t*_{f}), **e**(*t*_{f})〉, (3.1)

where **e**(*t*_{f}) [≃ 𝗠**e**(*t*_{0})] is the forward integration of the initial analysis error **e**(*t*_{0}) until the time *t*_{f} by the nonlinear QG model *M*, and 〈 , 〉 denotes the inner product associated with the potential enstrophy norm defined in (3.5). The analysis error **e**(*t*_{0}) is calculated by

**e**(*t*_{0}) = **x**^{a}(*t*_{0}) − **x**^{t}(*t*_{0}),

where **x** = {*q*, *θ*} is the state vector of the model, which includes both the interior PV (*q*) and the boundary potential temperatures (*θ*); **x**^{a} and **x**^{t} are the analysis and true states, respectively. An approximation to the change in *J*, *δJ*, can be obtained from the first term of the Taylor expansion of *J*[**x**(*t*_{f}) + *δ***x**(*t*_{f})] − *J*[**x**(*t*_{f})] as

*δJ* ≃ 〈∇_{tf}*J*, *δ***x**(*t*_{f})〉, (3.2)

where ∇_{tf}*J* [= **e**(*t*_{f})] is the gradient of *J* with respect to the state vector at time *t*_{f} and can be obtained by differentiating (3.1). The change in **x** at time *t*_{f}, *δ***x**(*t*_{f}), is related to the initial perturbation *δ***x**(*t*_{0}) through the forward tangent propagator as *δ***x**(*t*_{f}) ≃ 𝗠*δ***x**(*t*_{0}). Equation (3.2) can be rewritten as

*δJ* ≃ 〈∇_{t0}*J*, *δ***x**(*t*_{0})〉, (3.3)

where ∇_{t0}*J* is the gradient of *J* with respect to the state vector at time *t* = 0 h. The gradient ∇_{t0}*J* may be obtained by “backward” integration of the forecast error at time *t*_{f} using the adjoint of the forward tangent propagator (𝗠^{T}):

∇_{t0}*J* = 𝗠^{T}**e**(*t*_{f}). (3.4)
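As a concrete illustration of the gradient calculation, the sketch below builds a toy linear system in Python: a random matrix stands in for the forward tangent propagator 𝗠 and the Euclidean inner product stands in for the potential enstrophy norm. All names and dimensions are illustrative, not from the NCAR QG code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stand-in forward tangent propagator

e0 = rng.standard_normal(n)          # analysis error e(t0)
ef = M @ e0                          # forecast error e(tf) ~ M e(t0)
J = 0.5 * np.dot(ef, ef)             # forecast error measure

grad_tf = ef                         # gradient at tf: grad J = e(tf)
grad_t0 = M.T @ grad_tf              # "backward" (adjoint) integration

# First-order check: a small initial perturbation changes J by
# approximately <grad_t0, dx0>, as in the Taylor expansion above.
dx0 = 1e-6 * rng.standard_normal(n)
J_pert = 0.5 * np.dot(M @ (e0 + dx0), M @ (e0 + dx0))
assert abs((J_pert - J) - np.dot(grad_t0, dx0)) < 1e-9
```

The leftover discrepancy in the final check is the second-order term (1/2)‖𝗠*δ***x**(*t*_{0})‖², which vanishes quadratically as the perturbation shrinks.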

### b. Potential enstrophy singular vectors

Singular vectors are the perturbations that maximize the ratio of the final norm to the initial norm of a perturbation over the interval ending at *t*_{f} = *τ*_{opt}. In this study, we choose the initial and final norms as the potential enstrophy. The potential enstrophy (square of the disturbance PV, or variance of PV) is defined similarly to Hakim (2000) and Kim and Morgan (2002) as

〈**x**′, **x**′〉 = (1/2) Σ_{i=1}^{N_x} Σ_{j=1}^{N_y} Σ_{k=1}^{N_z} (*q*′_{ijk})^{2}, (3.5)

where *N*_{x}, *N*_{y}, and *N*_{z} are the numbers of *x*, *y*, and *z* grid points, respectively. The SVs maximize the ratio defining *λ*^{2} (the square of the amplification factor *λ*),

*λ*^{2} = 〈𝗠**x**′(*t*_{0}), 𝗠**x**′(*t*_{0})〉/〈**x**′(*t*_{0}), **x**′(*t*_{0})〉, (3.6)

at *t*_{f} = *τ*_{opt}, where **x**′(*t*_{0}) is the initial perturbation. It may be shown that the maximum of this ratio is realized when **x**′(*t*_{0}) is the SV of the forward tangent propagator 𝗠 for the potential enstrophy norm, that is, **x**′(*t*_{0}) satisfies

𝗠^{T}𝗠**x**′(*t*_{0}) = *λ*^{2}**x**′(*t*_{0}). (3.7)

The evolved SVs are obtained by integrating the forward tangent propagator forward in time from the initial SVs **x**′(*t*_{0}) in (3.7).

The composite SV^{4} is calculated as a weighted linear combination of the individual SVs as

composite SV = Σ_{i=1}^{ρ} (*λ*_{i}/*λ*_{1})**x**′(*t*_{0})_{i}, (3.8)

where *ρ* is the number of SVs calculated, *λ*_{i} (*λ*_{1} > *λ*_{2} > … > *λ*_{ρ}) are the singular values associated with the right SVs [**x**′(*t*_{0})_{i}] of the forward tangent propagator 𝗠 at the initial time, and *λ*_{1} is the singular value of the leading SV.
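In a toy linear setting the SVs can be obtained directly from a matrix singular value decomposition; the sketch below uses the Euclidean inner product as a stand-in for the potential enstrophy norm and a random matrix as a stand-in for 𝗠 (all names are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
M = np.eye(n) + 0.3 * rng.standard_normal((n, n))  # stand-in forward tangent propagator

# With a Euclidean stand-in norm, the right singular vectors of M solve
# M^T M v_i = lambda_i^2 v_i, and the left singular vectors are the evolved SVs.
U, s, Vt = np.linalg.svd(M)          # M = U diag(s) V^T; s sorted descending
init_svs = Vt                        # rows: initial (right) SVs x'(t0)_i
evolved_svs = U                      # columns: evolved (left) SVs x'(tf)_i

# The leading SV attains the maximum amplification factor lambda_1.
v1 = Vt[0]
amp = np.linalg.norm(M @ v1) / np.linalg.norm(v1)
assert abs(amp - s[0]) < 1e-10

# Composite SV: the leading SVs weighted by lambda_i / lambda_1.
rho = 18
composite = sum((s[i] / s[0]) * Vt[i] for i in range(rho))
```

For the full QG model an explicit matrix is never formed; the leading SVs are instead found iteratively (e.g., by Lanczos iteration) using repeated tangent linear and adjoint integrations.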

### c. Relationship between the error and the adjoint-based sensitivities

Since the initial SVs form a complete basis for the state space, the analysis error **e**(*t*_{0}) may be expanded as

**e**(*t*_{0}) = Σ_{i} *c*_{i}**x**′(*t*_{0})_{i}, (3.9)

where *c*_{i} [= 〈**x**′(*t*_{0})_{i}, **e**(*t*_{0})〉] are the projection coefficients of the analysis error onto the *i*th SV. Substituting (3.9) into **e**(*t*_{f}) ≃ 𝗠**e**(*t*_{0}) and using (3.4), (3.7), and (3.9), the initial time gradient sensitivity and the initial SVs may be related by

∇_{t0}*J* = Σ_{i} *λ*_{i}*d*_{i}**x**′(*t*_{0})_{i} = Σ_{i} *λ*_{i}^{2}*c*_{i}**x**′(*t*_{0})_{i}, (3.10)

where **x**′(*t*_{f})_{i} are the left SVs of the forward tangent propagator 𝗠 at the final time and *d*_{i} [= 〈**x**′(*t*_{f})_{i}, **e**(*t*_{f})〉 = *λ*_{i}*c*_{i}] are the projection coefficients of the forecast error onto the *i*th SV at the final time. At the final time, the forecast error (and hence the gradient sensitivity ∇_{tf}*J*) is likewise expanded onto the evolved SVs as

∇_{tf}*J* = **e**(*t*_{f}) = Σ_{i} *d*_{i}**x**′(*t*_{f})_{i}. (3.11)
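The projection relations between the error and the SVs can be checked numerically in a toy linear setting, with a random matrix standing in for 𝗠 and the Euclidean inner product standing in for the potential enstrophy norm (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
M = np.eye(n) + 0.3 * rng.standard_normal((n, n))  # stand-in forward tangent propagator
U, s, Vt = np.linalg.svd(M)                        # right SVs (rows of Vt), left SVs (cols of U)

e0 = rng.standard_normal(n)          # analysis error e(t0)
c = Vt @ e0                          # c_i = <x'(t0)_i, e(t0)>

ef = M @ e0                          # forecast error e(tf)
d = U.T @ ef                         # d_i = <x'(tf)_i, e(tf)>

# Final-time projection coefficients are the amplified initial ones: d_i = lambda_i c_i.
assert np.allclose(d, s * c)

# The initial-time gradient expands onto the initial SVs with
# coefficients lambda_i^2 c_i.
grad_t0 = M.T @ ef
assert np.allclose(Vt @ grad_t0, s**2 * c)
```

This makes explicit why a small projection of the analysis error onto the leading SVs can still dominate the forecast error: each coefficient is amplified by its singular value.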

## 4. Comparison of error and adjoint-based sensitivities

In order to describe and compare the evolution of the error and adjoint-based sensitivities from a synoptic perspective, we focus on a typical synoptic case. A state generated by integrating the QG model for 80 days is selected as the true state at *t* = 0 h. The sequence in Fig. 1 shows a horizontal cross section of the streamfunction of the true state. At the initial time, the streamfunction is characterized by a prominent upper trough near grid point (27, 28)^{5} at the top and a corresponding surface cyclone near grid point (29, 23) (Figs. 1a,d, and g). Figures 1b,e, and h and 1c,f, and i show the evolved streamfunction at *t* = 24 h and *t* = 48 h at the top, level 4, and bottom, respectively. The upper trough and its corresponding surface cyclone deepen considerably during the first 24 h and much less so afterward. After *t* = 48 h, the cyclone weakens (not shown).

Five sets of 32 fixed observation locations (left column of Fig. 2) are selected to assimilate the simulated observations. The minimum observation spacing between fixed observation locations is one grid point, and the observation locations are mostly concentrated in the middle of the domain.^{6} For each observation network, five model states are generated by separately assimilating five sets of observations at each assimilation cycle during the 80 days of spinup time. The ensemble of observations is generated similarly to the perturbed-observation ensemble (Hamill et al. 2000; Morss et al. 2001). By this procedure, 25 model states are generated at time *t* = 0 h (Table 2). The five sets of observation networks and random observation errors are used to eliminate the effect of any specific observation network and to include the effect of using different realizations of observation errors to generate the model states. The error states are calculated by averaging the differences between the true state and the individual model states. Like the error states, all the other states (i.e., gradient sensitivity and SVs) in this section are calculated for each of the 25 cases and then averaged.

### a. Evolution of analysis error

From now on we focus on the development of the PV error at the middle level of the domain, since the PV errors at other levels show quite similar characteristics to those at level 3. Figure 3 shows the PV error at level 3 at the indicated times. The error is initially characterized by structures with small amplitude in the middle of the domain, largest amplitude near grid point (28, 22), and large amplitude near the northern and southern boundaries. Shortly after the initial time, the PV error near grid point (28, 22) begins to grow rapidly and propagate eastward, generating a wave train of large error. In contrast, the large error near the northern and southern boundaries neither grows nor propagates during the evolution. The error amplifies by a factor of 1.87 over 48 h for the entire domain (note that the amplification factor is averaged over the 25 cases, which are generated by varying not only the observation realizations but also the fixed observation locations).

Given a perfect model, there are two scenarios leading to large forecast errors at the verification time. In the first, errors that project onto the amplifying SVs of the flow grow very rapidly during the evolution. In the second, initially large errors that do not project onto the amplifying SVs contribute to large forecast error by remaining large or amplifying slowly. In that sense, the error near grid point (28, 22) corresponds to rapidly growing error, and the error near the northern and southern boundaries corresponds to an initially large error that remains large.

### b. Evolution of gradient sensitivity of forecast error to initial condition

Figure 4 shows the gradient sensitivity of the forecast error to the initial and forecast model states at level 3 of the domain at the indicated times. At *t* = 48 h, the gradient sensitivity is, by construction, exactly the same as the 48-h forecast error in Fig. 3f. Even though the concentrated gradient sensitivity structures in the middle of the domain (Fig. 4f) are distributed over a broader area as the backward adjoint integration proceeds, the error structures with large amplitude in the middle of the domain may be identified with the gradient sensitivity during the first 12-h backward integration (cf. Figs. 3d and 3e with Figs. 4d and 4e). After a 24-h backward integration from the final time, the gradient sensitivity structures with large amplitude (Figs. 4a–c) are out of phase with the rapidly growing error in Figs. 3a–c. The error near the northern and southern boundaries in Fig. 3a is out of phase with the gradient sensitivity and only partially indicated by the gradient sensitivity during the evolution. The characteristics of gradient sensitivity at other levels are similar to those at level 3.

The extrema in the gradient sensitivity of forecast error with respect to the initial condition indicate the regions in which small changes in the initial condition produce the largest changes in the final forecast error. The apparent lack of agreement between the initial error in Fig. 3a and the initial gradient sensitivity in Fig. 4a suggests that only a small fraction of the initial error contributes to the final forecast error.

### c. Evolution of potential enstrophy singular vectors

Figure 5 shows the PV of the *τ*_{opt} = 48 h SV composited from the leading 18 SVs and the corresponding evolved SV at level 3 at the indicated times. Initially the SV resembles the gradient sensitivity and is of large spatial scale. The prominent PV maxima at time *t* = 0 h correspond to the extrema of the gradient sensitivity at the same time (Fig. 5a). The evolved SV at *t* = 12 h is of small scale and localized structure mostly concentrated in the middle of the domain (Fig. 5b). After *t* = 12 h, the evolved SV and the rapidly growing forecast error resemble one another (cf. Figs. 5c–f with Figs. 3c–f). The distributions of PV error and SV PV are similar for other levels of the domain. Figure 6 shows the amplification factors of the first 18 *τ*_{opt} = 48 h SVs. The amplification factor of the leading SV is much larger than that of other SVs.

### d. Similarity of error and adjoint-based sensitivities

The normalized projection of the error onto the gradient sensitivity,

〈**e**(*t*), ∇_{t}*J*〉/[‖**e**(*t*)‖ ‖∇_{t}*J*‖], (4.1)

where **e**(*t*) is the error at some arbitrary time *t* from 0 to 48 h, ∇_{t}*J* is the gradient sensitivity at the same time, and ‖·‖ is the potential enstrophy norm, is shown in Fig. 7a. As anticipated from Figs. 3 and 4, the projection decreases rapidly during the first 6 h of backward integration of the adjoint model (Fig. 7a). That the projection of the analysis error onto the gradient sensitivity at *t* = 0 h is small implies that only a small fraction of the analysis error contributes to a significant fraction of the forecast error at time *t* = 48 h. In other words, the analysis error is dominated by decaying, neutral, or slowly amplifying components.
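The normalized projection used throughout Fig. 7 amounts to a cosine similarity between two fields; a one-line sketch, using the Euclidean inner product as a stand-in for the potential enstrophy inner product (the function name is hypothetical):

```python
import numpy as np

def normalized_projection(a, b):
    """Normalized projection <a, b> / (||a|| ||b||) of field a onto field b,
    using a Euclidean stand-in for the potential enstrophy inner product."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
```

The result is 1 for identical fields, 0 for orthogonal ones, so a value approaching 1 during the SV evolution indicates that the evolved SV increasingly resembles the forecast error.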

Figure 7b shows the normalized projection of the error onto the composite SV during the evolution, computed using an expression similar to (4.1). The projection is initially small, but increases rapidly with time. This evolution confirms that a small portion of the analysis error contributes to a considerable portion of the forecast error at *t* = 48 h.

Figure 7c shows the normalized projection of the gradient sensitivity onto the composite SV during the evolution, computed using an expression similar to (4.1). As anticipated from Figs. 3 and 4, the projection is large at the initial and final times but small during the intermediate times. The projection of the gradient sensitivity onto the SV is large at the initial time since those parts of the error that project onto the SV are similar to those parts of the error indicated by the gradient sensitivity at the initial time. The projection of the gradient sensitivity onto the composite SV reaches the same magnitude as that of the error onto the composite SV at time *t* = 48 h, since the error and the gradient sensitivity are identical at that time (compare Figs. 7a and 7b with 7c). The large projection at the initial and final times but small projection at intermediate times is explained by relationships (3.10) and (3.11), which hold at the initial and final times, respectively, but not at intermediate times.

An example for other experimental cases using different initial model states, different observation locations, and different random observation errors shows similar results (Kim 2002). The evolution of the normalized projection of the error onto the composite SV (Fig. 7b) suggests the evolved SV *from a prior forecast* as an adaptive observation strategy since most of the forecast error is identified by the evolved SV. Because forecasts typically provide the background fields for data assimilation, to the extent that evolved SVs and forecast error are similar, knowledge of the distribution of the evolved SVs at a particular forecast time would provide information on where one might anticipate uncertainties (possible errors) in the background field to exist. If additional observations are taken in the regions where the background error is anticipated to be large, the assimilation of those observations should lead to a reduction in analysis error. As a consequence, we propose that evolved SVs be used as a tool for targeting observations.

## 5. Impact of adaptive observations on forecast error

As discussed in section 4, while the initial SV has little similarity to the analysis error, the resemblance between the forecast error and the evolved SV grows as the evolution progresses. This has two implications. The first is that the small fraction of error that projects onto the initial SV contributes most of the forecast error; this suggests that the initial SV can be useful as an adaptive observation strategy. The second implication is that the evolved SV identifies most of the forecast error, and thus may also be an effective adaptive observation strategy. The adaptive strategy using the evolved SV can be divided into two categories based on how the evolved SV is calculated. When *τ*_{opt} = *τ*_{evol},^{7} the evolved SV strategy uses only uncertainty information, but when *τ*_{opt} > *τ*_{evol}, the strategy incorporates both dynamics and uncertainty information. Because the initial SV and evolved SV strategies are based on different approaches, OSSEs utilizing both strategies are necessary to evaluate their performance. The results from OSSEs testing the initial SV and evolved SV adaptive strategies are compared with results from a gradient sensitivity adaptive strategy and a nonadaptive strategy. Unlike the gradient sensitivity adaptive strategy, which uses the actual forecast error as a response function, the initial SV and evolved SV adaptive strategies are implemented under realistic information constraints. An adaptive strategy based on the actual forecast error, which can only be implemented in an idealized setting, is also tested to serve as a benchmark of the other strategies' utility.

### a. Experimental design

#### 1) Observation system simulation experiments

Figure 8 shows a schematic of how the adaptive observation strategies were implemented for the OSSEs. The targeting time (*t*_{T}) is the time at which adaptive observations are to be taken. The lead time is the time, prior to *t*_{T}, at which information must be generated so that locations for adaptive observations may be determined; for these experiments, the lead time is set to 36 h, a typical value for realistic targeting experiments (Majumdar et al. 2002). The time at which the forecasts are verified is denoted as the verification time (*t*_{V}) and is set to 48 h after *t*_{T} (*t*_{T} + 48 h). Note that the control run used to calculate all the adaptive strategies begins at the lead time, except for the evolved SV strategies with 12- and 24-h evolving times.

Different cases are selected for the adaptive observation experiments from those in section 4 to investigate the usefulness of each strategy for independent cases. Five true states that are 3 days apart from each other are generated by spinning up the QG model for 50, 53, 56, 59, and 62 days. The corresponding model states are generated by the procedures described in section 2d and Table 2. For the experiments in section 5b, sets of 16 and 32 fixed, evenly distributed observation locations are used to generate the background fields at *t*_{T} (Fig. 9). For these experiments, five sets of observations (with different random observation errors) are separately assimilated into the model state at each analysis time during the spinup period before *t*_{T}. For the additional experiments in section 5c, five sets of 16 and 32 fixed observation locations (Fig. 2) are used, but only one set of observations, which differs for each set of observation locations, is assimilated. These two types of experiments are performed to investigate the effect of each configuration. In both sets of experiments, the number of background states, generated by different combinations of fixed observation locations and observation errors prior to *t*_{T}, is 25 (Table 2).

Given the 25 background fields at time *t*_{T}, the simulated rawinsonde observations are deployed in regions indicated by various adaptive strategies. In order to eliminate the effect of specific realizations of adaptive observations, five different random observation errors are used to generate the adaptive observations at *t*_{T}. The resulting 125 analysis fields for each adaptive strategy are then integrated forward to generate the corresponding 125 forecast fields. The mean of the forecast error at *t*_{V} for each adaptive strategy is obtained by averaging the differences between the truth and the 125 forecasts generated using that adaptive strategy. In contrast, the mean of the forecast error at *t*_{V} for the nonadaptive strategy is obtained by integrating the 25 analysis fields and by averaging the differences between the truth and 25 model states.

#### 2) Selection of adaptive observation locations

Table 3 summarizes the nonadaptive and adaptive strategies tested in these experiments. The observation locations for the FIXED strategy are the same as the fixed observation locations used during the spinup time. The ERR strategy selects observation locations where the actual forecast error is large for a 36-h forecast ending at *t*_{T} (Fig. 8a). The GRAD-SENS strategy selects observation locations where the gradient sensitivity of the forecast error (at *t*_{V}) to the initial condition (at *t*_{T}) is large (Fig. 8b). In contrast to the ERR and GRAD-SENS strategies, the strategies utilizing SVs (the INIT-SV and EVOL-SV strategies) do not incorporate any error information into the calculation. The INIT-SV and EVOL-SV strategies are calculated based on each trajectory of the 25 forecasts. Unlike the INIT-SV strategy (Fig. 8c), the EVOL-SV strategy is tested for varying optimization and evolving times. The EVOL-SV strategy with *τ*_{opt} > *τ*_{evol} is referred to as the EVOL-SV-*t*_{V} strategy, and the EVOL-SV strategy with *τ*_{opt} = *τ*_{evol} as the EVOL-SV-*t*_{T} strategy. The optimization times, evolving times, and corresponding trajectories for the EVOL-SV strategies are shown in Figs. 8d and 8e. For brevity, only the EVOL-SV strategies for 36-h evolution are shown in the schematic. The evolving times tested for each EVOL-SV strategy are 36, 24, and 12 h. Note that the EVOL-SV strategies with *τ*_{evol} = 24 h (or 12 h) benefit from more observation information than the other strategies.

For each strategy, the first adaptive observation location is selected by identifying the grid point with the maximum amplitude of the variable (e.g., error, gradient sensitivity, initial SV, or evolved SV) associated with that strategy. The next observation site is selected by finding the grid point with the next highest amplitude that is no closer than a prescribed number of grid points to the first observation location. This procedure is repeated until all adaptive observation locations have been identified. For each adaptive observation location, a simulated rawinsonde observation is taken at all levels of that particular grid point. Two observation spacings are tested: three grid points (section 5b) and one grid point (section 5c) away from previously selected stations.
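The greedy selection procedure above can be sketched as follows. The function name is hypothetical, and the use of Chebyshev (grid point) distance for the spacing constraint is our assumption; the text specifies only a minimum separation measured in grid points.

```python
import numpy as np

def select_adaptive_locations(amplitude, n_sites, min_spacing):
    """Greedily select n_sites grid points in descending order of amplitude,
    skipping any candidate closer than min_spacing grid points (Chebyshev
    distance assumed) to a previously selected site."""
    order = np.argsort(amplitude.ravel())[::-1]   # largest amplitude first
    chosen = []
    for flat in order:
        i, j = (int(v) for v in np.unravel_index(flat, amplitude.shape))
        if all(max(abs(i - ci), abs(j - cj)) >= min_spacing for ci, cj in chosen):
            chosen.append((i, j))
            if len(chosen) == n_sites:
                break
    return chosen

# Example: the second-largest value lies too close to the first and is skipped.
amp = np.zeros((10, 10))
amp[2, 2], amp[2, 3], amp[7, 7] = 5.0, 4.0, 3.0
sites = select_adaptive_locations(amp, n_sites=2, min_spacing=3)
```

With a three-grid-point spacing, the point (2, 3) is rejected because it is only one grid point from (2, 2), so the selection falls through to (7, 7).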

#### 3) Calculation of composite SV for INIT-SV and EVOL-SV strategies

The coefficients used to calculate the composite SV in (3.8) are similar to those used in adaptive observation experiments for real cyclone development cases (e.g., Buizza and Montani 1999; Montani et al. 1999). Even though the coefficients in (3.8) are suboptimal compared to those in (3.9), Eq. (3.8) is used instead of (3.9) because the projections of the initial error onto the individual SVs are not known in practical situations. Figure 10 shows the normalized projection coefficients, with and without scaling by the maximum projection coefficient in (3.9), and the coefficients used to form the composite SV in (3.8). The coefficients are calculated by averaging over the 25 cases. Both the actual projection coefficients and the coefficients used in this paper have maximum weighting toward the leading SV, implying that the coefficients we have used are a good approximation of the actual projection coefficients.
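Since Eqs. (3.8) and (3.9) are not reproduced in this excerpt, the sketch below shows only the generic form of such a composite: a normalized weighted sum of the leading SVs, with the largest weight on the leading SV. The function name and the example weights are placeholders, not the paper's actual coefficients.

```python
import numpy as np

def composite_sv(svs, weights):
    """Form a composite SV as a normalized weighted sum of the leading SVs.
    svs: array of shape (n_sv, ny, nx); weights: length n_sv, with the
    largest weight assigned to the leading SV."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so the weights sum to 1
    return np.tensordot(w, svs, axes=1)   # sum_k w_k * SV_k

# Example with two SVs and placeholder weights favoring the leading SV.
svs = np.stack([np.full((3, 3), 3.0), np.full((3, 3), -3.0)])
comp = composite_sv(svs, weights=[2.0, 1.0])   # (2*SV1 + 1*SV2) / 3
```

The key property mirrored from the text is the monotone weighting toward the leading SV, which approximates the (unknown) projections of the initial error onto the individual SVs.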

### b. Results of adaptive observations

All the adaptive strategies described in Fig. 8 are tested for each of the cases described in section 5a, and the case-averaged rms analysis and forecast errors are compared with those produced by the nonadaptive observation strategy. To avoid clustering the observations, the observation locations are selected subject to a minimum spacing of three grid points (Table 2). To investigate the effect of observation density on the nonadaptive and adaptive observations, three configurations of observation stations are tested: 16 fixed (or 16 adaptive) observations, 32 fixed (or 32 adaptive) observations, and 16 adaptive observations added to 16 fixed observations.

#### 1) Sparse observation network

Figure 11a shows the rms forecast errors associated with 16 fixed (or adaptive) observation locations at the indicated times for the nonadaptive and four adaptive strategies, averaged over all cases tested. The forecast errors of the various adaptive strategies are less than that of the FIXED strategy. Of the strategies tested, the ERR strategy reduces forecast error the most. The forecast errors produced by the EVOL-SV strategies are smaller than those produced by the INIT-SV and GRAD-SENS strategies and are close to those of the ERR strategy. Recall from section 4 that while the EVOL-SV strategy identifies regions where there may be significant background error at *t*_{T}, the INIT-SV and GRAD-SENS strategies have a relatively small projection onto the analysis error. The EVOL-SV strategies most likely perform better than the INIT-SV and GRAD-SENS strategies because the large fraction of the error identified by the evolved SV is easier to correct than the small fraction of the error identified by the initial SV. Even though the skill of the EVOL-SV strategies does not clearly depend on *τ*_{evol}, the EVOL-SV-*t*_{V} strategies do tend to reduce forecast error more than the EVOL-SV-*t*_{T} strategies, implying the benefit of using both dynamics and uncertainty information for adaptive observations.

#### 2) Dense observation network

Figure 11b shows the case-averaged rms analysis and forecast errors for 32 fixed or adaptive observation locations. Compared to the results for the sparse observation network shown in Fig. 11a, the rms forecast errors of all the strategies are much smaller. These smaller errors arise because 32, rather than 16, observation locations are used to generate the background fields at *t*_{T}. In contrast to the sparse observation case, the ERR strategy does not reduce error significantly more than the FIXED strategy. This result agrees with the results in Morss et al. (2001) and Morss and Emanuel (2002), which demonstrate that adapting observations is, in general, more beneficial for a sparse observation network than for a dense observation network because it is more difficult to correct small analysis errors than large ones. The INIT-SV and GRAD-SENS strategies produce forecast errors significantly larger than the FIXED strategy. The evolved SV strategies, on the other hand, produce forecast errors similar to or smaller than those of the FIXED and ERR strategies. As in the sparse observation experiments, there is some indication that the EVOL-SV-*t*_{V} strategies reduce forecast error slightly more than the EVOL-SV-*t*_{T} strategies.

#### 3) Mixed observation network

Figure 11c shows the case-averaged rms forecast errors for a network of 16 fixed observations and a network of 16 adaptive observations added to 16 fixed observations (total of 32 observation locations). In contrast to the results shown for sparse and dense observation networks, which investigate the benefit of adaptive strategies when one is free to move all observations at *t*_{T}, this set of experiments investigates the effect of adding adaptive observations to a network of fixed observations. The added adaptive observation locations are selected subject to the constraint that they are at least three grid points away from all other observations, fixed or adaptive. As expected given the larger number of observations, adding adaptive observations to the fixed observation network tends to reduce the forecast error. In other words, the effect of 16 additional observations at one time is the difference between the forecast errors in Fig. 11a and those in Fig. 11c. The relative performance of adaptive strategies is very similar to the results in Fig. 11a. Of the adaptive strategies tested, the ERR and EVOL-SV strategies reduce forecast error more than the INIT-SV and GRAD-SENS strategies. Similar to the experiments for sparse and dense observation networks, the EVOL-SV-*t*_{V} strategies produce slightly more skillful forecasts than the EVOL-SV-*t*_{T} strategies, and the skill of the EVOL-SV strategies does not clearly depend on the evolving times.

### c. Additional experiments with different configurations of observation locations and spacing

The results of OSSEs using five sets of 32 and 16 observation locations (Fig. 2) to generate the 25 analysis states at *t*_{T} are shown in Fig. 12. In contrast to the configuration in section 5b, the minimum observation spacing between fixed observation locations is one grid point, and the observation locations are mostly concentrated in the middle of the domain to mimic the distribution of observation locations in practical situations (Table 2). Given the background error generated from these fixed observation locations, the effect of adaptive observations based on the various strategies is tested. Several sets of fixed observation locations are tested to eliminate the effects of any specific configuration of fixed observation locations on the results.

Figure 12 shows the case-averaged rms forecast errors of the various strategies for various observing network configurations at the indicated times. The minimum observation spacing constraint used for the various adaptive strategies is one grid point, the same as for the fixed observations. Compared to the results in Fig. 11, the differences between the rms forecast error of the INIT-SV strategy and that of the EVOL-SV strategy are smaller. These smaller differences are likely due primarily to the smaller observation spacing. Except for that difference, the results of this experiment are quite similar to those in section 5b, implying that the results in section 5b are independent not only of the specific synoptic situations and observation errors utilized but also of the specific configuration of fixed observation locations and the observation spacing.

## 6. Summary and discussion

Adjoint-based sensitivities have been used to identify sensitive regions in which additional observations, if properly assimilated, may improve a subsequent forecast. The use of adjoint-based sensitivities to identify sensitive regions for targeting purposes has been based on the assumption that correcting that portion of the initial error that projects onto the adjoint-based sensitivities will lead to a significant reduction in the forecast error. In this paper, the structure and evolution of the initial error and the adjoint-based sensitivities have been compared to examine the above assumption.

The results in section 4 indicate that the small fraction of the analysis error that projects onto the gradient sensitivities and the SVs grows to make a large contribution to the forecast error at 48 h. As a consequence, these results suggest that, indeed, correcting a small portion of the analysis error in the regions indicated by the initial SV would lead to an improved forecast. Further, the similarity of the evolved SV to the forecast error suggests the possibility of using evolved SVs for targeting purposes. In contrast to initial SVs, which identify regions where error, if it were to exist, might grow rapidly, evolved SVs appear to indicate where significant forecast error may exist. Since forecasts are used as the background fields for data assimilation, correcting the (possibly) large forecast error indicated by the evolved SVs would improve the analysis produced by data assimilation schemes and thereby yield important reductions in the errors of subsequent forecasts.

OSSEs are performed to assess the utility of various adaptive strategies, including the evolved SV strategy, and the results are compared with those of the nonadaptive observation strategy. For sparse observations, the forecast errors of the evolved SV strategies are generally less than those of the fixed observation strategy and the adjoint-based strategies, and only slightly larger than those of the error (ERR) strategy. That the forecast errors associated with the evolved SV strategy are close in magnitude to those associated with the ERR strategy confirms that the evolved SVs correspond well to the forecast error. For the dense observation network, the forecast errors of the adaptive strategies are comparable to or larger than those of the fixed observation strategy, in agreement with the results in Morss et al. (2001) and Morss and Emanuel (2002), which demonstrate that adaptive observations are more beneficial for a sparse observation network than for a dense one. Even for a dense observation network, however, the evolved SV strategies show much better skill than the adjoint-based strategies. For the mixed network (adaptive observations added to fixed observations), all the adaptive strategies show some skill, but the benefit of the additional observations is relatively small. Using a simple framework, Baker and Daley (2000) also demonstrate the difficulty of reducing forecast error by adding observations to an existing network. For the mixed network, the evolved SV strategies again produce forecasts with skill similar to or better than those produced by the adjoint-based strategies.

Overall, the adaptive strategy based on the evolved SV performs as well as or better than the adjoint-based adaptive strategies. The most distinct difference between the adjoint-based and the evolved SV strategies occurs when the number of observation stations is large, a situation in which previous results have suggested that forecast error reduction by adaptive observations is generally very difficult. All of the results of the OSSEs are similar for different synoptic situations, different configurations of observation locations, different realizations of observation errors, and a different observation spacing.

Theoretical support for using the evolved SVs for adaptive observations comes from the fact that the evolved SVs, constrained by the AECM at the initial time, can be used to construct the eigenvectors of the forecast error covariance matrix at the end of the optimization interval (e.g., Ehrendorfer and Tribbia 1997). These properly normed evolved SVs therefore describe key elements of the forecast error at a later time. Evolved SVs based on the energy norm have been used for ensemble prediction (Barkmeijer et al. 1999), but evolved SVs based on the AECM have not been used for adaptive observations or ensemble prediction. Even though the AECM is the proper norm for calculating SVs in this study, the potential enstrophy SV is used because the structure of the evolved SV is relatively independent of the norm chosen (e.g., Palmer et al. 1998). Further investigation of the use of properly normed evolved SVs for adaptive observation and data assimilation purposes remains future work. Because bred vectors (Kalnay and Toth 1994) also contain analysis error information, a comparison between bred vectors and evolved SVs might be another worthwhile task.

## Acknowledgments

The authors wish to thank Dr. Chris Snyder for valuable discussions on the quasigeostrophic model and adjoint coding, and Dr. Carolyn Reynolds for insightful discussion and thoughtful comments on an earlier version of the manuscript. The authors also acknowledge Dr. Eugenia Kalnay for useful discussions at an earlier stage of the work. Some of the results were generated using computing resources at the National Center for Atmospheric Research, and the authors especially appreciate Pat Waukau for allowing the use of those resources. The authors are very grateful to Dr. James Hansen, Dr. Thomas Hamill, Linda Keller, and an anonymous reviewer for their valuable comments that helped improve this manuscript. This work was supported by National Science Foundation Grants ATM-9810916 and ATM-0121186 and "A Study on the Global Ocean/Climate Variability and Predictability with Array for Real-Time Geostrophic Oceanography (ARGO) Program" at the Meteorological Research Institute of the Korea Meteorological Administration.

## REFERENCES

Baker, N. L., and R. Daley, 2000: Observation and background adjoint sensitivity in the adaptive observation-targeting problem. *Quart. J. Roy. Meteor. Soc.*, **126**, 1431–1454.

Barkmeijer, J., M. van Gijzen, and F. Bouttier, 1998: Singular vectors and estimates of the analysis error covariance metric. *Quart. J. Roy. Meteor. Soc.*, **124**, 1695–1713.

Barkmeijer, J., R. Buizza, and T. N. Palmer, 1999: 3D-Var Hessian singular vectors and their potential use in the ECMWF ensemble prediction system. *Quart. J. Roy. Meteor. Soc.*, **125**, 2333–2351.

Bergman, K. H., 1979: Multivariate analysis of temperatures and winds using optimum interpolation. *Mon. Wea. Rev.*, **107**, 1423–1444.

Bergot, T., 1999: Adaptive observations during FASTEX: A systematic survey of upstream flights. *Quart. J. Roy. Meteor. Soc.*, **125**, 3271–3298.

Bergot, T., G. Hello, A. Joly, and S. Malardel, 1999: Adaptive observations: A feasibility study. *Mon. Wea. Rev.*, **127**, 743–765.

Berliner, L. M., Z. Q. Lu, and C. Snyder, 1999: Statistical design for adaptive weather observations. *J. Atmos. Sci.*, **56**, 2536–2552.

Bishop, C. H., and Z. Toth, 1999: Ensemble transformation and adaptive observations. *J. Atmos. Sci.*, **56**, 1748–1765.

Bishop, C. H., B. J. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. *Mon. Wea. Rev.*, **129**, 420–436.

Buizza, R., and T. Palmer, 1995: The singular vector structure of the atmospheric global circulation. *J. Atmos. Sci.*, **52**, 1434–1456.

Buizza, R., and A. Montani, 1999: Targeting observations using singular vectors. *J. Atmos. Sci.*, **56**, 2965–2985.

Ehrendorfer, M., and J. J. Tribbia, 1997: Optimal prediction of forecast error covariances through singular vectors. *J. Atmos. Sci.*, **54**, 286–313.

Errico, R. M., 1997: What is an adjoint model? *Bull. Amer. Meteor. Soc.*, **78**, 2577–2591.

Gelaro, R., R. H. Langland, G. D. Rohaly, and T. E. Rosmond, 1999: An assessment of the singular-vector approach to targeted observing using the FASTEX dataset. *Quart. J. Roy. Meteor. Soc.*, **125**, 3299–3327.

Hakim, G. J., 2000: Climatology of coherent structures on the extratropical tropopause. *Mon. Wea. Rev.*, **128**, 385–406.

Hamill, T. M., and C. Snyder, 2000: A hybrid ensemble Kalman filter–3D-variational analysis scheme. *Mon. Wea. Rev.*, **128**, 2905–2919.

Hamill, T. M., and C. Snyder, 2002: Using improved background-error covariances from an ensemble Kalman filter for adaptive observations. *Mon. Wea. Rev.*, **130**, 1552–1572.

Hamill, T. M., C. Snyder, and R. E. Morss, 2000: A comparison of probabilistic forecasts from bred, singular vector, and perturbed observation ensembles. *Mon. Wea. Rev.*, **128**, 1835–1851.

Hamill, T. M., C. Snyder, and R. E. Morss, 2002: Analysis-error statistics of a quasigeostrophic model using three-dimensional variational assimilation. *Mon. Wea. Rev.*, **130**, 2777–2791.

Hamill, T. M., C. Snyder, and J. S. Whitaker, 2003: Ensemble forecasts and the properties of flow-dependent analysis-error covariance singular vectors. *Mon. Wea. Rev.*, **131**, 1741–1758.

Hansen, J. A., and L. A. Smith, 2000: The role of operational constraints in selecting supplementary observations. *J. Atmos. Sci.*, **57**, 2859–2871.

Hartmann, D. L., R. Buizza, and T. N. Palmer, 1995: Singular vectors: The effect of spatial scale on linear growth of disturbances. *J. Atmos. Sci.*, **52**, 3885–3894.

Hoskins, B. J., and N. V. West, 1979: Baroclinic waves and frontogenesis. Part II: Uniform potential vorticity jet flows—cold and warm fronts. *J. Atmos. Sci.*, **36**, 1663–1680.

Hoskins, B. J., R. Buizza, and J. Badger, 2000: The nature of singular vector growth and structure. *Quart. J. Roy. Meteor. Soc.*, **126**, 1565–1580.

Ide, K., P. Courtier, M. Ghil, and A. C. Lorenc, 1997: Unified notation for data assimilation: Operational, sequential, and variational. *J. Meteor. Soc. Japan*, **75**, 181–189.

Kalnay, E., and Z. Toth, 1994: Removing growing errors in the analysis cycle. Preprints, *10th Conf. on Numerical Weather Prediction*, New York, NY, Amer. Meteor. Soc., 212–215.

Kim, H. M., 2002: Diagnosis of error growth and propagation: Implications for adaptive observations. Ph.D. thesis, University of Wisconsin—Madison, 197 pp. [Available from Memorial Library, 728 State St., Madison, WI 53706.]

Kim, H. M., and M. Morgan, 2002: Dependence of singular vector structure and evolution on the choice of norm. *J. Atmos. Sci.*, **59**, 3099–3116.

Langland, R. H., R. Gelaro, G. D. Rohaly, and M. A. Shapiro, 1999: Targeted observations in FASTEX: Adjoint-based targeting procedures and data impact experiments in IOP17 and IOP18. *Quart. J. Roy. Meteor. Soc.*, **125**, 3241–3270.

Lorenz, E. N., and K. A. Emanuel, 1998: Optimal sites for supplementary observations: Simulation with a small model. *J. Atmos. Sci.*, **55**, 399–414.

Majumdar, S. J., C. H. Bishop, R. Buizza, and R. Gelaro, 2002: A comparison of ensemble-transform Kalman-filter targeting guidance with ECMWF and NRL total-energy singular-vector guidance. *Quart. J. Roy. Meteor. Soc.*, **128**, 2527–2550.

Montani, A., A. J. Thorpe, R. Buizza, and P. Unden, 1999: Forecast skill of the ECMWF model using targeted observations during FASTEX. *Quart. J. Roy. Meteor. Soc.*, **125**, 3219–3240.

Morgan, M., 2001: A potential vorticity and wave activity diagnosis of optimal perturbation evolution. *J. Atmos. Sci.*, **58**, 2518–2544.

Morss, R. E., 1999: Adaptive observations: Idealized sampling strategies for improving numerical weather prediction. Ph.D. thesis, Massachusetts Institute of Technology, 225 pp.

Morss, R. E., and K. A. Emanuel, 2002: Influence of added observations on analysis and forecast errors: Results from idealized systems. *Quart. J. Roy. Meteor. Soc.*, **128**, 285–322.

Morss, R. E., K. A. Emanuel, and C. Snyder, 2001: Idealized adaptive observation strategies for improving numerical weather prediction. *J. Atmos. Sci.*, **58**, 210–232.

Mukougawa, H., and T. Ikeda, 1994: Optimal excitation of baroclinic waves in the Eady model. *J. Meteor. Soc. Japan*, **72**, 499–513.

Palmer, T. N., R. Gelaro, J. Barkmeijer, and R. Buizza, 1998: Singular vectors, metrics, and adaptive observations. *J. Atmos. Sci.*, **55**, 633–653.

Parrish, D. F., and J. Derber, 1992: The National Meteorological Center's spectral statistical interpolation analysis system. *Mon. Wea. Rev.*, **120**, 1747–1763.

Pu, Z.-X., and E. Kalnay, 1999: Targeting observations with the quasi-inverse linear and adjoint NCEP global models: Performance during FASTEX. *Quart. J. Roy. Meteor. Soc.*, **125**, 3329–3337.

Rotunno, R., and J. W. Bao, 1996: A case study of cyclogenesis using a model hierarchy. *Mon. Wea. Rev.*, **124**, 1051–1066.

Snyder, C., 1996: Summary of an informal workshop on adaptive observations and FASTEX. *Bull. Amer. Meteor. Soc.*, **77**, 953–961.

Snyder, C., and T. M. Hamill, 2003: Leading Lyapunov vectors of a turbulent baroclinic jet in a quasigeostrophic model. *J. Atmos. Sci.*, **60**, 683–688.

Snyder, C., T. M. Hamill, and S. B. Trier, 2003: Linear evolution of error covariances in a quasigeostrophic model. *Mon. Wea. Rev.*, **131**, 189–205.

Table 1. Approximate pressure (hPa) corresponding to each model level. Level 0 corresponds to the bottom, and level 6 to the top of the domain.

Table 2. Setup for the experiments in sections 4 and 5. The spinup times for the experiments in sections 4 and 5 are different to examine the validity of each adaptive strategy for independent cases. The observation spacing and observation location configurations in sections 5b and 5c are different to examine the effect of different configurations for adaptive observations. Note that the adaptive observations are not assimilated for the experiment in section 4, and that *t*_{T} is the same as *t* = 0 h.

Table 3. Nonadaptive and adaptive strategies tested and their abbreviations.

^{1} Strictly speaking, SVs are used only for the initial structures that satisfy the definition in section 3b. For brevity, however, in this paper those disturbances that arise from an initial SV will be referred to as evolved SVs.

^{2} For brevity, in this paper all references to SVs without specification refer to the potential enstrophy SV.

^{3} Notation generally follows the conventions in Ide et al. (1997).

^{4} For brevity, in this paper the disturbance composed from SVs will be referred to as the composite SV.

^{5} The position of a variable is denoted as "grid point (*x,* *y*)," where *x* corresponds to the grid point in the zonal direction and *y* to the grid point in the meridional direction.

^{6} This minimum observation spacing is used because, in practical situations, fixed observation locations are not evenly distributed. The observation stations are concentrated in the middle of the domain to mimic real observation stations, which are largely concentrated in the midlatitudes of the Northern Hemisphere.

^{7} Here, *τ*_{evol} is the length of time the initial SV has evolved in a nonlinear integration of the QG model.