Abstract

Model error is the component of the forecast error that is due to the difference between the dynamics of the atmosphere and the dynamics of the numerical prediction model. The systematic, slowly varying part of the model error is called model bias. This paper evaluates three different ensemble-based strategies to account for the surface pressure model bias in the analysis scheme. These strategies are based on modifying the observation operator for the surface pressure observations by the addition of a bias-correction term. One estimates the correction term adaptively, while another uses the hydrostatic balance equation to obtain the correction term. The third strategy combines an adaptively estimated correction term and the hydrostatic-balance-based correction term. Numerical experiments are carried out in an idealized setting, where the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) model is integrated at resolution T62L28 to simulate the evolution of the atmosphere and the T30L7 resolution Simplified Parameterization Primitive Equation Dynamics (SPEEDY) model is used for data assimilation. The results suggest that the adaptive bias-correction term is effective in correcting the bias in the data-rich regions, while the hydrostatic-balance-based approach is effective in data-sparse regions. The adaptive bias-correction approach also has the benefit that it leads to a significant improvement of the temperature and wind analysis at the higher model levels. The best results are obtained when the two bias-correction approaches are combined.

1. Introduction

The difference between the dynamics of a numerical weather prediction model and the dynamics of the real atmosphere contributes to the error in numerical forecasts. When the model is employed to provide the background for an analysis scheme, forecast errors often lead to a slowly evolving systematic error component in the background. This type of error, which is called model bias, violates the assumption of the analysis schemes that the mean of the probability distribution of the background error is zero (e.g., Dee and Da Silva 1998).

The contribution of model bias to the discrepancy between a background forecast and the true atmospheric state can be comparable to, or even larger than, the contribution of the growing part of the initial condition error. Model bias has many sources, such as the finite-resolution representation of the continuous atmospheric fields, the limited knowledge and imperfect representation of subgrid physical processes, and the imperfect specification of the boundary conditions. Although some of these sources are completely independent, it is not feasible to identify and parameterize each of them independently. One way to account for model bias, first suggested by Derber (1989), is to assume that the total effect of all sources of bias in the forecast model can be represented by a limited number of bulk error terms. The amplitude of the bulk error terms is specified by parameters, which are then estimated as part of the data assimilation process.

The general problem of model bias estimation in Kalman filtering was first studied by Friedland (1969), who suggested a scheme in which the model state was augmented by a bias component. In Friedland’s work, the dynamics was taken to be linear and the bias was decoupled and estimated separately from the model state. This decoupled bias estimation approach was introduced into the atmospheric data assimilation literature by Dee and Da Silva (1998), and has since been applied with some success (Dee and Todling 2000; Carton et al. 2000; Martin et al. 2002; Bell et al. 2004; Lamarque et al. 2004; Chepurin et al. 2005; Keppenne et al. 2005; Drécourt et al. 2006). More recently, Baek et al. (2006) and Zupanski and Zupanski (2006) suggested incorporating the method of state augmentation into the formulation of ensemble-based Kalman filter data assimilation schemes.

Baek et al. (2006) have shown that the traditional approach to bias correction (hereafter referred to as bias model I), in which the background is first corrected with the estimated bias and then the state is estimated by updating the bias-corrected background based on the latest observations, can be inefficient in improving the accuracy of the state estimate. To address this problem, we proposed a new approach (Baek et al. 2006), called bias model II. This approach is motivated by envisioning a situation in which the forecast model evolution takes place on an attractor shifted from the attractor of the true dynamics. In such a situation, making a correction to the background state that moves the background state estimate from the model attractor to the true system attractor, as done in bias model I, may trigger an adjustment process during the next model integration step. The effects of such an adjustment on the accuracy of the state estimate are unpredictable and often negative. To avoid triggering a strong adjustment process, in bias model II we search for a state estimate that best represents the true state on the model attractor. This involves finding the parameters of a transformation between the model attractor and the true attractor. [We note that other approaches based on assuming a mismatch between the model attractor and the true attractor were also proposed recently by Drécourt et al. (2006) and Toth and Peña (2007).]

In Baek et al. (2006) we tested both bias models I and II via numerical experimentation on the low-dimensional Lorenz-96 model (Lorenz and Emanuel 1998). In the present paper, we use a more realistic setting to investigate the potential benefits of accounting for the bias in the surface pressure state variable with bias model II. To simulate the situation faced in numerical weather prediction, we use two forecast models at different resolutions and with different levels of sophistication in the physical parameterization packages: the simulated “true” atmospheric states are generated by integrating the 2004 version of the model component of the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) at resolution T62L28 (the global atmosphere is represented by 144 × 73 × 28 grid points), while the analyses and forecasts are obtained with the lower resolution Simplified Parameterization Primitive equation Dynamics (SPEEDY) model at resolution T30L7 (the global atmosphere is represented by 96 × 48 × 7 grid points). The physical parameterization schemes and the surface boundary condition in the SPEEDY model are strongly simplified compared to those in the NCEP GFS model. Since the vertical coordinate of the SPEEDY model is sigma, defined by the ratio between pressure and the surface pressure, correcting the bias in the surface pressure affects the assimilation of all other variables. Also, because the systematic difference between the surface pressure in the SPEEDY and NCEP GFS models is mainly a result of the difference between the orography of the two models, it is especially appropriate to use bias model II: it would make no sense to correct the background surface pressure bias in the SPEEDY model to match the surface pressure of the NCEP GFS model, which is associated with a higher-resolution orography.

For data assimilation we use the local ensemble transform Kalman filter (LETKF) of Hunt et al. (2007). The LETKF is the latest, computationally most efficient, version of the local ensemble Kalman filter (LEKF) scheme first proposed in Ott et al. (2004). The most important feature of this scheme is that it obtains the analysis independently for each grid point, assimilating all observations that can influence the analysis at a given grid point simultaneously. This allows for a computationally efficient parallel implementation. Although our experiments employ the LETKF, we believe that the results we obtain regarding the effects of accounting for the surface pressure model bias would hold for other ensemble-based data assimilation schemes (e.g., for those proposed by Burgers et al. 1998; Houtekamer and Mitchell 1998, 2001; Anderson 2001; Bishop et al. 2001; Hamill et al. 2001; Whitaker and Hamill 2002; Zupanski 2005).

The structure of the paper is as follows: We first present the bias model and the data assimilation scheme in section 2. Section 3 describes the experiment design, while section 4 presents the results of our numerical experiments. Our conclusions are summarized in section 5.

2. Kalman filtering in the presence of model bias

In this section, we introduce two different definitions of the model error [Eqs. (7) and (11)]. These two definitions, together with the assumption that the model error is persistent, lead to two mathematical models for the model bias. In our numerical experiments we use only one of these two bias models: bias model II. Finally, we provide a detailed description of the changes we make to the LETKF algorithm of Hunt et al. (2007) to account for the surface pressure model bias with bias model II.

a. Model error

The atmospheric state at a given time tn is represented by a spatially continuous vector field u(r, tn) (e.g., the components of u might consist of the components of the wind vector, temperature, pressure, relative humidity, etc.). Here, r is a three-dimensional vector of position (the typical components of r are longitude, latitude, and altitude). For brevity, we introduce the notation un for u(r, tn). With this notation, the evolution of the atmospheric state between times tn−1 and tn is

 
u_n = ℱ(u_{n-1}),  (1)

where ℱ is the evolution operator. A forecast model f mimics ℱ in a finite-dimensional model space to predict the future atmospheric state. We assume that a suitable projection 𝒫 from the infinite-dimensional atmospheric states to the finite-dimensional model states exists. We denote the finite-dimensional representation of the atmospheric state un by 𝒫(un). Since, in practice, the atmospheric state un−1 is not known, a forecast xn of un is obtained by integrating the model from an estimate xn−1 of un−1:

 
x_n = f(x_{n-1}).  (2)

The goal of data assimilation is to provide an estimate (analysis) xna of 𝒫(un) using the model f and the observations collected up to time tn. In what follows, we treat the model as a “black box,” that is, we consider the case where we have to infer information about the error in f from the knowledge of xn−1, xn and the observations. In a Kalman filter data assimilation scheme, the state estimate is updated sequentially by

 
x_n^b = f(x_{n-1}^a),  (3)
 
x_n^a = x_n^b + 𝗞[y_n^o − h_n(x_n^b)].  (4)

The background xnb represents our best estimate of 𝒫(un) at time tn prior to the assimilation of the observations yno and 𝗞 is the Kalman gain matrix. The observation operator hn maps the model representation of the state to observables at analysis time tn and is assumed to satisfy

 
y_n^o = h_n[𝒫(u_n)] + ε_n,  (5)

where the vector of Gaussian random variables ɛn, with mean 0 and covariance matrix 𝗥n, represents the observation noise. In practice, hn interpolates the model gridpoint variables to the location and time of the observations and converts the model variables to the observed quantities. In other words, hn is formulated so that hn[𝒫(un)] differs from the observed quantity at the observation locations only by the noise ɛn. The use of the interpolation operator hn in Eq. (5) is based on the implicit assumption that xn is an accurate estimate of 𝒫(un). That is, the formulation of the Kalman filter described so far is based on the assumption that the contribution of model error to the error in the background is zero. In practice, however, the atmospheric evolution operator and the model evolution operator differ from each other. This difference would lead to a systematic departure,

 
b_n = f[𝒫(u_{n-1})] − 𝒫(u_n),  (6)

of the forecast from the atmospheric state, even if the model were integrated from 𝒫(un−1), the finite-dimensional image of the true state (which is unknown in practice).

We obtain the equation that motivates our error models by rearranging Eq. (6),

 
𝒫(u_n) = f[𝒫(u_{n-1})] − b_n.  (7)

Our goal is to modify Eqs. (3)–(5), which assume that bn is the null vector, so as to account for a nonzero bn in the state estimation process.
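Before turning to the bias models, the basic forecast–update cycle of Eqs. (3)–(4) can be sketched as follows. This is a minimal Python illustration with a hypothetical one-variable linear model, a direct observation, and a prescribed gain; all names and numbers are illustrative and are not part of the actual assimilation system.

```python
import numpy as np

def kf_step(x_prev_analysis, f, h, K, y_obs):
    """One forecast/update cycle of Eqs. (3)-(4): the background is the
    model forecast from the previous analysis, and the analysis adds the
    innovation weighted by the gain K."""
    x_b = f(x_prev_analysis)              # Eq. (3): background forecast
    x_a = x_b + K @ (y_obs - h(x_b))      # Eq. (4): analysis update
    return x_b, x_a

# Toy example: damped linear dynamics, direct observation, fixed gain.
f = lambda x: 0.9 * x                     # hypothetical model
h = lambda x: x                           # observe the state directly
K = np.array([[0.5]])                     # hypothetical fixed gain
x_b, x_a = kf_step(np.array([2.0]), f, h, K, np.array([1.0]))
# x_b = 1.8; x_a = 1.8 + 0.5*(1.0 - 1.8) = 1.4
```

In a real Kalman filter the gain 𝗞 is of course computed from the background and observation error covariances rather than prescribed.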

b. Bias model I

The derivation of bias model I is based on the assumption that the model is subject to the same model error regardless of whether it is started from the unknown 𝒫(un−1) or from its best available estimate xn−1a. Under this assumption, Eq. (7) implies that in the presence of model error bn, the best available estimate xnb of 𝒫(un) before the assimilation of the observations yno at tn is

 
x_n^b = f(x_{n-1}^a) − b_n.  (8)

Thus the analysis can be obtained by replacing the forecast equation of the Kalman filter [Eq. (3)] by Eq. (8).

Equation (8) offers a practical way to account for the model error only if one can find an efficient approach to estimating bn. Our preferred approach is the method of state augmentation, which is based on the assumption that the evolution of the model error can be accurately modeled by a deterministic evolution equation:

 
b_n = g(b_{n-1}).  (9)

In this case, an estimate bna of bn can be obtained by applying the Kalman filter equations [Eqs. (3)–(4)] to the augmented state vector z = [x, b]T. The simplest case arises when we can assume that bn is a time-independent systematic error (bias), since in that case the evolution equation for the model error is

 
b_n = b_{n-1}.  (10)
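Under the persistence model of Eq. (10), the forecast step for the augmented state of bias model I can be sketched as follows. This is a minimal Python illustration with a hypothetical linear model; the function name and the argument nx (the number of state components) are ours.

```python
import numpy as np

def augmented_forecast(z_analysis, f, nx):
    """Forecast step for the augmented state z = [x, b] under bias model I:
    the state background is the bias-corrected model forecast of Eq. (8),
    and the bias estimate carries over unchanged per Eq. (10)."""
    x_a, b_a = z_analysis[:nx], z_analysis[nx:]
    x_b = f(x_a) - b_a          # Eq. (8): bias-corrected background
    b_b = b_a                   # Eq. (10): persistent bias
    return np.concatenate([x_b, b_b])

# Example: two state components, a constant bias estimate of 0.5 each.
f = lambda x: 2.0 * x           # hypothetical linear model
z = augmented_forecast(np.array([1.0, 3.0, 0.5, 0.5]), f, nx=2)
# z = [1.5, 5.5, 0.5, 0.5]
```

The analysis step then updates the full augmented vector z with the ordinary Kalman filter update.
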

c. Bias model II

Bias model II is designed to cope with the situation where the model error is due to differences between the attractor of the atmosphere and the attractor of the model dynamics. It is envisioned that 𝒫(un) lies on the projection into model space of the attractor of the true atmospheric dynamics, and that this projection differs from the attractor of the model. Furthermore, as a working hypothesis, it is envisioned that there is a one-to-one transformation between orbits on the model space projection of the true attractor and orbits on the attractor of the model. That is, for any point 𝒫(un) on the image of the true attractor there is a corresponding point 𝒫̂(un) on the attractor of the model, and these evolve in time in accord with the presumed one-to-one transformation. In this error model scenario, our goal is to obtain an estimate of the transformation between 𝒫(un) and 𝒫̂(un). This goal is motivated by the concern that starting the model from a state not on the model attractor (as one would seek to do if bias model I were applicable), no matter how close that state is to the true state, would trigger an unpredictable, and very likely negative, transient effect on the model forecast as the model orbit relaxes to its attractor. Thus, we replace the equation that defines the model error [Eq. (7)] with

 
𝒫(u_n) = 𝒫̂(u_n) − b_n.  (11)

That is, we assume that the transformation between the true attractor image and the model attractor is a translation by the amount bn.

Assume that xn−1a is a good estimate of 𝒫̂(un−1). Then, f(xn−1a) is an estimate of 𝒫̂(un) on the model attractor, while the model error bn is a shift from the model attractor to the attractor of the atmosphere. Since our goal is to keep the state estimate on (or near) the model attractor, our choice for the forecast equation of the Kalman filter is the original Eq. (3) instead of Eq. (8) of bias model I.

One important consequence of choosing Eq. (3) as the forecast equation is that xnb is no longer our best estimate of 𝒫(un) before the assimilation of the observations. Our choice of defining hn by an interpolation scheme, however, is based on the assumption that hn operates on an estimate of 𝒫(un), which is xnb − bn according to Eq. (11). Thus, in order to maintain the equality that defines hn [Eq. (5)], we implement the observation operator as

 
h_n(x_n, b_n) = ĥ_n(x_n − b_n),  (12)

where ĥn is the function associated with the interpolation scheme. (With this notation, the observation operator used in the original Kalman filter and in bias model I is defined by hn = ĥn.) In summary, although the background is not corrected for the estimated bias in bias model II, the observation is compared to the bias-corrected model state in the analysis update equation. As in bias model I, an estimate of the model bias vector bn can be obtained by the method of state augmentation. We note that the best estimate xna − bna of the true state 𝒫(un) is never computed in the data assimilation process when bias model II is used, but it can be easily obtained if needed (e.g., for reporting forecasts), as the data assimilation computes both xna and bna.
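The way the observation is compared with the bias-corrected model state under bias model II can be sketched as follows. This is a minimal Python illustration; the direct-observation operator and all numbers are purely illustrative.

```python
import numpy as np

def innovation_bias_model_II(xb, b, yo, h_interp):
    """Innovation used in the analysis update under bias model II: the
    observation is compared with the bias-corrected model state through
    the modified observation operator of Eq. (12), h(x, b) = h_hat(x - b),
    while the background itself stays on the model attractor."""
    return yo - h_interp(xb - b)

# Hypothetical example: background surface pressure 1012 hPa on the model
# attractor, estimated shift 4 hPa between the attractors, observation
# 1009 hPa, direct (identity) interpolation.
d = innovation_bias_model_II(np.array([1012.0]), np.array([4.0]),
                             np.array([1009.0]), lambda x: x)
# d = 1009 - (1012 - 4) = 1.0
```
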

Estimation of the model error significantly increases the computational cost of the data assimilation algorithm. Even in the simplest case, when we consider only the estimation of a location-dependent model bias, as we do in this paper, the dimension of the augmented state vector z is twice the dimension of the state vector x. For bias model II, however, the dimension of the state vector can be reduced by estimating the error term in observation space instead of model grid space: since in a typical data assimilation problem the number of observations is at least an order of magnitude smaller than the number of model variables,1 this strategy can reduce the number of components of the model error vector by at least an order of magnitude. Formally, this strategy involves replacing the definition of h in Eq. (12) with

 
h_n(x_n, c_n) = ĥ_n(x_n) − c_n.  (13)

When the interpolation operator ĥn is linear, cn = ĥn(bn) and we obtain the same analyses using either Eq. (12) or Eq. (13). The most important limitation of Eq. (13) is that it can be used only with a fixed observing network; otherwise, the components of cn cannot be defined consistently at the different analysis times. Another limitation is that, because the correction is made in observation space, the information provided by xna and the analysis of the bias term cna is insufficient to obtain an estimate of the true state 𝒫(un). On the other hand, when ĥn is nonlinear, using Eq. (13) has the added advantage that, as we later illustrate for the surface pressure, it allows one to account for the bias in selected components of the state vector instead of the bias in the full state vector. Estimating the bias only in selected components further reduces the number of bias parameters, leading to an even greater reduction of the computational cost.
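The equivalence claimed above for a linear ĥn can be checked numerically. In this sketch, H is a hypothetical linear interpolation matrix (its weights are illustrative only):

```python
import numpy as np

# For a linear interpolation operator H (a matrix), the grid-space
# correction of Eq. (12) and the observation-space correction of
# Eq. (13) with c = H b give identical results.
H = np.array([[0.25, 0.75, 0.0],    # hypothetical interpolation weights
              [0.0, 0.5, 0.5]])     # mapping 3 grid points to 2 obs points
x = np.array([1000.0, 1010.0, 1020.0])   # grid-space state
b = np.array([2.0, 4.0, 6.0])            # grid-space bias estimate
eq12 = H @ (x - b)          # Eq. (12): correct in grid space, then map
eq13 = H @ x - H @ b        # Eq. (13): map, then correct with c = H b
assert np.allclose(eq12, eq13)
```

For a nonlinear ĥn (such as the sigma-coordinate operator described in section 2d) the two forms are no longer interchangeable, which is exactly what makes Eq. (13) useful for correcting only the surface pressure.
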

d. Surface pressure bias

The surface pressure component of the background is typically subject to large bias, primarily due to the finite-resolution representation of the orography in the model. To be more precise, the surface pressure bias is expected to grow exponentially with the difference, Δz, between the model representation of the orography and the real orography:

 
p_s = p_s^model · exp[gΔz/(R·T̄)].  (14)

Here ps is the atmospheric surface pressure, psmodel is the surface pressure in the model, Δz is the height of the model orography minus the height of the real orography, g = 9.8 m s−2 is the gravitational acceleration, R = 287 J kg−1 K−1 is the gas constant for dry air, and T̄ is the mean temperature in the |Δz| deep (hypothetical) atmospheric layer. Equation (14) is based on the assumption that the atmosphere in the |Δz| deep layer is in hydrostatic balance, which is well founded, since synoptic- and large-scale motions of the real atmosphere are hydrostatic to a good approximation. Hydrostatic balance is also built into the equations of all models that have resolutions coarser than about 10 km.
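A small numerical illustration of the hydrostatic relation of Eq. (14) follows (Python; the function name and the isothermal-layer assumption are ours):

```python
import math

def hydrostatic_ps_correction(ps_model_hpa, dz_m, t_mean_k):
    """Convert a model surface pressure to the value implied at the real
    orography via Eq. (14), with dz = z_model - z_real. Assumes the layer
    between the two surface heights is isothermal at t_mean_k."""
    g = 9.8       # gravitational acceleration, m s-2
    R = 287.0     # gas constant for dry air, J kg-1 K-1
    return ps_model_hpa * math.exp(g * dz_m / (R * t_mean_k))

# A model mountain 200 m higher than reality at a 280 K mean layer
# temperature raises the implied true surface pressure by roughly 2.5%.
ps = hydrostatic_ps_correction(1000.0, 200.0, 280.0)
```
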

If the bias from the surface pressure background was removed using bias model I, the surface pressure analysis would have a value consistent with the real orography. This would induce an adjustment process of the surface pressure, and of the other state variables through their dynamical relation to the surface pressure, to the model orography in the forecast phase of the next analysis cycle. In bias model II, in contrast, the surface pressure background, which reflects the model orography, is left unchanged. Thus, accounting for the surface pressure bias is an ideal test problem to investigate the efficiency of bias model II.

A careful treatment of the surface pressure is especially important in models that use σ = p/psmodel as the vertical model coordinate. Since such models represent the atmospheric variables at constant levels of σ, the pressure p at a given model level is spatiotemporally varying, following the changes in the surface pressure. Since the vertical position of the observations is typically given by the pressure p at the location of the observations, in a σ-coordinate model, the application of the observation operator involves two steps: first, the pressure is computed at each σ level multiplying σ by the surface pressure, then, the model state is interpolated to the observational location in pressure units.

Since our goal is to account only for the surface pressure component of the background bias, we choose Eq. (13) to define the observation operator h for the surface pressure. Using Eq. (12) would change the pressure p at the sigma levels before the vertical interpolation to the observation location. This would have undesirable effects on variables other than the surface pressure, since the model levels are defined according to the biased model surface pressure. The alternative solution would be to apply Eq. (12) to the full state vector, instead of only the surface pressure component, because then the bias component of the other variables would adjust those variables to the bias-corrected surface pressure in the computation of h. This approach would be computationally more expensive than the one we pursue, as it would require the estimation of many more bias components.

To evaluate the efficiency of our surface pressure bias-correction strategy, we compare its performance to the more traditional approach of using Eq. (14) to make the correction. That is, Eq. (12) is replaced by

 
h_n(x_n, Δz) = ĥ_n(x_n) exp[gΔz/(R·T̄)]  (15)

for the surface pressure observations. This is approximately equivalent to choosing cb according to Eq. (A2). This use of the hydrostatic balance equation is different from the approach we and others used in earlier implementations of ensemble-based data assimilation systems (e.g., Whitaker et al. 2004, 2008; Szunyogh et al. 2008). In those systems the adjustment was made to the surface pressure observations and the increased uncertainty introduced by this adjustment was taken into account by modifying the observational error for the adjusted observations.

e. Accounting for the surface pressure model bias in the LETKF

In this section we discuss how to incorporate the bias correction into the LETKF algorithm at time tn; to simplify notation, we now drop the subscript n. In the LETKF algorithm, the components of the analysis vector xa are obtained model grid point by grid point. We introduce the notation xℓ for the local state vector, whose components are the L model variables at a given grid point ℓ. The LETKF obtains the local analysis xℓa by adding a linear combination of the background ensemble perturbations, xℓb(k) − xℓb, k = 1, … , K, to the local background state xℓb, which is defined by the ensemble mean of the K-member background ensemble xℓb(k), k = 1, … , K. [Hereafter, we denote the ensemble mean of a vector over the K-member ensemble by dropping the superscript (k) that refers to the kth ensemble member.] That is,

 
x_ℓ^a = x_ℓ^b + 𝗫_ℓ^b w_ℓ^a.  (16)

Here, 𝗫ℓb is the L × K matrix of the background ensemble perturbations, whose kth column is xℓb(k) − xℓb. The analysis is obtained by computing the “weight vector” wℓa based on the error statistics and the observations. The LETKF assimilates all observations that are thought to have useful information about the state at the grid point where the state is estimated. We use the notations Mℓ for the number of observations selected for use at location ℓ, yℓo for the vector of the selected observations, and 𝗥ℓ for the associated Mℓ × Mℓ submatrix of 𝗥. We also introduce the following notation:

 
y^{b(k)} = h(x^{b(k)}),  k = 1, …, K.  (17)

We use the local components of yb(k) to define the local ensemble yℓb(k), k = 1, … , K, and the Mℓ × K matrix 𝗬ℓb, whose kth column is yℓb(k) − yℓb.2 With this notation, the weight vector wℓa is computed by

 
w_ℓ^a = 𝗣̃_ℓ^a (𝗬_ℓ^b)^T 𝗥_ℓ^{-1} (y_ℓ^o − y_ℓ^b),  (18)

where

 
𝗣̃_ℓ^a = [(K − 1)𝗜 + (𝗬_ℓ^b)^T 𝗥_ℓ^{-1} 𝗬_ℓ^b]^{-1}.  (19)

The LETKF also computes the analysis error covariance matrix

 
𝗣_ℓ^a = 𝗫_ℓ^b 𝗣̃_ℓ^a (𝗫_ℓ^b)^T  (20)

to obtain an estimate of the uncertainty in the analysis xℓa. Then, 𝗣̃ℓa is used to compute the ensemble of “weight” vectors wℓa(k), k = 1, … , K, to obtain the analysis ensemble

 
x_ℓ^{a(k)} = x_ℓ^b + 𝗫_ℓ^b w_ℓ^{a(k)},  k = 1, …, K, where w_ℓ^{a(k)} is the sum of w_ℓ^a and the kth column of [(K − 1)𝗣̃_ℓ^a]^{1/2},  (21)

which represents the analysis uncertainty. Finally, we apply a multiplicative variance inflation factor ρ (ρ > 0) to the global analysis ensemble perturbations δxa(k) = xa(k)xa, for k = 1, … , K. The analysis ensemble that provides the initial conditions for the forecast step of the next analysis cycle is obtained by adding the inflated analysis perturbations

 
δx̃^{a(k)} = (1 + ρ)^{1/2} δx^{a(k)},  k = 1, …, K,  (22)

to the mean analysis xa. This variance inflation strategy is different from the one used in the original formulation of the LETKF by Hunt et al. (2007). In that paper, a multiplicative variance inflation was applied while computing 𝗣̃a, dividing the identity matrix 𝗜 by 1 + ρ (ρ > 0). We made this alteration to the algorithm because this way we can easily apply different variance inflation factors to the state vector components and to the bias-correction components of the augmented state vector.
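The local analysis step of Eqs. (16)–(22) can be sketched as follows. This is a simplified Python illustration, assuming a diagonal 𝗥 and the symmetric matrix square root; the function name and the toy example are ours, and the sketch is not the implementation used in the experiments.

```python
import numpy as np

def letkf_local_analysis(Xb_ens, Yb_ens, yo, R_diag, rho):
    """One local LETKF analysis following Eqs. (16)-(22).
    Xb_ens: L x K background ensemble at a grid point; Yb_ens: M x K
    background ensemble mapped to observation space; yo: the M local
    observations; R_diag: their error variances (diagonal R); rho:
    the multiplicative variance inflation parameter."""
    L, K = Xb_ens.shape
    xb = Xb_ens.mean(axis=1)
    yb = Yb_ens.mean(axis=1)
    Xb = Xb_ens - xb[:, None]                 # background perturbations
    Yb = Yb_ens - yb[:, None]
    Rinv_Yb = Yb / R_diag[:, None]
    # Eq. (19): analysis covariance in the K-dimensional ensemble space
    Pa_tilde = np.linalg.inv((K - 1) * np.eye(K) + Yb.T @ Rinv_Yb)
    # Eq. (18): mean weight vector
    wa = Pa_tilde @ (Rinv_Yb.T @ (yo - yb))
    # Eq. (21): weight ensemble via the symmetric square root
    evals, evecs = np.linalg.eigh((K - 1) * Pa_tilde)
    Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    wa_ens = wa[:, None] + Wa
    xa_ens = xb[:, None] + Xb @ wa_ens        # Eqs. (16) and (21)
    xa = xa_ens.mean(axis=1)
    # Eq. (22): inflate the analysis perturbations by sqrt(1 + rho)
    xa_ens = xa[:, None] + np.sqrt(1.0 + rho) * (xa_ens - xa[:, None])
    return xa, xa_ens

# One directly observed variable, K = 3 members: the analysis mean
# matches the scalar Kalman filter answer, 2 + [1/(1+1)]*(2.5-2) = 2.25.
Xb_ens = np.array([[1.0, 2.0, 3.0]])
xa, xa_ens = letkf_local_analysis(Xb_ens, Xb_ens.copy(), np.array([2.5]),
                                  np.array([1.0]), rho=0.0)
```

For the bias-correcting configurations, the same update is applied to the augmented local vector zℓ instead of xℓ.
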

In this paper we use four different configurations of the LETKF:

  1. We proceed with the computation as given by Eqs. (16)–(22). In these experiments we account for the model bias by tuning the variance inflation coefficient ρ.

  2. We implement the hydrostatic-balance-based correction by using Eq. (15) to define the observation operator for the surface pressure observations.

  3. We incorporate bias model II and the method of state augmentation by replacing the state vector x with the augmented state vector z: at locations ℓ where no surface pressure observations are assimilated, the local augmented state vector z is identical to x, while at locations where surface pressure observations are assimilated, the L vector x is replaced with the (L + Mps) vector z. Here, Mps is the number of surface pressure observations assimilated at location ℓ. Also, at locations where no surface pressure observations are assimilated, the observation operator h in Eq. (17) is the interpolation operator ĥ, but at locations where surface pressure observations are assimilated, those components of h that map the model state vector to surface pressure are defined by Eq. (13). In this configuration, we apply different variance inflation coefficients to the surface pressure components of the state vector, to the other components of the state vector, and to the bias components.

  4. We use an observation operator that combines the hydrostatic-balance-based correction and bias model II: 
    h_n(x_n, Δz, c_n) = ĥ_n(x_n) exp[gΔz/(R·T̄)] − c_n.  (23)

3. Experiment design

The true evolution of the atmosphere is simulated by running the NCEP model for 3 months (91 days), starting from the operational NCEP analysis at 0000 UTC 1 January 2004, after truncating it to T62L28 resolution. Unlike the real atmosphere, for which the state vector is a spatially continuous vector field and hence is infinite dimensional, our simulated true atmospheric states are finite dimensional. Nevertheless, our simulated atmosphere is still of much higher dimension than the model we use for data assimilation. For the sake of simplicity, we keep the notation un for the state of the simulated true atmosphere at time tn.

a. Simulated observations

Observations are generated with 6-h frequency at 0000, 0600, 1200, and 1800 UTC. The horizontal locations of the observations are chosen to coincide with the horizontal locations of the grid points for the simulated atmosphere. This choice is made to avoid introducing interpolation errors into the knowledge of un. Observations are generated at only selected grid points by the following procedure: first, the locations of the surface pressure observations in a typical 0000 UTC operational NCEP data file are identified; then, observations are generated at the grid points closest to these observation locations. Only one observation is created at a grid point that is the closest to multiple observation locations. The density of the observations is the highest over the continents and the lowest over the oceans and in the two polar regions (Fig. 1). In addition to the surface pressure observations, observations of the horizontal wind vector and the virtual temperature are generated at seven pressure levels (at 925, 850, 700, 500, 300, 200, and 100 hPa). The observation noise has zero mean for all observed quantities and standard deviation of 1 hPa for the surface pressure, 1 K for the temperature, and 1 m s−1 for the wind components.
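The noise-adding step of the observation generation can be sketched as follows (Python; the function name and the fixed seed are illustrative, not the actual observation-generation code):

```python
import random

def simulate_obs(truth_values, sd, seed=0):
    """Simulate observations by adding zero-mean Gaussian noise with the
    prescribed standard deviation (1 hPa for surface pressure, 1 K for
    temperature, 1 m/s for wind in the experiments described above)."""
    rng = random.Random(seed)   # seeded for reproducibility of the sketch
    return [t + rng.gauss(0.0, sd) for t in truth_values]

# Hypothetical true surface pressures (hPa) at three selected grid points.
obs = simulate_obs([1000.0, 1005.0, 998.0], sd=1.0)
```
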

Fig. 1.

Location of the simulated observations are marked by dots. Also shown is the model grid in the SPEEDY model.


b. Ensemble configuration

We obtain estimates of the atmospheric state (analysis) every 6 h; hence, f(xn−1) is a 6-h forecast. We account for the model bias only in the 6-h surface pressure forecast, but we also hope to see a reduction of the bias in the 6-h forecast of the other variables due to the coupling between the surface pressure and the other variables through the dynamics. The initial ensemble of atmospheric states x0b(k), for k = 1, … , 60, at time t0 is obtained by sampling a time series of states from a free run with the SPEEDY model. This generation of the initial forecast ensemble ensures that the ensemble members are on the model attractor.

In the experiments where we use bias model II to account for the model bias, the initial ensemble of the bias estimates is a realization of Gaussian random noise with zero mean and 1-hPa standard deviation. Then the augmented initial ensemble z0b(k) is formed by augmenting the initial atmospheric states by the initial bias estimates. We start our data assimilation cycle at time t0 = 0000 UTC 1 January 2004 by assimilating the observations y0o into the initial ensemble z0b(k) to generate the analysis ensemble z0a(k).

c. Localization approach

In the LETKF, we have the freedom to choose any subset of the observations for the estimation of a given state variable. In a practical implementation, it is useful to define a local volume for each model grid point P and to analyze all (local) state vector components at P simultaneously, using all observations from the local volume. In our earlier studies (e.g., Szunyogh et al. 2005, 2008) we found that the computationally most efficient approach was to choose the volume of the local region to be as small as possible without degrading the accuracy of the analyses.3 We found by numerical experimentation (results not shown) that the optimal definition of the local region is 3 × 3 × 1 grid points when no adaptive bias correction is applied and 5 × 5 × 1 grid points when the state vector is augmented with the surface pressure bias-correction components.

d. Interpolation operator

The interpolation operator ĥ interpolates the model state to the observation locations in both the horizontal and vertical directions. The horizontal interpolation is implemented with a simple bilinear interpolation considering the model state vector components from the four grid points nearest to the given observational location. The vertical interpolation is somewhat more complicated because of the use of sigma as the model’s vertical coordinate: for each ensemble member, we first calculate the pressure at the sigma levels, multiplying sigma by the background surface pressure of the given ensemble member; the seven sigma levels define seven sigma layers, where the lowest layer is defined as the region between the surface (where sigma is 1) and the lowest sigma level; we find the sigma layer that contains the observation; finally we linearly interpolate using the logarithm of the pressure values at the bottom and top of the sigma layer. Since the surface pressure is part of the state vector, ĥ is nonlinear.
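The vertical part of the interpolation described above can be sketched as follows. This is a simplified Python illustration that omits the special handling of the surface layer (between sigma = 1 and the lowest sigma level); the function name and the two-level example are ours.

```python
import math

def interp_to_pressure(profile, sigma_levels, ps, p_obs):
    """Vertically interpolate a model profile given on sigma levels to an
    observation at pressure p_obs, linearly in log-pressure. profile[i]
    corresponds to sigma_levels[i], ordered from the lowest model level
    (largest sigma) upward."""
    p_levels = [s * ps for s in sigma_levels]  # pressure of each sigma level
    # find the sigma layer [p_levels[i+1], p_levels[i]] containing p_obs
    for i in range(len(p_levels) - 1):
        p_bot, p_top = p_levels[i], p_levels[i + 1]
        if p_top <= p_obs <= p_bot:
            # linear interpolation in log(p) within the layer
            w = (math.log(p_obs) - math.log(p_bot)) / \
                (math.log(p_top) - math.log(p_bot))
            return (1 - w) * profile[i] + w * profile[i + 1]
    raise ValueError("observation pressure outside the model column")

# Hypothetical two-level example: ps = 1000 hPa, sigma levels 0.9 and 0.5,
# temperature 280 K at 900 hPa and 250 K at 500 hPa, observation at 700 hPa.
t_obs = interp_to_pressure([280.0, 250.0], [0.9, 0.5], 1000.0, 700.0)
# t_obs ≈ 267.2 K
```

Because the level pressures p_levels depend on the surface pressure in the state vector, the full operator is nonlinear in the state, as noted above.
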

For the surface pressure observations, ĥ represents the horizontal interpolation of the model surface pressure to the observation locations. To obtain h for the surface pressure observation, we add the appropriate bias-correction term to the result of ĥ.

e. Verification method

To assess the effect of the surface pressure bias correction on the analysis, we compute error statistics for the 6-h forecasts that provide the background. There are two reasons to verify the background instead of the analysis. First, since our goal is to account for the bias in the background, verifying the background surface pressure provides a direct measure of our success. Second, the accuracy of the 6-h forecast is a better indicator of the quality of the analysis than the accuracy of the analysis itself, as it shows improvements only if the analysis is improved in a way that is consistent with the model dynamics. For instance, improving the analysis by increasing the projection of the initial state on free gravity waves would typically lead to a degradation of, or no change in, the quality of the 6-h forecast.

We define all error statistics in observation space; that is, we compare the interpolated 6-h forecasts to the known state u_n of the simulated atmosphere at the observation locations. Thus, the 6-h forecast error e_n^b is defined by

 
\[ \mathbf{e}_n^b = h(\mathbf{x}_n^b) - \hat{h}(\mathbf{u}_n) \]

for the wind (u, υ) and the temperature (T) components of the state vector. For the surface pressure component (p_s) of the state vector, the forecast error is defined by replacing h(x_n^b) with the proper corrected observation operator [h(x_n^b, c_n), h(x_n^b, Δz), or h(x_n^b, Δz, c_n)] for the given configuration of the LETKF. Then, at a given time t_n and vertical level l we obtain the root-mean-square (rms) of the forecast error by computing the root-mean-square of those components of e_n^b that represent the given atmospheric variable at level l. For example, the rms of the temperature forecast error is computed by

 
\[ \mathrm{rms}(T, l) = \sqrt{\frac{1}{M}\left[\mathbf{e}_n^b(T, l)\right]^{\mathrm{T}}\mathbf{e}_n^b(T, l)} \]

where the superscript T refers to the matrix transpose, M is the number of temperature observations at vertical level l, and e_n^b(T, l) is composed of those components of e_n^b that measure the temperature error at the observation locations at level l and time t_n. The time mean of the root-mean-square error is computed by

 
\[ \overline{\mathrm{rms}}(T, l) = \frac{1}{N}\sum_{n=N_0}^{N_f} \mathrm{rms}(T, l) \]

where N_0 is the index of the analysis time t_{N_0} at which we start collecting the error statistics, N_f is the index of the analysis time t_{N_f} at which we stop collecting the error statistics, and N = N_f − N_0 + 1. In the computation of the error statistics, we do not consider the results from the first 15 days (N_0 = 61) to prevent transient effects from influencing the verification results. Since we run the experiments for 3 months, N_f = 364. Likewise, the bias is computed by

 
\[ \mathrm{bias}(T, l) = \frac{1}{N}\sum_{n=N_0}^{N_f} \mathbf{e}_n^b(T, l) \]

where e_n^b(T, l, m), m = 1, … , M, are the components of e_n^b(T, l). To measure the amplitude of the bias for vertical level l, we obtain the rms value of the bias, which is computed by

 
\[ \mathrm{rms}\left[\mathrm{bias}(T, l)\right] = \sqrt{\frac{1}{M}\sum_{m=1}^{M}\left[\mathrm{bias}(T, l, m)\right]^2} \]
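
The error statistics above can be sketched compactly (a minimal illustration with our own array layout: e holds the forecast errors for one variable and one level, with one row per analysis time and one column per observation location):

```python
import numpy as np

def error_statistics(e):
    """Compute the verification statistics of the text from an (N, M) array
    e of 6-h forecast errors: N analysis times, M observations at one level.
    """
    N, M = e.shape
    rms_t = np.sqrt((e ** 2).mean(axis=1))      # rms at each analysis time t_n
    rms_time_mean = rms_t.mean()                # time mean of the rms error
    bias = e.mean(axis=0)                       # bias at each observation location
    rms_of_bias = np.sqrt((bias ** 2).mean())   # amplitude of the bias at the level
    return rms_t, rms_time_mean, bias, rms_of_bias
```

Note that the rms of the bias differs from the time-mean rms error: errors of alternating sign cancel in the former but not in the latter.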

4. Results

In this section, we compare the performance of our four different configurations of the LETKF, which were described in section 2e. First, we find, by numerical experimentation, the optimal variance inflation for the case in which no bias correction is performed. Then we use the optimal value of the variance inflation coefficient to produce a baseline of error statistics against which to compare the results of our strategies to account for the surface pressure bias.

a. The case of no bias correction

The rms error and the bias for the temperature [rms (T, l) and bias (T, l)] are shown in Figs. 2 and 3, while the same statistics for the zonal component of the wind vector [rms (u, l) and bias (u, l)] are shown in Figs. 4 and 5. All four error statistics take their minimum value where the covariance inflation factor ρ is about 0.25–0.3. This is also the range of the covariance inflation where the surface pressure bias has its minimum (Fig. 6).

Fig. 2.

The global root-mean-square error for the temperature [rms (T, l)]. The symbols for the different values of the covariance inflation coefficient are given in the figure legend.

Fig. 3.

The global bias for the temperature [bias (T, l)]. The symbols for the different values of the covariance inflation coefficient are given in the figure legend.

Fig. 4.

The global root-mean-square error for the zonal component of the wind vector [rms (u, l)]. The symbols for the different values of the covariance inflation coefficient are given in the figure legend.

Fig. 5.

The global bias for the zonal component of the wind vector [bias (u, l)]. The symbols for the different values of the covariance inflation coefficient are given in the figure legend.

Fig. 6.

The dependence of the global surface pressure bias on the covariance inflation factor.

In what follows, we use the results obtained at ρ = 0.25 as the baseline. In those configurations of the LETKF where we account for the surface pressure bias with bias model II, the covariance inflation factor is 0.08 for the bias-correction term and for the surface pressure component of the state vector, and 0.25 for the other variables. These values were determined to be optimal by numerical experiments similar to those we presented here for the no-bias-correction case.

As can be expected based on Eq. (A3), the spatial distribution of the surface pressure bias (Fig. 7) strongly resembles the spatial distribution of the difference between the model orography and the true orography (Fig. 8), with a strong anticorrelation between the two fields (the correlation is −0.76).
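
The quoted correlation is a standard Pearson pattern correlation between the two gridded fields. A minimal sketch (our own function name; note that no latitude weighting is applied here, whereas a production diagnostic would typically weight by the cosine of latitude):

```python
import numpy as np

def pattern_correlation(field_a, field_b):
    """Pearson correlation between two gridded fields (e.g., the surface
    pressure bias and the orography difference), flattened over the grid."""
    a = np.ravel(field_a) - np.mean(field_a)
    b = np.ravel(field_b) - np.mean(field_b)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```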

Fig. 7.

Spatial distribution of the surface pressure bias for covariance inflation factor ρ = 0.25.

Fig. 8.

Spatial distribution of the difference between the model orography and the “true” orography (Δz).

b. The effect of bias correction on the surface pressure forecasts

The effect of the different bias-correction strategies on the surface pressure root-mean-square error is shown in Fig. 9. While all bias-correction strategies lead to a large reduction of the surface pressure root-mean-square error, the strategy that is consistently the best is the one that combines bias model II with the hydrostatic-balance-based correction. In addition, the hydrostatic-balance-based correction is more efficient, on global average, in reducing the surface pressure error than bias model II alone.

Fig. 9.

Time evolution of the surface pressure root-mean-square error using different strategies to correct for the surface pressure bias. The strategies that produced the different curves are identified by the labels placed above the curves.

The spatial distribution of the surface pressure bias for the different bias-correction strategies suggests a simple explanation for these results. Bias model II is efficient in correcting the bias at locations where the observation density is high (e.g., over North America, Eurasia, and Australia in Fig. 10), while the hydrostatic-balance-based correction can correct a reasonably large portion of the bias at all locations where the difference between the model orography and the true orography is large (Fig. 11). Combining the two bias-correction strategies often leads to improvements that are larger than the sum of the improvements from the two approaches. An example is the region of the Himalayas (Fig. 12). All three bias-correction strategies reduce the anticorrelation between the surface pressure bias and the orography difference: for the adaptive correction the correlation is −0.56, for the hydrostatic-balance-based correction it is −0.31, and for the combined correction strategy it is −0.20. This result suggests that all three strategies reduce the part of the surface pressure bias that is directly related to the reduced-resolution representation of the orography in the model, and the one that does this most efficiently is the strategy that combines bias model II and the hydrostatic-balance-based correction.

Fig. 10.

Spatial distribution of the surface pressure bias when the bias is corrected by the adaptive bias-correction term.

Fig. 11.

Spatial distribution of the surface pressure bias when the bias is corrected based on the hydrostatic balance equation.

Fig. 12.

Spatial distribution of the surface pressure bias when the adaptive bias-correction term and the hydrostatic-balance-based correction terms are combined.

c. The effect of bias correction on the temperature and wind forecasts

The correction of the observation operator for the surface pressure model bias leads to a large improvement, not only in the surface pressure forecast, but also in the temperature and wind forecasts at higher altitudes. In particular, both the hydrostatic-balance-based correction and the adaptive correction lead to a reduction of the wind forecast bias (Fig. 13) and the root-mean-square error in the wind forecast (Fig. 14). Interestingly, the adaptive bias correction leads to much larger improvements than the hydrostatic-balance-based correction, and when the two approaches are combined the improvement is similar to that achieved by the adaptive correction alone. The situation is somewhat different for the temperature. This variable behaves similarly to the wind at and below the 300-hPa level, but at the higher model levels the adaptive bias correction increases the bias (Fig. 15) and, consequently, the root-mean-square error (not shown). This behavior of the global bias is due to problems in the tropics (Fig. 16), while the temperature in the extratropics (not shown) behaves similarly to the wind.

Fig. 13.

The global bias for the zonal component of the wind vector [bias (u, l)]. The symbols that mark the results for the different bias-correction strategies are explained in the figure legend.

Fig. 14.

The global root-mean-square error for the zonal component of the wind vector [rms (u, l)]. The symbols that mark the results for the different bias-correction strategies are explained in the figure legend.

Fig. 15.

The global bias for the temperature [bias (T, l)]. The symbols that mark the results for the different bias-correction strategies are explained in the figure legend.

Fig. 16.

The bias for the temperature [bias (T, l)] in the tropics (30°S–30°N). The symbols that mark the results for the different bias-correction strategies are explained in the figure legend.

A more careful examination of the cause of the increased temperature bias in the tropics reveals that the unusual behavior of the bias correction there is closely related to atmospheric tides. Atmospheric tides are temporal oscillations in one or more atmospheric variables whose periods are integer fractions of a solar or a lunar day. Atmospheric tides related to the solar day are primarily induced by the absorption of solar radiation by ozone in the stratosphere (Chapman and Lindzen 1970). The NCEP GFS, which has ozone as a prognostic variable, maintains both a diurnal and a semidiurnal tidal wave in the surface pressure in the tropics, but SPEEDY, which uses an empirical, seasonally varying function to define the absorption of solar radiation by ozone in the stratosphere, cannot simulate the tides (Fig. 17). This difference between the dynamics of the two models introduces a bias into the surface pressure background in the tropics.
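
Tidal signals of the kind shown in Fig. 17 can be extracted from a surface pressure time series with a discrete Fourier transform. The following minimal sketch (our own names, not the diagnostic actually used in the paper) returns the diurnal and semidiurnal amplitudes; with 6-hourly data the semidiurnal frequency lies at the Nyquist limit, so its amplitude estimate is only marginal:

```python
import numpy as np

def tidal_amplitudes(ps, samples_per_day=4):
    """Amplitudes at the diurnal (1 cycle/day) and semidiurnal (2 cycles/day)
    frequencies of a regularly sampled surface pressure time series."""
    ps = np.asarray(ps, dtype=float)
    n = ps.size
    amp = np.abs(np.fft.rfft(ps - ps.mean())) * 2.0 / n   # one-sided amplitudes
    freq = np.fft.rfftfreq(n, d=1.0 / samples_per_day)    # in cycles per day
    diurnal = amp[np.argmin(np.abs(freq - 1.0))]
    semidiurnal = amp[np.argmin(np.abs(freq - 2.0))]
    return diurnal, semidiurnal
```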

Fig. 17.

Fourier spectrum of the surface pressure at the equator at 160°W in the free runs with the NCEP GFS (red line) and the SPEEDY model (black line).

The diurnal oscillation is sufficiently slow to be captured, with a reduced amplitude, by our bias model. Since bias model II forces the state estimate closer to the model attractor, the assimilation using this bias model dampens the diurnal tide in the analysis more than the assimilation that does not use adaptive bias correction (Fig. 18). This allows the background to stay closer to the model attractor in the entire atmospheric model column, which results in a smaller correction of the upper-level temperature in the direction of the observation.

Fig. 18.

Fourier spectrum of the surface pressure analysis at the equator at 160°W using no bias-correction (blue line) and adaptive bias correction (black line). For reference, the same spectrum is also shown for the free run with the NCEP GFS (red line).

This effect is illustrated by Fig. 19, which we obtain by computing the difference between the mean temperature in a free run with the SPEEDY model and a free run with the NCEP GFS for the 3-month period we investigate in this study. (To be precise, the free run with the NCEP GFS is the run that simulates the true evolution of the atmosphere, while the free run with SPEEDY is started from the analysis at 0000 UTC 1 January 2004, which provided the initial condition for the NCEP GFS model run.) In essence, this field, which is shown by contours in Fig. 19, is a representation of the difference between the attractors of the two models. We also show in the same figure, by color shades, the 6-h forecast bias for the experiment that uses bias model II to correct for the surface pressure model bias. We find a close correspondence between the shapes of the two fields. Most importantly, in the upper troposphere in the tropics, both fields indicate a strong positive temperature bias. This supports our statement that bias model II leads to a larger temperature error in that region, because it allows the analysis, and the ensuing model forecast, to shift in the direction of the model attractor.

Fig. 19.

The vertical-zonal cross section of the temperature bias for the configuration using only bias model II (color shades) and of the time-mean difference in the temperature for a 90-day free run with the SPEEDY model and the true states (contours).

5. Conclusions

In this study, we evaluated the performance of bias model II, introduced in Baek et al. (2006), to account for the model bias in an ensemble-based data assimilation scheme. We carried out experiments in an idealized setting and focused on accounting for the bias in one particular model state variable, the surface pressure. This variable was chosen because it represents the scenario we envisioned when we derived bias model II: correction of the forecast state for the bias would move the state estimate farther away from the model attractor. Applying bias model II to a single model variable required a slight modification of the bias-correction formula: while in the original formula of Baek et al. (2006) the bias correction was applied before mapping the model state to observation space with the observation operator, here we applied the correction in observation space to the image of the observation operator. The key findings of the paper are the following:

  • Bias model II is an efficient algorithm to account for the model bias in well-observed regions of the atmosphere.

  • In regions of sparse observations, a bias-correction term derived from physical considerations can be expected to outperform bias model II. Combining the adaptive bias model II and the physics-based correction provides the best overall result.

  • Correction of the surface pressure bias with bias model II leads to improvements of the wind and temperature forecasts that provide the background. The only exception to this statement is the temperature in the upper troposphere in the tropics.

Based on our findings, we expect that implementing a similar correction of the surface pressure bias in an ensemble-based data assimilation system for the assimilation of observations of the real atmosphere would lead to improvements of the analyses and short-term forecasts.

Acknowledgments

This work was supported by NSF Grant ATM0722721 and the NASA Mars Fundamental Research Program Grant NNX07AV45G.

REFERENCES

Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129, 2884–2903.

Baek, S. J., B. R. Hunt, E. Kalnay, E. Ott, and I. Szunyogh, 2006: Local ensemble Kalman filtering in the presence of model bias. Tellus, 58A, 293–306.

Bell, M. J., M. J. Martin, and N. K. Nichols, 2004: Assimilation of data into an ocean model with systematic errors near the equator. Quart. J. Roy. Meteor. Soc., 130, 873–893.

Bishop, C. H., B. J. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Mon. Wea. Rev., 129, 420–436.

Burgers, G., P. J. van Leeuwen, and G. Evensen, 1998: Analysis scheme in the ensemble Kalman filter. Mon. Wea. Rev., 126, 1719–1724.

Carton, J. A., G. Chepurin, and X. Cao, 2000: A Simple Ocean Data Assimilation analysis of the global upper ocean 1950–95. Part I: Methodology. J. Phys. Oceanogr., 30, 294–309.

Chapman, S., and R. S. Lindzen, 1970: Atmospheric Tides. Gordon and Breach, 200 pp.

Chepurin, G. A., J. A. Carton, and D. Dee, 2005: Forecast model bias correction in ocean data assimilation. Mon. Wea. Rev., 133, 1328–1342.

Dee, D. P., and A. M. Da Silva, 1998: Data assimilation in the presence of forecast bias. Quart. J. Roy. Meteor. Soc., 124, 269–295.

Dee, D. P., and R. Todling, 2000: Data assimilation in the presence of forecast bias: The GEOS moisture analysis. Mon. Wea. Rev., 128, 3268–3282.

Derber, J. C., 1989: A variational continuous assimilation technique. Mon. Wea. Rev., 117, 2437–2446.

Drécourt, J., H. Madsen, and D. Rosbjerg, 2006: Bias aware Kalman filters: Comparison and improvements. Adv. Water Resour., 29, 707–718.

Friedland, B., 1969: Treatment of bias in recursive filtering. IEEE Trans. Automat. Contr., 14, 359–367.

Hamill, T. M., J. S. Whitaker, and C. Snyder, 2001: Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter. Mon. Wea. Rev., 129, 2776–2790.

Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev., 126, 796–811.

Houtekamer, P. L., and H. L. Mitchell, 2001: A sequential ensemble Kalman filter for atmospheric data assimilation. Mon. Wea. Rev., 129, 123–137.

Hunt, B. R., E. J. Kostelich, and I. Szunyogh, 2007: Efficient data assimilation for spatiotemporal chaos: A local ensemble transform Kalman filter. Physica D, 230, 112–126.

Keppenne, C. L., M. M. Rienecker, N. P. Kurkowski, and D. A. Adamec, 2005: Ensemble Kalman filter assimilation of temperature and altimeter data with bias correction and application to seasonal prediction. Nonlinear Processes Geophys., 12, 491–503.

Lamarque, J. F., and Coauthors, 2004: Application of a bias estimator for the improved assimilation of Measurements of Pollution in the Troposphere (MOPITT) carbon monoxide retrievals. J. Geophys. Res., 109, D16304, doi:10.1029/2003JD004466.

Lorenz, E. N., and K. A. Emanuel, 1998: Optimal sites for supplementary weather observations: Simulation with a small model. J. Atmos. Sci., 55, 399–414.

Martin, M. J., M. J. Bell, and N. K. Nichols, 2002: Estimation of systematic error in an equatorial ocean model using data assimilation. Int. J. Numer. Methods Fluids, 40, 435–444.

Ott, E., and Coauthors, 2004: A local ensemble Kalman filter for atmospheric data assimilation. Tellus, 56A, 415–428.

Szunyogh, I., E. J. Kostelich, G. Gyarmati, D. J. Patil, and B. R. Hunt, 2005: Assessing a local ensemble Kalman filter: Perfect model experiments with the NCEP global model. Tellus, 57A, 528–545.

Szunyogh, I., E. J. Kostelich, G. Gyarmati, E. Kalnay, B. R. Hunt, E. Ott, and J. A. Yorke, 2008: A local ensemble transform Kalman filter data assimilation system for the NCEP global model. Tellus, 60A, 113–130.

Toth, Z., and M. Peña, 2007: Data assimilation and numerical forecasting with imperfect models: The mapping paradigm. Physica D, 230, 146–158.

Whitaker, J. S., and T. M. Hamill, 2002: Ensemble data assimilation without perturbed observations. Mon. Wea. Rev., 130, 1913–1924.

Whitaker, J. S., G. P. Compo, X. Wei, and T. M. Hamill, 2004: Reanalysis without radiosondes using ensemble data assimilation. Mon. Wea. Rev., 132, 1190–1200.

Whitaker, J. S., T. M. Hamill, X. Wei, Y. Song, and Z. Toth, 2008: Ensemble data assimilation with the NCEP Global Forecast System. Mon. Wea. Rev., 136, 463–482.

Zupanski, D., and M. Zupanski, 2006: Model error estimation employing an ensemble data assimilation approach. Mon. Wea. Rev., 134, 1337–1354.

Zupanski, M., 2005: Maximum likelihood ensemble filter: Theoretical aspects. Mon. Wea. Rev., 133, 1710–1726.

APPENDIX

Hydrostatic-Balance-Based Modification of the Observation Operator

In principle, Eq. (14) provides a straightforward definition of the observation operator for pressure:

 
\[ h(\mathbf{x}_n^b) = \hat{h}(\mathbf{x}_n^b)\exp\!\left(\frac{g\,\Delta z}{R\,\bar{T}}\right) \tag{A1} \]

which can also be written as

 
\[ h(\mathbf{x}_n^b) = \hat{h}(\mathbf{x}_n^b)\left[1 + \frac{g\,\Delta z}{R\,\bar{T}} + \frac{1}{2}\left(\frac{g\,\Delta z}{R\,\bar{T}}\right)^2 + \cdots\right] \tag{A2} \]

using the Taylor series expansion of the exponential function. We assimilate only those observations for which the difference h(x_n^b) − ĥ(x_n^b) is not larger than 10 hPa [which is about 1% of the value of ĥ(x_n^b)]. Thus, it is sufficient to retain only the first-order expansion term on the right-hand side of Eq. (A2), and the observation operator defined by Eq. (A1) is approximately the same as Eq. (13) if we choose the bias-correction factor to be a constant:

 
formula

The main difficulty with using Eq. (A1) or Eq. (13) combined with Eq. (A3) is that the mean temperature T̄ of the hypothetical atmospheric layer is unknown. In practice, T̄ is typically computed from the climatological value of the lapse rate, γ = −∂T/∂z, assuming that the temperature decreases linearly in the |Δz|-deep layer:

 
\[ \bar{T} = T_s - \frac{\gamma\,|\Delta z|}{2} \tag{A4} \]

Here, T_s is the temperature near the model surface. When T_s is not available directly from the model, it can be estimated by extrapolation from the lowest model level: using the hydrostatic balance equation and assuming that the temperature profile between the lowest model level and the surface is linear with lapse rate γ_1, the surface temperature can be computed as

 
\[ T_s = T_1\,\sigma_1^{-R\gamma_1/g} \tag{A5} \]

where σ_1 and T_1 are the values of sigma and the temperature, respectively, at the lowest model level. In this paper we assume that γ = γ_1 = 6.5 × 10^−3 K m^−1. While an ensemble-based data assimilation scheme can be expected to correctly account for the uncertainty in the exponential correction factor due to the uncertainty in T_1, or in T_s (if it is directly available from the model), the uncertainties in the lapse rates γ and γ_1 and in the temperature profiles can be accounted for by a different approach (e.g., by an inflation of the error variance for the surface pressure observations).
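
The hydrostatic-balance-based correction described in this appendix can be sketched as follows. This is a hedged illustration with our own function names; the sign convention (Δz taken as model orography minus true orography, with a positive Δz increasing the corrected pressure) is our assumption, and the constants are the usual dry-air values:

```python
import numpy as np

G = 9.81        # gravitational acceleration (m s^-2)
R = 287.0       # gas constant for dry air (J kg^-1 K^-1)
GAMMA = 6.5e-3  # assumed lapse rate (K m^-1); gamma = gamma_1 in the text

def surface_temperature(T1, sigma1, gamma1=GAMMA):
    """Extrapolate the lowest-model-level temperature T1 (at sigma level
    sigma1) to the surface, assuming hydrostatic balance and a constant
    lapse rate gamma1 below the lowest model level."""
    return T1 * sigma1 ** (-R * gamma1 / G)

def corrected_surface_pressure(ps_interp, T1, sigma1, delta_z, gamma=GAMMA):
    """Apply the exponential hydrostatic correction to the horizontally
    interpolated model surface pressure ps_interp, given the orography
    difference delta_z and the layer-mean temperature of the text."""
    Ts = surface_temperature(T1, sigma1)
    T_mean = Ts - 0.5 * gamma * abs(delta_z)   # mean temperature of the layer
    return ps_interp * np.exp(G * delta_z / (R * T_mean))
```

Retaining only the first-order term of the exponential reproduces the linear-in-Δz correction discussed in the text.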

Footnotes

* Current affiliation: Meteorological Research Division, Environment Canada, Dorval, Quebec, Canada.

+ Current affiliation: Department of Atmospheric Sciences, Texas A&M University, College Station, Texas.

Corresponding author address: Istvan Szunyogh, Department of Atmospheric Sciences, Texas A&M University, 3150 TAMU, College Station, TX 77843-3150. Email: szunyogh@ariel.met.tamu.edu

This article is included in the Mathematical Advances in Data Assimilation (MADA) special collection.

1

For a state-of-the-art numerical weather prediction system, the number of model variables is ~O(10^8), while the number of assimilated observations is ~O(10^6–10^7).

2

Notice that while the components of the local state vector x are defined by the L components of the state vector x at grid point ℓ, the M components of y are defined by the interpolated state at the location of the observations that are selected for assimilation at grid point ℓ.

3

A small region reduces the computational cost both directly (by reducing the dimensionality of the matrix calculations) and indirectly (by reducing the number of ensemble members required to represent the uncertainties in the local region).