Abstract

The purpose of the present study is to explore the feasibility of estimating and correcting systematic model errors using a simple and efficient procedure, inspired by papers by Leith as well as DelSole and Hou, that could be applied operationally, and to compare the impact of correcting the model integration with statistical corrections performed a posteriori. An elementary data assimilation scheme (Newtonian relaxation) is used to compare two simple but realistic global models, one quasigeostrophic and one based on the primitive equations, to the NCEP reanalysis (approximating the real atmosphere). The 6-h analysis corrections are separated into the model bias (obtained by time averaging the errors over several years), the periodic (seasonal and diurnal) component of the errors, and the nonperiodic errors. An estimate of the systematic component of the nonperiodic errors linearly dependent on the anomalous state is generated.

Forecasts corrected during model integration with a seasonally dependent estimate of the bias remain useful longer than forecasts corrected a posteriori. The diurnal correction (based on the leading EOFs of the analysis corrections) is also successful. State-dependent corrections using the full-dimensional Leith scheme and several years of training actually make the forecasts worse due to sampling errors in the estimation of the covariance. A sparse approximation of the Leith covariance is derived using univariate and spatially localized covariances. The sparse Leith covariance results in small regional improvements, but is still computationally prohibitive. Finally, singular value decomposition is used to obtain the coupled components of the correction and forecast anomalies during the training period. The corresponding heterogeneous correlation maps are used to estimate and correct by regression the state-dependent errors during the model integration. Although the global impact of this computationally efficient method is small, it succeeds in reducing state-dependent model systematic errors in regions where they are large. The method requires only a time series of analysis corrections to estimate the error covariance and uses negligible additional computation during a forecast. As a result, it should be suitable for operational use at relatively small computational expense.

1. Motivation

Numerical weather forecasting errors grow with time as a result of two contributing factors. First, atmospheric instabilities amplify uncertainties in the initial conditions, causing indistinguishable states of the atmosphere to diverge rapidly on small scales. This phenomenon is known as internal error growth. Second, model deficiencies introduce errors during the model integration leading to external error growth. These deficiencies include inaccurate forcings and parameterizations used to represent the effect of subgrid-scale physical processes as well as approximations in numerical differentiation and integration, and result in large-scale systematic forecast errors. Current efforts to tackle internal error growth focus on improving the estimate of the state of the atmosphere through assimilation of observations and ensemble forecasting (Anderson 2001; Whitaker and Hamill 2002; Ott et al. 2004; Hunt et al. 2004). Ideally, model deficiencies should be addressed by generating more accurate approximations of the forcing, improving the physical parameterizations, or by increasing the grid density to resolve smaller-scale processes. However, unresolved phenomena and model errors will be present no matter how accurate the parameterizations are, no matter how fine the grid resolution becomes. As a result, it is important to develop empirical algorithms to correct forecasts to account for model errors. Empirical methods that consider the model a “black box” are particularly valuable because they are independent of the model. As the methods of data assimilation and generation of initial perturbations become more sophisticated and reduce the internal error, the impact of model deficiencies and their dependence on the “flow of the day” become relatively more important (Hamill and Snyder 2000; Houtekamer and Mitchell 2001; Kalnay 2003).

Estimates of the systematic model error may be derived empirically using the statistics of the short-term forecast errors, measured relative to a reference time series. For example, the mean short-term forecast error provides a sample estimate of the stationary component of the model error bias. The output of operational numerical weather prediction models is typically postprocessed to account for any such known biases in the forecast field by model output statistics (MOS; Glahn and Lowry 1972; Carter et al. 1989). However, offline bias correction has no dynamic effect on the forecast; internal and external errors are permitted to interact nonlinearly throughout the integration as they grow and eventually saturate. A more robust approach to error correction is to estimate the short-term forecast errors as a function of the model state. A corresponding state-dependent correction would then be made at every time step of the model integration to retard growth in the component of the error generated by the model deficiencies. Several studies have produced promising results by empirical correction in simulations using simple global circulation models (GCMs) with artificial model errors.

Leith (1978) proposed a statistical method to account for model bias and systematic errors linearly dependent on the flow anomalies. Leith derived a state-dependent empirical correction to a simple dynamical model by minimizing the tendency errors relative to a reference time series. Leith’s correction operator attempts to predict the error in the model tendency as a function of the model state. While Leith’s empirically estimated state-dependent correction term is only optimal for a linear model, it is shown to reduce the nonlinear model’s bias. However, the technique is subject to sampling errors and requires many orders of magnitude more computation time during the forecast than the biased model integration alone. The method is discussed in detail in section 6.

Faller and Schemm (1977) used a similar technique on coarse- and fine-grid versions of a modified Burgers equation model. Statistical correction of the coarse-grid model by multiple regression to parameterize the effects of subgrid-scale processes improved forecast skill. However, the model equations were found to be insensitive to small perturbations of the initial conditions. They concluded that the coarse-grid errors were due entirely to truncation and that the procedure was sensitive to sampling errors. Schemm et al. (1981) introduced two procedures for statistical correction of numerical predictions when verification data are only available at discrete times. Time interpolation was found to introduce errors into the regression equations, rendering the procedure useless. Applying corrections only when verification data were available, they were successful in correcting artificial model errors, but the procedure failed on the National Meteorological Center (NMC) barotropic-mesh model. Later, Schemm and Faller (1986) dramatically reduced the small-scale 12-h errors of the NMC model. Errors at the larger scales grew due to randomization of the residual errors by the regression equations.

Klinker and Sardeshmukh (1992) used January 1987 6-h model integrations to estimate the state-independent tendency error in operational European Centre for Medium-Range Weather Forecasts (ECMWF) forecasts. By switching off each individual parameterization, they isolated the contribution to the error of each term. They found that the model’s gravity wave parameterization dominated the 1-day forecast error. Saha (1992) used a simple Newtonian relaxation or nudging of a low-resolution version of the NMC operational forecast model to estimate systematic errors. Verifying against the high-resolution model, Saha was able to reduce systematic errors in independent forecasts by adding artificial sources and sinks to correct errors in heat, momentum, and mass. Nudging and a posteriori correction were seen to give equivalent forecast improvements.

By nudging of several low-resolution GCMs toward a high-resolution model, Kaas et al. (1999) estimated empirical orthogonal functions (EOFs) for horizontal diffusion. They found that the kinetic energy dissipation due to unresolved scales varied strongly with model resolution. The EOF corrections were most effective in reducing the climatological errors of the model whose resolution was closest to that of the high-resolution model. D’Andrea and Vautard (2000) estimated the time-derivative errors of the three-level global quasigeostrophic (QG) model of Marshall and Molteni (1993) by finding the model forcing that minimized the 6-h forecast errors relative to a reference time series. They derived a flow-dependent empirical parameterization from the mean tendency error corresponding to the closest analogues in the reference time series. The subsequent corrected forecasts exhibited improved climate statistics in the Euro–Atlantic region, but not in others.

DelSole and Hou (1999) perturbed the parameters of a two-layer QG model on an 8 × 10 grid (Ngp = 160 degrees of freedom) to generate a “nature” run and then modified it to create a “model” containing a primarily state-dependent error. They found that a state-independent error correction did not improve the forecast skill. By adding a state-dependent empirical correction to the model, inspired by the procedure proposed by Leith, they were able to extend forecast skill up to the limits imposed by observation error. However, Leith’s technique requires the solution of an Ngp-dimensional linear system. As a result, before the procedure can be considered for operational use, a low-dimensional representation of Leith’s empirical correction operator is required.

Renwick and Wallace (1995) used several low-dimensional techniques described by Bretherton et al. (1992) to identify predictable anomaly patterns in 14 winters of Northern Hemisphere 500-mb height fields. The most predictable anomaly pattern in ECMWF operational model forecasts was found to be similar to the leading EOF of the analyzed 500-mb height anomaly field. Applying canonical correlation analysis to the dependent sample (first seven winters), they found the amplitude of the leading pattern to be well predicted and showed the forecast skill to increase with the amplitude of the leading pattern. The forecast skill of the independent sample (second seven winters) was not well related to the patterns derived from the dependent sample. A posteriori statistical correction of independent sample forecasts slightly decreased RMS errors, but damped forecast amplitude considerably. They concluded that continuing model improvements should provide better results than statistical correction and skill prediction in an operational setting.

Ferranti et al. (2002) used singular value decomposition (SVD) (Golub and Van Loan 1996) analysis to identify the relationship between fluctuations in the North Atlantic Oscillation and ECMWF operational forecasts errors in 500-hPa height for seven winters in the 1990s. They found that the anomalous westerly (easterly) flow over the eastern North Atlantic (western Europe) was weakened by a consistent underestimation of the magnitude of pressure anomalies over Iceland. Large (small) error amplitudes were seen to be located in regions of the maximum westerly (easterly) wind anomaly; the trend was reversed on the flanks of the jet. The flow-dependent component of the errors accounted for 10% of the total error variance.

The purpose of the present study is to explore the feasibility of estimating and correcting systematic model errors using a simple and efficient procedure that could be applied operationally. The monthly, diurnal, and state-dependent components of the short-term forecast errors are estimated for two simple but realistic GCMs using the National Centers for Environmental Prediction (NCEP) reanalysis as truth. Section 2 describes the two GCMs used for the numerical experiments. Section 3 describes the simple method of data assimilation used to generate a time series of model forecasts and the technique used to estimate the corresponding systematic errors. Section 4 illustrates the substantial forecast improvement resulting from state-independent correction of monthly model forcing when verifying against independent data. Section 5 describes attempts to generate full-dimensional and low-order empirical estimates of model error as a function of the model state, using Leith’s method and a new computationally inexpensive approach based on SVD. The paper concludes with a discussion of implications for operational use and future directions of research.

2. Global circulation models

a. The quasigeostrophic model

The first model used in this study was developed by Marshall and Molteni (1993); it has been used for many climate studies (e.g., D’Andrea and Vautard 2000). The model is based on spherical harmonics, with triangular truncation at wavenumber 21. The QG model has three vertical levels (800, 500, and 200 hPa) and integrates the quasigeostrophic potential vorticity equation with dissipation and forcing:

 
∂q/∂t = −J(ψ, q) − D(ψ) + S,   (1)

where ψ is the streamfunction and q is the potential vorticity (q ≈ ∇²ψ); J represents the Jacobian operator of ψ and q. The linear dissipation D is dependent on ψ and orography, and includes a relaxation coupling the three vertical levels. The forcing term S is time independent but varies spatially, representing the average effects of diabatic heating and advection by the divergent flow. This forcing is determined by requiring that the time-averaged values of the other terms in (1) are zero. In other words, the forcing is defined so that the vorticity tendency is zero for the climatology (given by the mean NCEP reanalysis streamfunction during January and February from 1980 to 1990; the model thus simulates a perpetual winter). If the climatological streamfunction and vorticity are denoted as ψ̂ and q̂, the time average of (1) can be written

 
S = J(ψ̂, q̂) + D(ψ̂) + 〈J(ψ′, q′)〉,   (2)

where the angle brackets denote time averages and primes represent deviations from this time average. The first two terms in (2) generate the mean state; the last term adds the average contribution of transient eddies (D’Andrea and Vautard 2000).

b. The SPEEDY model

The primitive equation model used in this study [known as SPEEDY (simplified parameterizations, primitive equation dynamics); Molteni 2003] has triangular truncation T30 at seven sigma levels (0.950, 0.835, 0.685, 0.510, 0.340, 0.200, and 0.080). The basic prognostic variables are vorticity (ζ), divergence (∇), absolute temperature (T), specific humidity (Q), and the logarithm of surface pressure [log(ps)]. These variables are postprocessed into zonal and meridional wind (u, υ), geopotential height (Z), T, Q, and log(ps) at pressure levels (925, 850, 700, 500, 300, 200, and 100 hPa). The model dissipation and time-dependent forcing are determined by climatological fields of sea surface temperature (SST), surface temperature, and moisture in the top soil layer (about 10 cm), snow depth, bare-surface albedo, and fractions of sea ice, land–sea, and land surface covered by vegetation. The model contains parameterizations of large-scale condensation, convection, clouds, shortwave and longwave radiation, surface fluxes, and vertical diffusion (Molteni 2003). No diurnal variation exists in the model forcing; forcing fields are updated daily.

Despite the approximations made in deriving each model, they produce realistic simulations of extratropical variability, especially in the Northern Hemisphere (Marshall and Molteni 1993; Molteni 2003). The SPEEDY model also provides a more realistic simulation of the Tropics, as well as the seasonal cycle. Since the model forcings (including SST) are determined by the climatology, one cannot expect realistic simulations of interannual variability. More advanced GCMs include not only observed SST but also changes in greenhouse gases and aerosols, as well as more advanced physical parameterizations. Despite the absence of variable forcing, if run for a long period of time (decades), both models reproduce a realistic climatology. While they were designed for climate simulations, each model produces forecasts that remain useful globally for about 2 days.

3. Training

A pair of simple schemes was used to estimate model errors. The schemes are advantageous in that they provide estimates of model errors at the analysis time, when they are still small and growing linearly, and because they can be carried out at the cost of essentially one model integration. The first procedure is inspired by Leith (1978), who integrated “true” initial conditions for 6 h to measure the difference between the forecast and the verifying analysis. A schematic illustrating the procedure, hereafter referred to as direct insertion, is shown in Fig. 1.

Fig. 1.

Schematic illustrating the direct insertion procedure for generating time series of model forecasts and analysis corrections. In the figure, xt(t) is the NCEP reanalysis at time t and is used as an estimate of the truth; xf6(t + 1) is the 6-h forecast generated from the initial condition xt(t); and δxa6(t + 1) = xt(t + 1) − xf6(t + 1) is the 6-h error correction, or analysis increment in an operational setting.


Writing x(t) for the GCM state vector at step t and M[x(t)] for the model tendency at step t, the model tendency equation is given by

 
dx/dt = M[x(t)].   (3)

The analysis correction at step t is given by the difference between the truth xt(t) and the model forecast state xfh(t), namely,

 
δxah(t) = xt(t) − xfh(t),   (4)

where h is the forecast lead time, typically h = 6 h.
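To make the procedure concrete, the following is a minimal sketch of direct insertion in Python/NumPy. The names model_step, reanalysis, and steps_per_6h are hypothetical; a real implementation must also handle the model's spectral transforms and variable layout.

```python
import numpy as np

def direct_insertion(model_step, reanalysis, steps_per_6h):
    """Generate a 6-h forecast from each reanalysis state and collect
    the analysis corrections of (4): dxa6(t+1) = xt(t+1) - xf6(t+1).

    model_step   : function advancing a state vector by one time step
    reanalysis   : array (T, Ngp) of reanalysis states every 6 h
    steps_per_6h : number of model time steps per 6-h interval
    """
    T, N = reanalysis.shape
    corrections = np.empty((T - 1, N))
    for t in range(T - 1):
        x = reanalysis[t].copy()          # "true" initial condition
        for _ in range(steps_per_6h):     # integrate forward 6 h
            x = model_step(x)
        corrections[t] = reanalysis[t + 1] - x
    return corrections
```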

The second (alternative) procedure for estimating model errors is Newtonian relaxation or nudging (Hoke and Anthes 1976; Leith 1991; Saha 1992), done by adding an additional forcing term to relax the model state toward the reference time series. When reference data are available (every 6 h), the tendency equation during nudging is given by

 
dx/dt = M[x(t)] + [xt(t) − x(t)]/τ.   (5)

At intermediate time steps, when data are unavailable, the tendency is given by (3). If the relaxation time scale τ is too large, model errors will grow before the relaxation term can respond (Kalnay 2003, p. 141). If τ is chosen too small, the relaxation term dominates and the integration will diverge. Figure 2 shows that the sensitivity of the assimilation error to τ for the QG and the SPEEDY models is similar, and that the optimal time scale is τ = 6 h, corresponding to the interval h between available analyses. This choice for τ generates analysis corrections whose statistical properties (e.g., mean, variance, EOFs) are qualitatively very similar to those obtained through direct insertion. As a result, for the remainder of the paper we consider only time series generated by direct insertion.
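A minimal sketch of the nudged tendency, under the same hypothetical interface as above (M is the model tendency function; x_ref is the reference state when one is available at the current step, None otherwise):

```python
def nudged_tendency(M, x, x_ref, tau):
    """Newtonian relaxation toward the reference time series, cf. (5)."""
    if x_ref is None:                   # between analyses, (3) applies
        return M(x)
    return M(x) + (x_ref - x) / tau     # relax toward the reference
```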

Fig. 2.

Mean RMS error at 500 hPa as a function of relaxation time scale τ (relative to the interval h between observations of the reanalysis), verifying against reanalysis during relaxation. As expected, the optimal τ is equal to h for both the QG and SPEEDY models. However, nudging is successful for longer relaxation times as well.


The reference time series used to estimate model errors is given by the NCEP reanalysis. NCEP reanalysis values of model prognostic variables are available at 6-h intervals; they are interpolated to the model grid and denoted at step t by xt(t). Observations of the reanalysis are taken as truth with no added noise or sparsity; observational noise is the focus of much research in data assimilation (e.g., Ott et al. 2004), but its influence is ignored in this context since the reanalysis is already an approximation of the evolution of the atmosphere. Direct insertion is performed with the QG model by integrating NCEP reanalysis wintertime vorticity for the years between 1980 and 1990. The SPEEDY model is integrated using NCEP reanalysis values of ζ, ∇, T, Q, and log(ps) for the years between 1982 and 1986. A longer time period was used to train the QG model because it has an order of magnitude fewer degrees of freedom than the SPEEDY model.

The time series of analysis corrections is separated by month and denoted {δxa6(t), t = 1, . . . , Nref} [Nref = 31 × 4 × 5 (days × 6-h intervals × years) for January training of SPEEDY]. The time average of the analysis corrections (bias) is given by 〈δxa6〉 = (1/Nref) Σt δxa6(t), and 〈xt〉 = (1/Nref) Σt xt(t) is the 5-yr reanalysis climatology for the month in which steps t = 1, . . . , Nref occur. The method of direct insertion is also used to generate 〈δxah〉 for h = 6j (j = 2, 3, . . . , 8), giving 12-h, 18-h, . . . , and 48-h mean bias estimates. These estimates will be used to make an a posteriori bias correction. The reanalysis states, model forecasts, and corresponding analysis corrections are then separated into their anomalous and time-average components, namely,

 
xt(t) = 〈xt〉 + xt′(t),   (6)
 
xf6(t) = 〈xf6〉 + xf6′(t),   (7)
 
δxa6(t) = 〈δxa6〉 + δxa6′(t).   (8)
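In code, the decomposition (6)–(8) amounts to removing time means over the month-stratified training sample. A sketch, assuming corrections and states are hypothetical (T, Ngp) arrays for a given month:

```python
import numpy as np

bias = corrections.mean(axis=0)      # <dxa6>, the monthly model bias
clim = states.mean(axis=0)           # <xt>, the reanalysis climatology
corr_anom  = corrections - bias      # dxa6'(t), cf. (8)
state_anom = states - clim           # xt'(t), cf. (6)
```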

Figure 3 illustrates the bias calculated from 5 yr of 6-h SPEEDY forecasts of u, T, and Q for January and July. These state-independent errors are clearly associated with contrasts in land–sea forcing, topographic forcing, and the position of the jet. The zonal wind and temperature exhibit a large polar bias, especially in the winter hemisphere. The 6-h zonal wind errors show an underestimation of the westerly jets of 2–5 m s−1 east of the Himalayan mountain range (January) and east of the Australian Alps (July), especially on the poleward side. The mean temperature error over Greenland is larger during the Northern Hemisphere winter. There is little humidity bias in the polar regions, most likely due to the lack of moisture variability near the poles. The SPEEDY convection parameterization evidently transports too little moisture from lower levels (which are too moist) to upper levels (which are too dry). The following section describes attempts to correct the model forcing to account for this bias.

Fig. 3.

Mean 6-h analysis correction 〈δxa6〉 (shades) and 5-yr reanalysis climatology 〈xt〉 (contours) in SPEEDY forecasts of zonal velocity u (m s−1), temperature T (K), and specific humidity Q (g kg−1) at two levels during (left) January and (right) July from 1982 to 1986.


4. State-independent correction

a. Monthly bias correction

In this section, the impact of correcting for the bias of the model during the model integration is compared with a correction made a posteriori, as done, for example, in MOS. In both cases the impact of the corrections on 5-day forecasts is verified using periods independent of the training periods. The initial conditions for the QG forecasts are taken from the wintertime NCEP reanalysis between 1991 and 2000; those for the SPEEDY forecasts are taken from the NCEP reanalysis for 1987.

The control forecast is started from reanalysis initial conditions and integrated with the original biased forcing M(x). The forecast corrected a posteriori is generated by computing xf6(1) + 〈δxa6〉 at step 1, xf12(2) + 〈δxa12〉 at step 2, . . . , xf48(8) + 〈δxa48〉 at step 8, etc. The corrections in u, υ, T, Q, and log(ps) at all levels are obtained from the training period for each month of the year, and attributed to day 15 of each month. The correction is a daily interpolation of the monthly mean analysis correction; for example, on 1 February, the time-dependent 6-h bias correction is of the form

 
〈δxa6〉(1 Feb) = (14/31)〈δxa6〉Jan + (17/31)〈δxa6〉Feb,   (9)

so that the corrections are temporally smooth.
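The interpolation of (9) can be sketched as follows. Here monthly_bias is a hypothetical mapping from month to mean 6-h correction field, and the weights follow from attributing each monthly mean to day 15 of its month:

```python
def interpolated_bias(monthly_bias, m_prev, m_next, days_past, days_total):
    """Linear time interpolation of monthly mean corrections, cf. (9).
    days_past  : days elapsed since day 15 of month m_prev
    days_total : days between day 15 of m_prev and day 15 of m_next
    """
    w = days_past / days_total
    return (1.0 - w) * monthly_bias[m_prev] + w * monthly_bias[m_next]

# 1 February: 17 of the 31 days from 15 Jan to 15 Feb have elapsed,
# giving (14/31)*<dxa6>_Jan + (17/31)*<dxa6>_Feb as in (9).
correction = interpolated_bias(monthly_bias, 'Jan', 'Feb', 17, 31)
```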

An online corrected or debiased model forecast is generated with the same initial condition, but with a corrected model forcing M+. The tendency equation for the debiased model forecast is given by

 
dx/dt = M+[x(t)] = M[x(t)] + 〈δxa6〉/(6 h),   (10)

where the bias correction is divided by 6 h because it was computed for 6-h forecasts but it is applied every time step. The skill of each forecast is measured by the anomaly correlation (AC), given at time t by

 
AC(t) = Σs cos(ϕs) xf′s(t) xt′s(t) / √{[Σs cos(ϕs) xf′s(t)²][Σs cos(ϕs) xt′s(t)²]},   (11)

where ϕs is the latitude of grid point s and Ngp is the number of degrees of freedom in the model state vector. The AC is essentially the inner product of the forecast anomaly and the reanalysis anomaly, with each gridpoint contribution weighted by the cosine of its latitude and normalized so that a perfect forecast has an AC of 1. It is common to consider that the forecast remains useful if AC > 0.6 (Kalnay 2003, p. 27).
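A sketch of (11) for a single field, assuming flattened (Ngp,) anomaly arrays and gridpoint latitudes in radians:

```python
import numpy as np

def anomaly_correlation(fcst_anom, true_anom, lat):
    """Cosine-of-latitude weighted anomaly correlation, cf. (11)."""
    w = np.cos(lat)
    num = np.sum(w * fcst_anom * true_anom)
    den = np.sqrt(np.sum(w * fcst_anom**2) * np.sum(w * true_anom**2))
    return num / den
```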

Figure 4 illustrates the success of the bias correction for the QG model. Both the a posteriori and the online correction of the bias significantly increase the forecast skill. However, the improvement obtained with the online correction is larger than that obtained with the a posteriori correction, indicating that the correction made during the model integration reduces the model error growth. Applying the bias correction every 6 h for a single time step gave slightly worse results than applying it every time step.

Fig. 4.

QG model forecasts, verified against the 1991–2000 NCEP reanalysis, remain useful (AC > 0.6) for approximately 2 days. When the same forecast is postprocessed to remove the bias fields 〈δxa6〉, 〈δxa12〉, . . . , 〈δxa48〉, the forecasts remain useful for 26% (12 h) longer. However, when the online corrected (debiased) QG model is used to generate the forecasts, they remain useful for 38% (18 h) longer.


Similar results were obtained for the SPEEDY model and are presented for the 500-hPa zonal wind, temperature, and geopotential height in Fig. 5 (top row) for the month of November 1987. To show the vertical and monthly dependence of the correction, the time of crossing of AC = 0.6 is plotted for three vertical levels for the control (second row) and online corrected (debiased) SPEEDY forecasts (third row) as a function of the month. The bottom row presents the relative improvement. For the wind, the debiasing leads to an increase in the length of useful skill of over 60% at 850 hPa (where the errors are largest), about 50% at 500 hPa, and about 10% at 200 hPa, where the errors are smallest. For the temperature, where the skill is less dependent on pressure level, the improvements are between 20% and 40% at all levels. There is not much dependence on the annual cycle, possibly because the verification is global.

Fig. 5.

(top) Average November 1987 AC of biased, postprocessed, and debiased SPEEDY forecasts at 500 hPa. Online bias correction is slightly more effective than postprocessing the biased forecast. (bottom) Relative improvement (Ib/Ia) in crossing time of AC = 0.6 at three different levels (solid = 200, dashed = 500, dash–dot = 850 hPa) vs month. SPEEDY forecasts are typically more useful at upper levels (see middle); improvements are more evident at lower levels and higher latitudes (not shown). For example, biased forecasts of Z at 850 hPa are typically useful for 20 h in April; debiased model forecasts are useful for 36 h.


As in the QG model, a bias correction made during the model integration is more effective than a bias correction performed a posteriori, although they both result in significant improvements. This is important because it indicates that the model deficiencies do not simply add errors; external errors are amplified by internal error growth. Further iteration of the procedure does not improve model forecasts. That is, finding the mean 6-h forecast error in the debiased model M+ (10) and correcting the forcing again does not extend the usefulness of forecasts.

The positive impact of the interactive correction is also indicated by an essentially negligible mean error in the debiased QG model (not shown). The correction of SPEEDY by 〈δxa6〉 removes the large polar errors from the mean error fields, but some of the subpolar features remain with smaller amplitudes (cf. Fig. 6 with Fig. 3). This suggests that a nonlinear correction to the SPEEDY model forcing may be more effective.

Fig. 6.

Mean 6-h analysis correction 〈δxa6〉 in debiased SPEEDY model forecasts of (top left) u (m s−1), (top right) T (K), and (bottom) Q (g kg−1) during January from 1982 to 1986. The debiased SPEEDY model exhibits significantly less bias in 6-h forecasts of the dependent sample, especially in polar regions (cf. with Fig. 3).


b. Error growth

Dalcher and Kalnay (1987) and Reynolds et al. (1994) parameterized the growth rates of internal and external error with an extension of the logistic equation, namely,

 
dα/dt = (βα + δ)(1 − α),   (12)

where α is the variance of the error anomalies (normalized by its saturation value), β is the growth rate of error anomalies due to instabilities (internal), and δ is the growth rate due to model deficiencies (external). These error growth rate parameters may be estimated from the AC for the control and debiased model forecasts. The 500-hPa November 1987 estimates of these growth rates (Table 1) demonstrate a significant reduction in the external error growth rate resulting from online state-independent error correction. The only exception is the moisture, suggesting that the correction of the moisture bias conflicts with the parameterization tuned to the control model bias. As could be expected, bias correction changes the internal error growth rate much less than the external rate.
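One way to estimate β and δ is sketched below, assuming alpha is a time series of error variance normalized by its saturation value (values at or near saturation should be excluded to avoid division by small numbers). This least-squares formulation is an illustration, not necessarily the fitting procedure used in the studies cited above:

```python
import numpy as np

def fit_error_growth(alpha, dt):
    """Least-squares fit of (12): d(alpha)/dt = (beta*alpha + delta)(1 - alpha)."""
    dadt = np.gradient(alpha, dt)           # finite-difference tendency
    y = dadt / (1.0 - alpha)                # y = beta*alpha + delta
    A = np.vstack([alpha, np.ones_like(alpha)]).T
    beta, delta = np.linalg.lstsq(A, y, rcond=None)[0]
    return beta, delta
```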

Table 1.

Error growth rate parameters β and δ for the logistic error growth model (12), estimated from the time-average 500-hPa November 1987 AC for control and debiased model forecasts. State-independent online correction significantly reduces the component of the error growth resulting from model deficiencies.


c. Diurnal bias correction

In addition to the time-averaged analysis corrections, the leading EOFs of the anomalous analysis corrections are computed to identify the time-varying component. The spatial covariance of these corrections over the dependent sample (recomputed using the debiased model M+) is given by 𝗖δxa6δxa6 ≡ 〈δxa6′δxa6′T〉. The two leading eigenvectors of 𝗖δxa6δxa6 identify patterns of diurnal variability that are poorly represented by the model (see Fig. 7, top row). Since the SPEEDY solar forcing is constant over each 24-h period, it fails to resolve diurnal changes in forcing due to the rotation of the earth. Consequently, SPEEDY underestimates (overestimates) the near-surface daytime (nighttime) temperatures. This trend is most evident over land in the Tropics and summer hemisphere.
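The leading EOFs can be obtained from an SVD of the anomaly data matrix rather than by forming the Ngp × Ngp covariance explicitly. A sketch, with corr_anom the hypothetical (T, Ngp) array of anomalous corrections for the month:

```python
import numpy as np

# Right singular vectors of the data matrix are the eigenvectors of
# the spatial covariance <dxa6' dxa6'^T> (up to scaling).
U, s, Vt = np.linalg.svd(corr_anom, full_matrices=False)
eofs = Vt[:2]                 # two leading EOFs: the diurnal patterns
pcs = corr_anom @ eofs.T      # (T, 2) projections, cf. Fig. 7 (middle)
```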

Fig. 7.

(top row) The temperature component of the (left) first and (right) second eigenvectors of 𝗖δxa6δxa6 at the lowest level of the model (sigma level 0.95). (middle row) The temperature component of δxa6′ projected onto EOFs 1 and 2 for January 1983. (bottom row) When the projections are regenerated with diurnally corrected January 1987 forecasts, a reduction of the amplitude in EOFs 1 and 2 is seen.


The time-dependent amplitude of the leading modes can be estimated by projecting the leading eigenvectors of 𝗖δxa6δxa6 onto δxa6′(t) over the dependent sample. As expected from the wavenumber-1 structure of the EOFs, the signals are out of phase by 6 h (see Fig. 7, middle row). An estimate of the time dependence of the diurnal component of the error is generated by averaging the projection over the daily cycle for the years 1982–86. A diurnal correction of the seasonally debiased model M+ is then computed online by linearly interpolating EOFs 1 and 2 as a function of the time of day. The diurnally corrected model is denoted M++. Correction of the debiased SPEEDY forcing to include this diurnal component reduced the 6-h temperature forecast errors for the independent sample (1987), most notably over land (see Fig. 7, bottom row). Although more sophisticated GCMs include diurnal forcings, it is still common for their forecast errors to have a significant time-dependent signal (Marshall et al. 2003). This signal can be estimated and corrected as has been done here.

5. State-dependent correction

a. Leith’s empirical correction operator

The time series of anomalous analysis corrections provides a residual estimate of the linear state-dependent model error. Leith (1978) suggested that these corrections could be used to form a state-dependent correction. Leith sought an improved model of the form

 
dx/dt = M++[x(t)] + 𝗟x′(t),   (13)

where 𝗟x′ is the state-dependent error correction. The tendency error of the improved model is given by

 
g(t) = ẋt − M++(xt) − 𝗟xt′,   (14)

where ẋt is the instantaneous time derivative of the reanalysis state. The mean square tendency error of the improved model is given by 〈gTg〉. Minimizing this tendency error with respect to 𝗟, Leith’s state-dependent correction operator is given by

 
𝗟 = 〈[ẋt − M++(xt)]xt′T〉〈xt′xt′T〉−1,   (15)

where ẋt is approximated with finite differences by

 
ẋt(t) ≈ [xt(t + 1) − xt(t)]/Δt,   (16)

and Δt = 6 h for the reanalysis. Note that the term ẋt − M++(xt) can then be estimated at time t using only the analysis corrections, namely,

 
ẋt(t) − M++[xt(t)] ≈ δxa6(t + 1)/Δt.   (17)

This method of approximating ẋt − M++(xt) is attractive because the analysis corrections of an operational model are typically generated during preimplementation testing. As a result, the operator 𝗟 may be estimated with no additional model integrations.

To estimate 𝗟, we first recompute the time series of residuals δxa6′(t) using the online debiased and diurnally corrected model M++. The cross covariance (Bretherton et al. 1992) of the analysis corrections with their corresponding reanalysis states is given by 𝗖δxa6xt ≡ 〈δxa6′xt′T〉, the lagged cross covariance is given by 𝗖δxa6xt(−1) ≡ 〈δxa6′(t)xt′T(t − 1)〉, and the reanalysis state covariance is given by 𝗖xtxt ≡ 〈xt′xt′T〉. The covariances can be computed offline separately on time series pairs δxa6′ and xt′ corresponding to each month so that each month has its own covariance matrices. In computing the covariance matrices, we found that weighting each grid point by the cosine of latitude made little difference, a result consistent with Wallace et al. (1992).

The finite-difference approximation of ẋt − M++(xt) given by (17) results in an estimate of 𝗟 in terms of the covariance matrices 𝗖δxa6xt(−1) and 𝗖xtxt. The empirical correction operator is given by

 
𝗟 = (1/Δt)𝗖δxa6xt(−1)𝗖xtxt−1.   (18)

Note that w = 𝗖xtxt−1x′ is the anomalous state normalized by its empirically derived covariance; 𝗟x′ = (1/Δt)𝗖δxa6xt(−1)w; and 𝗖δxa6xt(−1)w is the best estimate of the anomalous analysis correction corresponding to the anomalous model state x′ over the dependent sample. Assuming that sampling errors are small and that the external forecast error evolves linearly with respect to lead time, this correction should improve the forecast model M++. Small internal forecast errors grow exponentially with lead time, but those forced by model error tend to grow linearly (e.g., Dalcher and Kalnay 1987; Reynolds et al. 1994). Therefore, the Leith operator should provide a useful estimate of the state-dependent model error.
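A sketch of the dense Leith operator (18) estimated from a training sample; in practice 𝗖xtxt is poorly conditioned for realistic Ngp and sample sizes, which is precisely the sampling problem addressed below:

```python
import numpy as np

def leith_operator(corr_anom, state_anom, dt):
    """Dense empirical correction operator, cf. (18).
    corr_anom  : (T, N) anomalous analysis corrections dxa6'(t)
    state_anom : (T, N) anomalous reanalysis states xt'(t)
    """
    T = corr_anom.shape[0]
    C_lag = corr_anom[1:].T @ state_anom[:-1] / (T - 1)  # <dxa6'(t) xt'^T(t-1)>
    C_xx  = state_anom.T @ state_anom / T                # <xt' xt'^T>
    return C_lag @ np.linalg.inv(C_xx) / dt
```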

Using a model with very few degrees of freedom and errors designed to be strongly state dependent, DelSole and Hou (1999) found that the Leith operator was successful in correcting state-dependent errors relative to a nature run. However, direct computation of 𝗟x′ requires O(Ngp³) floating point operations (flops) every time step. For the global QG model, Ngp = O(10⁴); for the SPEEDY model, Ngp = O(10⁵); and for operational models, Ngp = O(10⁷). It is clear that this operation would be prohibitive. Approaches to reduce the dimensionality of the Leith correction are now described.

b. Covariance localization

Covariance matrices 𝗖δxa6xt(−1) and 𝗖xtxt may be computed offline using the dependent sample. To make the computation more feasible, correlations between different anomalous dynamical variables at the same level are ignored, for example, u and T at sigma level 0.510 in SPEEDY. Correlations between identical anomalous dynamical variables at different levels, for example, q at 800 and 500 hPa in QG, are ignored as well. Miyoshi (2005) found these correlations to be significantly smaller than those between identical variables at the same level in the SPEEDY model. The assumption of univariate and unilevel covariances could be removed in an operational implementation by combining geostrophically balanced variables into a single variable before computing covariances, as is usually done in variational data assimilation (Parrish and Derber 1992). To further simplify evaluation of the procedure, we consider only covariance at identical levels for the variables u, υ, and T; covariances in Q and log(ps) are ignored. In doing so, a block diagonal structure is introduced to 𝗖δxa6xt(−1) and 𝗖xtxt, with each block corresponding to the covariance of a single variable at a single level.

A localization constraint is also imposed on the covariance matrices by setting to zero all covariance elements corresponding to grid points farther than 3000 km away from each other. In an infinite dependent sample, these covariance elements would be approximately zero. This constraint imposes a sparse, banded structure on each block in 𝗖δxa6xt(−1) and 𝗖xtxt. Together, the two constraints significantly reduce the flops required to compute 𝗟x′. Another advantage of the reduced operator is that it is less sensitive to sampling errors related to the length of the reanalysis time series. Figure 8 illustrates the variance explained by the first few SVD modes of the dense and sparse correction operators corresponding to the January zonal wind at sigma level 0.2. The localization constraint is imposed on the covariance block corresponding to u at sigma level 0.2 in January for both 𝗖δxa6xt(−1) and 𝗖xtxt before SVD of 𝗟 = (1/Δt)𝗖δxa6xt(−1)𝗖xtxt−1. The explained variance is given by

 
EV(k) = (σ1² + · · · + σk²)/(σ1² + · · · + σNu²),   (19)

where σi is the ith singular value and the univariate covariance block is Nu × Nu. It is useful in determining how many modes may be truncated in approximating the correction operator 𝗟. To explain 90% of the variance, more than 400 modes of the dense correction operator are required whereas only 40 are required of the sparse operator. Covariance localization has the effect of concentrating the physically important correlations into the leading modes.
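A sketch of the localization and the truncation diagnostic (19); dist_km and L_block are hypothetical precomputed quantities (great-circle gridpoint distances and one univariate block of 𝗟):

```python
import numpy as np

def localize(C, dist_km, cutoff_km=3000.0):
    """Zero covariances between grid points farther apart than the cutoff."""
    return np.where(dist_km <= cutoff_km, C, 0.0)

# Explained variance of a correction operator block, cf. (19)
s = np.linalg.svd(L_block, compute_uv=False)
explained = np.cumsum(s**2) / np.sum(s**2)
n_modes_90 = int(np.searchsorted(explained, 0.90)) + 1  # modes for 90%
```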

Fig. 8.

Explained variance as a function of the number of SVD modes of the dense and sparse Leith correction operators. SVD is performed on the univariate covariance block corresponding to zonal wind at sigma level 0.2. The sparse constraints imposed on the empirical correction operator concentrate more of the variance into the dominant modes of the spectrum.


To test Leith’s empirical correction procedure, several 5-day forecasts similar to those described earlier are performed. The initial conditions are taken from a sample independent of that which was used to estimate the correction operator 𝗟. The first forecast is made with the online state-independent corrected model M++. A second forecast is made using the state-dependent error-corrected model (13). Forecasts corrected online by the dense (univariate covariance) operator 𝗟 performed approximately 10% worse (and took approximately 100 times longer to generate) than those corrected by the sparse operator, indicating the problems of sampling without localization. Even when using the sparse operator, the generation of forecasts corrected online still took a prohibitively long time, and only improved forecasts by 1 h. This indicates that despite attempts to reduce the dimensionality of the correction operator, the sparse correction still requires too many flops to be useful with an operational model. A further reduction of the degrees of freedom is described below, using only the relevant structure of the correction operator.

c. Low-dimensional approximation

An alternative formulation of Leith’s correction operator is introduced here, based on the correlation of the leading SVD modes. The dependent samples of anomalous analysis corrections and model forecasts are normalized at each grid point by their standard deviations so that they have unit variance; they are denoted δx̃a6′ and x̃f6′. They are then used to compute the cross covariance, given by 𝗖δx̃a6x̃f6 ≡ 〈δx̃a6′x̃f6′T〉; the normalization is required to make 𝗖δx̃a6x̃f6 a correlation matrix. The matrix is then restricted to the same univariate covariance localization as previously described. The cross covariance is then decomposed to identify pairs of spatial patterns that explain as much as possible of the mean-squared temporal covariance between the fields δx̃a6′ and x̃f6′. The SVD is given by

 
𝗖δx̃a6x̃f6 = 𝗨𝚺𝗩T,   (20)

where the columns of the orthonormal matrices 𝗨 and 𝗩 are the left and right singular vectors uk and vk. Here 𝚺 is a diagonal matrix containing the singular values σk, whose magnitude decreases with increasing k. The leading patterns u1 and v1, associated with the largest singular value σ1, are the dominant coupled signals in the time series δx̃a6′ and x̃f6′, respectively (Bretherton et al. 1992). Patterns uk and vk represent the kth most significant coupled signals. Expansion coefficients or principal components (PCs) ak(t), bk(t) are obtained by projecting the coupled signals uk, vk onto δx̃a6′ and x̃f6′ as follows:

 
ak(t) = ukTδx̃a6′(t),  bk(t) = vkTx̃f6′(t).   (21)

PCs describe the magnitude and time dependence of the projection of the coupled signals onto the reference time series.
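A sketch of the training computation, assuming dx_n and xf_n are hypothetical (T, Ngp) unit-variance correction and forecast anomaly samples for one univariate block:

```python
import numpy as np

T = dx_n.shape[0]
C = dx_n.T @ xf_n / T                             # cross covariance, cf. (20)
U, s, Vt = np.linalg.svd(C, full_matrices=False)  # U * Sigma * V^T
K = 10                                            # retained modes
a = dx_n @ U[:, :K]                               # left PCs a_k(t), cf. (21)
b = xf_n @ Vt[:K].T                               # right PCs b_k(t), cf. (21)
```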

The heterogeneous correlation maps indicate how well the dependent sample of normalized anomalous analysis corrections can be predicted from the principal components bk (derived from the normalized forecast state anomalies x̃f6′). Each map is computed by

 
ρk = 〈δx̃a6′bk〉/√〈bk²〉.   (22)

This map is the vector of correlation coefficients between the gridpoint values of the normalized anomalous analysis corrections δx̃a6′ and the kth expansion coefficient of x̃f6′, namely bk. The SPEEDY heterogeneous correlation maps (Fig. 9) corresponding to the three leading coupled SVD modes between the normalized anomalous analysis corrections and model states illustrate a significant relationship between the structure of the 6-h forecast error and the model state, at least for the dependent sample. Locally, the time correlation reaches values of 60%–80%, but the global average is still small.
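Continuing the sketch above, the map (22) is a per-gridpoint correlation with the kth right PC; since dx_n already has unit variance, only the PC standard deviation is needed:

```python
# (Ngp, K) heterogeneous correlation maps, cf. (22)
rho = (dx_n.T @ b) / (T * b.std(axis=0))
```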

Fig. 9.

The SVD of 𝗖δx̃a6x̃f6 identifies coupled signals between the analysis corrections (shades) and model states (contours) in winds u and υ, and temperature T at sigma level (left) 0.95 and (right) 0.2 for January 1982–86. The dominant three signals in the model state time series x̃f6′, namely, v1, v2, v3, are plotted in contours. The corresponding signals in the analysis corrections δx̃a6′, namely, u1, u2, and u3, are used to generate the heterogeneous correlation maps ρ1, ρ2, and ρ3 [see (22)] plotted in shades. The three signals are superimposed for graphic simplicity, since they do not overlap. Large local correlations are indicative of persistent patterns whose magnitude and/or physical location are consistently misrepresented by SPEEDY. For example, coupled signal 1 in υ at sigma level 0.2 indicates that patterns of the shape v1 should be farther east. Coupled signal 3 for the same variable suggests strengthening anomalies of the shape v3. Coupled signal 2 in u at sigma level 0.95 suggests weakening anomalies of the shape v2.


d. Low-dimensional correction

The most significant computational expense required by Leith’s empirical correction involves solving the Ngp-dimensional linear system 𝗖xtxtw(T) = x′(T) for w at each time T during a forecast integration. Taking advantage of the cross covariance SVD and assuming that 𝗖xfxf and 𝗖xtxt are approximately equal, a reduction in computation for this operation may be achieved by expressing w = 𝗖xfxf−1x̃′ as a linear combination of the orthonormal right singular vectors vk. The assumption is reasonable since we are attempting to estimate the tendency error at time T, not T + 6 h. The empirical correction operator is given by

 
𝗟x′ = (1/Δt)𝗨𝚺𝗩Tw = (1/Δt) Σk σkuk(vkTw) (k = 1, . . . , K),   (23)

where for K < Ngp, only the component of w in the K-dimensional space spanned by the right singular vectors vk can contribute to this empirical correction. This dependence can be exploited as follows.

Assume the model state at time T during a forecast integration is given by x(T). The normalized state anomaly is given by x̃′(T) = x(T) − 〈xt〉, normalized at each state vector element s by the standard deviation of xf6′ over the dependent sample. The component of x̃′(T) explained by the signal vk may then be estimated by computing the new expansion coefficient (PC) bk(T) = vkT · x̃′(T). The right PC covariance over the dependent sample is given by 𝗖bb = 〈bbT〉, calculated using bk from (21). Because of the orthogonality of the right singular vectors vk, assuming an infinite sample, PCs bk and bj are uncorrelated for k ≠ j. As a result, we restrict the finite sample covariance 𝗖bb to be diagonal. The linear system

 
𝗖bbγ(T) = b(T)   (24)

may then be solved for γ at time T. The cost of solving (24) is O(K), where K is the number of SVD modes retained, as opposed to the O(Ngp²) linear system required by Leith’s full-dimensional empirical correction. The solution of (24) gives an approximation of w(T), namely,

 
𝗖xfxfw(T) = x̃′(T),   (25)
 
w̃(T) = Σk γk(T)vk (k = 1, . . . , K),   (26)
 
w̃(T) ≈ w(T),   (27)

where w̃(T) is generated by solving the linear system (24) in the space of the leading K singular vectors, while w(T) requires solving the Ngp-dimensional linear system (25) in the space of the model grid. Writing uck for the error signal uk weighted at state vector element s by the standard deviation of δxa6 over the dependent sample, the kth component of the state-dependent error correction at time T is given by

 
zk(T) = σkγk(T)uck,   (28)

where σk is the coupling strength over the dependent sample, and the weight γk(T) assigned to correction signal uck indicates the sign and magnitude of the correction that may amplify, dampen, or shift the flow anomaly local to the pattern uck. Then the low-dimensionally corrected model is given at time T by

 
dx/dt = M++[x(T)] + (1/Δt) Σk zk(T) (k = 1, . . . , K),   (29)

so that during forecasts, a few (K) dominant model state signals vk can be projected onto the anomalous, normalized model state vector. The resulting sum Σk zk is the best representation of the original analysis correction anomalies δxa6′ in terms of the current forecast state x(T). If the correlation between the normalized state anomaly x̃′(T) and the local pattern vk is small, the new expansion coefficient bk(T) will be negligible, no correction by uck will be made at time T, and therefore no harm will be done to the forecast. This fact is particularly important with respect to predicting behavior that may vary on a time scale longer than the training period, for example, El Niño–Southern Oscillation (ENSO) events (Barnston et al. 1999).
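The entire online step (24)–(29) reduces to a few inner products per univariate block. A sketch, with U, s, V, var_b, and the normalization fields taken from training (all names hypothetical):

```python
import numpy as np

def low_dim_correction(x, clim, sigma_xf, sigma_dx, U, s, V, var_b, dt):
    """State-dependent tendency correction, cf. (24)-(29).
    U, V  : (Ngp, K) leading left/right singular vectors; s : (K,) values
    var_b : (K,) diagonal of C_bb, the right-PC variances from training
    """
    x_tilde = (x - clim) / sigma_xf     # normalized state anomaly
    b_new = V.T @ x_tilde               # new expansion coefficients b_k(T)
    gamma = b_new / var_b               # solve the diagonal system (24)
    z = (U * s) @ gamma * sigma_dx      # sum_k sigma_k gamma_k u_k^c, cf. (28)
    return z / dt                       # tendency term added in (29)
```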

A pair of examples of the correction procedure is shown in Fig. 10. SVD mode k = 2 in T(K) at sigma level 0.95 (top left) suggests that warm anomalies over the western Pacific are typically too warm. Mode k = 3 in u (m s−1) at sigma level 0.2 (top right) suggests that fronts of the shape v3 over the eastern Pacific should be farther northeast. Online low-dimensional state-dependent correction improves the local RMS error by 21% in 6-h forecasts of T (left) and 14% in u (right). Retaining K = 10 modes of the SVD, state-dependent correction by (29) of both the QG and SPEEDY models improved forecasts by a few hours. This indicates that only a small component of the error can be predicted given the model state over the independent sample (1987). The low-dimensional correction outperformed the sparse Leith operator (Table 2), indicating that the SVD truncation reduces spurious correlations unaffected by the covariance localization. Corrections by K = 5 and K = 20 modes of the SVD were slightly less successful. Heterogeneous correlation maps for modes K > 20 did not exceed 60% for the dependent sample. The corrections are more significant in regions where ρ is large and at times in which the state anomaly projects strongly on the leading SVD modes (see examples in Fig. 10). The globally averaged improvement is small since the state-dependent corrections are local in space and time. Nevertheless, given that the computational expense of the low-dimensional correction is orders of magnitude smaller than that of even the sparse correction operator, and the results are better, it seems to be a promising approach to generating state-dependent corrections.

Fig. 10.

(top) Coupled signals uck (shades) and vk (contours) between SPEEDY forecast errors and states. SVD mode k = 2 in T(K) at (left) sigma level 0.95 suggests that warm anomalies over the western Pacific are typically too warm. Mode k = 3 in u (m s−1) at (right) sigma level 0.2 suggests that fronts of the shape v3 over the eastern Pacific should be farther northeast. (middle) Six-hour forecast generated by the online state-independent corrected model M++ (contours) and analysis correction (shades) in (left) T(K) at sigma level 0.95 for 30 Jan 1987 and (right) u (m s−1) at sigma level 0.2 for 18 Jan 1987. (bottom) Online low-dimensional state-dependent correction improves the local RMS error by (left) 21% in T and (right) 14% in u.


Table 2.

Comparison of Leith’s dense correction operator with its corresponding sparse and low-dimensional approximations, including the number of flops needed to generate the state-dependent correction per time step. Numbers are time-averaged improvements in crossing time of AC = 0.6 for daily 5-day 500-hPa geopotential height forecasts made with model (29) during January 1987, measured against the crossing time observed in forecasts made by the online state-independent corrected SPEEDY model M++. Univariate covariances were used to calculate the dense Leith operator so that it may be applied block by block.


The low-dimensional representation of the error is advantageous compared to Leith’s correction operator for several reasons. First, it reduces the sampling errors that have persisted despite covariance localization by identifying the most robust coupled signals between the analysis correction and forecast state anomalies. Second, the added computation is trivial; it requires solving a K-dimensional linear system and computing K inner products for each variable at each level. Finally, the SVD signals identified by the technique can be used by modelers to isolate flow-dependent model deficiencies. In ranking these signals by strength, SVD gives modelers the ability to evaluate the relative importance of various model errors.

Covariance localization (which led to better results when using the Leith operator) is validated by comparing the signals uk and vk obtained from the SVD of the sparse and dense versions of 𝗖δx̃a6x̃f6. The most significant structures in the dominant patterns (e.g., u1, v1) of the sparse covariance matrix are very similar to those obtained from the dense version. However, the dominant patterns in the dense covariance matrix also contain spurious, small-amplitude noisy structures related to the nonphysical, nonzero covariance between distant grid points. Given a long enough reanalysis time series, this structure would disappear. The structures identified in the sparse covariance matrix are thus good approximations of the physically meaningful structures of the dense covariance matrix. Qualitatively similar statistics (e.g., mean, variance, EOFs) were observed when training was limited to just one year, suggesting that an operational implementation of this method should not require several years of training.

Since the SVD is performed independently on univariate blocks of the cross-covariance matrix, the leading patterns obtained for u and for υ are not necessarily in geostrophic balance with each other. Nevertheless, the SPEEDY variables presumably remain in approximate geostrophic balance after state-dependent correction because, for the leading pattern of each variable, there exist patterns of the other variables that are approximately in geostrophic balance with it. In addition, the large spatial extent of the covariance localization produces SVD patterns that are synoptically balanced.
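
For reference, the balance in question is the standard geostrophic relation between the wind components and the geopotential $\Phi$, with $f$ the Coriolis parameter:

$$u_g = -\frac{1}{f}\,\frac{\partial \Phi}{\partial y}, \qquad v_g = \frac{1}{f}\,\frac{\partial \Phi}{\partial x},$$

so correction patterns derived independently for each variable need not satisfy these relations exactly, but can each be paired with patterns of the other variables that approximately do.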

6. Summary and discussion

This paper considers the estimation and correction of state-independent (seasonal and diurnal) and state-dependent model errors in a pair of simple GCMs. The two approaches used to create the time series of analysis corrections and model states needed for training (direct insertion, and nudging toward a reanalysis used as an estimate of the true atmosphere) are simple: they require essentially a single long model integration, and they give similar estimates of the bias. In an operational setting, time series of model states and analysis increments are already available from preimplementation testing.
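
A minimal sketch of the nudging approach follows, assuming a hypothetical one-step integrator `model_step` and a sequence of 6-h reanalysis states; the relaxation timescale `tau` is an illustrative value:

```python
import numpy as np

def nudged_run(x0, reanalysis, model_step, dt=6.0, tau=24.0):
    """Generate training data by Newtonian relaxation toward a reanalysis.

    Returns the nudged model states and the 6-h analysis corrections
    (reanalysis minus forecast) used to train the error model.
    """
    x = np.asarray(x0, dtype=float).copy()
    states, corrections = [], []
    for xa in reanalysis:
        xf = model_step(x)            # free 6-h forecast from current state
        delta = xa - xf               # 6-h analysis correction
        x = xf + (dt / tau) * delta   # relax part of the way to the analysis
        states.append(x.copy())
        corrections.append(delta)
    return np.array(states), np.array(corrections)
```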

Although the procedure is inspired by Leith (1978) and DelSole and Hou (1999), it is tested here using realistic models, with a reanalysis serving as nature under the assumption that it is much closer to the real atmosphere than any model. The online state-independent correction, including the EOFs associated with the diurnal cycle, resulted in a significant improvement in forecast skill (as measured by the AC). Unlike in Saha (1992), this improvement was larger than that obtained by a posteriori correction of the bias, indicating the importance of correcting the error during the integration. The results also differ significantly from those of DelSole and Hou (1999), who obtained a very small improvement from the state-independent correction and a very large improvement from the state-dependent correction using Leith’s formulation. Their results were probably optimistic in that their model errors were, by construction, very strongly state dependent. The results presented here, obtained with global atmospheric models and compared against a reanalysis of the real atmosphere, are probably more realistic with respect to the relative importance of mean and state-dependent corrections. Nevertheless, our results are probably optimistic as well, since the improvement of the debiased model M+ (10) relative to the biased model M (3) is larger for the simple GCMs tested here than could be expected for an operational model. It is not clear how strong the coupled signal between analysis corrections and forecast states would be for more sophisticated models, but operational evidence suggests that state-dependent errors are not negligible.

It was necessary to introduce a horizontal and vertical localization of the components of Leith’s empirical correction operator to reduce sampling problems. Multilevel and multivariate covariances were ignored to make the computation practical. The assumptions underlying the localization require model-dependent empirical verification; implementation on a more realistic model may require that the localization be multivariate in order to have a balanced correction. The Leith–DelSole–Hou method with the original dense covariance matrix makes forecasts worse. With the sparse covariance, however, there is an improvement of about 1 h, still at a large computational cost.
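
A sketch of the block-by-block application implied here, with univariate blocks treated independently, is given below; the ridge term `eps` is an assumed regularization, included because the sampled anomaly covariance is typically ill conditioned:

```python
import numpy as np

def leith_correction(x_anom, C_de, C_ee, eps=1e-3):
    """Leith-type correction L x' = C_de C_ee^{-1} x', per univariate block.

    x_anom : dict mapping variable name -> anomaly vector at one level
    C_de   : dict of cross covariances (corrections vs. anomalies)
    C_ee   : dict of anomaly covariances, both from the training period
    """
    corrections = {}
    for v, xp in x_anom.items():
        n = C_ee[v].shape[0]
        # Regularized solve in place of an explicit inverse.
        w = np.linalg.solve(C_ee[v] + eps * np.eye(n), xp)
        corrections[v] = C_de[v] @ w
    return corrections
```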

A new method of state-dependent error correction was introduced, based on an SVD of the coupled analysis corrections and forecast state anomalies. The cross covariance involved is the same one that appears in Leith’s formulation, but it would be prohibitive to compute for an operational model. The new method, which uses the SVD heterogeneous correlation maps as the basis of a linear regression, doubles the improvement and is many orders of magnitude faster. It can be applied at a rather low cost, both in the training and in the correction phases, and yields significant forecast improvements, at least for the simple but realistic global QG and SPEEDY models. It could be applied with low computational cost and minimal sampling problems to data assimilation and ensemble prediction, applications where accounting for model errors has been found to be important. The method may be particularly useful for forecasting severe weather events, where a posteriori bias correction will typically weaken anomalies. The patterns identified by SVD could also be used to identify sources of model deficiencies and thereby guide future model improvements.
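
The training phase can be sketched in a few lines; the per-mode regression below is one plausible way to obtain the coefficients used at forecast time and is an assumption made for illustration:

```python
import numpy as np

def train_svd_patterns(deltas, anoms, K=10):
    """Coupled SVD patterns and regression coefficients from training data.

    deltas, anoms : N x n training samples (analysis corrections and
                    forecast anomalies) for one variable and level
    """
    C = deltas.T @ anoms / (deltas.shape[0] - 1)   # cross covariance
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    U, V = U[:, :K], Vt[:K].T                      # K leading coupled patterns
    a = deltas @ U                                 # correction amplitudes, N x K
    z = anoms @ V                                  # anomaly amplitudes, N x K
    # Per-mode least squares slope: predict a_k from z_k.
    coeffs = (a * z).sum(axis=0) / (z * z).sum(axis=0)
    return U, V, coeffs
```

These outputs plug directly into the correction sketch shown earlier.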

A disadvantage of empirical correction is that operational model upgrades may require fresh computation of the dominant correction and state anomaly signals. However, analysis increments generated during preimplementation tests of an operational model can be used as a dependent sample to estimate the model error and state anomaly covariance. With such a collection of past data, it may not be necessary to run an operational model in order to generate the necessary sample.

Flow-dependent estimates of model error are of particular interest to the community attempting to develop an efficient ensemble Kalman filter for data assimilation (Bishop et al. 2001; Whitaker and Hamill 2002; Ott et al. 2004; Hunt et al. 2004). Naive data assimilation procedures assume the model error to be constant, and represent its effect by adding random noise to each ensemble member. More sophisticated procedures add a random selection of observed model tendencies to each ensemble member, or artificially increase the background forecast uncertainty through variance inflation. Other state-dependent procedures increase the forecast uncertainty in local contracting directions of state space (Danforth and Yorke 2006). The SVD technique described in this paper can be combined with an ensemble-based data assimilation scheme to provide time- and state-dependent estimates of the model error, for example, in the local ensemble transform Kalman filter (LETKF) being developed by the chaos group at the University of Maryland (Ott et al. 2004; Hunt et al. 2004). The empirical correction method described here involves local computations commensurate with the treatment of covariance localization in the LETKF. In a data assimilation implementation, the SVD method would involve appending a K-dimensional estimate of the model error to each ensemble member.

Acknowledgments

This research contributes to the lead author’s Ph.D. dissertation. The authors thank T. DelSole, J. Ballabrera, R. F. Cahalan, Z. Toth, S. Saha, J. Derber, and the anonymous reviewers for insightful comments and suggestions. The authors also thank F. Molteni and F. Kucharski for kindly providing the QG and SPEEDY models, and E. Kostelich for his essential guidance on their initial implementation. This research was supported by NOAA THORPEX Grant NOAA/NA040AR4310103 and NASA-ESSIC Grant 5-26058.

REFERENCES

Anderson, J., 2001: An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129, 2884–2903.
Barnston, A., M. Glantz, and Y. He, 1999: Predictive skill of statistical and dynamical climate models in SST forecasts during the 1997–98 El Niño episode and the 1998 La Niña onset. Bull. Amer. Meteor. Soc., 80, 217–243.
Bishop, C., B. Etherton, and S. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: The theoretical aspects. Mon. Wea. Rev., 129, 420–436.
Bretherton, C., C. Smith, and J. Wallace, 1992: An intercomparison of methods for finding coupled patterns in climate data. J. Climate, 5, 541–560.
Carter, G., J. Dallavalle, and H. Glahn, 1989: Statistical forecasts based on the National Meteorological Center’s numerical weather prediction system. Wea. Forecasting, 4, 401–412.
Dalcher, A., and E. Kalnay, 1987: Error growth and predictability in operational ECMWF forecasts. Tellus, 39, 474–491.
D’Andrea, F., and R. Vautard, 2000: Reducing systematic errors by empirically correcting model errors. Tellus, 52A, 21–41.
Danforth, C. M., and J. A. Yorke, 2006: Making forecasts for chaotic physical processes. Phys. Rev. Lett., 96, 144102, doi:10.1103/PhysRevLett.96.144102.
DelSole, T., and A. Y. Hou, 1999: Empirical correction of a dynamical model. Part I: Fundamental issues. Mon. Wea. Rev., 127, 2533–2545.
Faller, A., and C. Schemm, 1977: Statistical corrections to numerical prediction equations. II. Mon. Wea. Rev., 105, 37–56.
Ferranti, L., E. Klinker, A. Hollingsworth, and B. Hoskins, 2002: Diagnosis of systematic forecast errors dependent on flow anomalies. Quart. J. Roy. Meteor. Soc., 128, 1623–1640.
Glahn, H., and D. Lowry, 1972: The use of model output statistics in objective weather forecasting. J. Appl. Meteor., 11, 1203–1211.
Golub, G., and C. Van Loan, 1996: Matrix Computations. Johns Hopkins University Press, 694 pp.
Hamill, T., and C. Snyder, 2000: A hybrid ensemble Kalman filter–3D variational analysis scheme. Mon. Wea. Rev., 128, 2905–2919.
Hoke, J. E., and R. A. Anthes, 1976: The initialization of numerical models by a dynamic initialization technique. Mon. Wea. Rev., 104, 1551–1556.
Houtekamer, P., and H. Mitchell, 2001: A sequential ensemble Kalman filter for atmospheric data assimilation. Mon. Wea. Rev., 129, 796–811.
Hunt, B., and Coauthors, 2004: Four-dimensional ensemble Kalman filtering. Tellus, 56, 273–277.
Kaas, E., A. Guldberg, W. May, and M. Deque, 1999: Using tendency errors to tune parameterization of unresolved dynamical scale interactions in atmospheric general circulation models. Tellus, 51, 612–629.
Kalnay, E., 2003: Atmospheric Modelling, Data Assimilation and Predictability. Cambridge University Press, 341 pp.
Klinker, E., and P. Sardeshmukh, 1992: The diagnosis of mechanical dissipation in the atmosphere from large-scale balance requirements. J. Atmos. Sci., 49, 608–627.
Leith, C. E., 1978: Objective methods for weather prediction. Annu. Rev. Fluid Mech., 10, 107–128.
Leith, C. E., 1991: Data assimilation in meteorology and oceanography. Advances in Geophysics, Vol. 33, Academic Press, 141–266.
Marshall, C. H., K. C. Crawford, K. E. Mitchell, and D. J. Stensrud, 2003: The impact of the land surface physics in the operational NCEP Eta model on simulating the diurnal cycle: Evaluation and testing using Oklahoma Mesonet data. Wea. Forecasting, 18, 748–768.
Marshall, J., and F. Molteni, 1993: Toward a dynamical understanding of planetary-scale flow regimes. J. Atmos. Sci., 50, 1792–1818.
Miyoshi, T., 2005: Ensemble Kalman filter experiments with a primitive-equation global model. Ph.D. dissertation, University of Maryland, College Park, 226 pp.
Molteni, F., 2003: Atmospheric simulations using a GCM with simplified physical parametrizations. I: Model climatology and variability in multi-decadal experiments. Climate Dyn., 20, 175–191.
Ott, E., and Coauthors, 2004: A local ensemble Kalman filter for atmospheric data assimilation. Tellus, 56, 415–428.
Parrish, D., and J. Derber, 1992: The National Meteorological Center’s spectral statistical-interpolation analysis system. Mon. Wea. Rev., 120, 1747–1763.
Renwick, J., and J. Wallace, 1995: Predictable anomaly patterns and the forecast skill of Northern Hemisphere wintertime 500-mb height fields. Mon. Wea. Rev., 123, 2114–2131.
Reynolds, C., P. J. Webster, and E. Kalnay, 1994: Random error growth in NMC’s global forecasts. Mon. Wea. Rev., 122, 1281–1305.
Saha, S., 1992: Response of the NMC MRF model to systematic error correction within integration. Mon. Wea. Rev., 120, 345–360.
Schemm, C., D. Unger, and A. Faller, 1981: Statistical corrections to numerical predictions III. Mon. Wea. Rev., 109, 96–109.
Schemm, J-K., and A. Faller, 1986: Statistical corrections to numerical predictions. Part IV. Mon. Wea. Rev., 114, 2402–2417.
Wallace, J., C. Smith, and C. Bretherton, 1992: Singular value decomposition of wintertime sea surface temperature and 500-mb height anomalies. J. Climate, 5, 561–576.
Whitaker, J., and T. Hamill, 2002: Ensemble data assimilation without perturbed observations. Mon. Wea. Rev., 130, 1913–1924.

Footnotes

Corresponding author address: Christopher M. Danforth, Department of Mathematics, University of Maryland, College Park, College Park, MD 20742. Email: danforth@math.umd.edu