The purpose of the present study is to explore the feasibility of estimating and correcting systematic model errors using a simple and efficient procedure, inspired by papers by Leith as well as DelSole and Hou, that could be applied operationally, and to compare the impact of correcting the model integration with statistical corrections performed a posteriori. An elementary data assimilation scheme (Newtonian relaxation) is used to compare two simple but realistic global models, one quasigeostrophic and one based on the primitive equations, to the NCEP reanalysis (approximating the real atmosphere). The 6-h analysis corrections are separated into the model bias (obtained by time averaging the errors over several years), the periodic (seasonal and diurnal) component of the errors, and the nonperiodic errors. An estimate of the systematic component of the nonperiodic errors linearly dependent on the anomalous state is generated.
Forecasts corrected during model integration with a seasonally dependent estimate of the bias remain useful longer than forecasts corrected a posteriori. The diurnal correction (based on the leading EOFs of the analysis corrections) is also successful. State-dependent corrections using the full-dimensional Leith scheme and several years of training actually make the forecasts worse due to sampling errors in the estimation of the covariance. A sparse approximation of the Leith covariance is derived using univariate and spatially localized covariances. The sparse Leith covariance results in small regional improvements, but is still computationally prohibitive. Finally, singular value decomposition is used to obtain the coupled components of the correction and forecast anomalies during the training period. The corresponding heterogeneous correlation maps are used to estimate and correct by regression the state-dependent errors during the model integration. Although the global impact of this computationally efficient method is small, it succeeds in reducing state-dependent model systematic errors in regions where they are large. The method requires only a time series of analysis corrections to estimate the error covariance and uses negligible additional computation during a forecast. As a result, it should be suitable for operational use at relatively small computational expense.
Numerical weather forecasting errors grow with time as a result of two contributing factors. First, atmospheric instabilities amplify uncertainties in the initial conditions, causing indistinguishable states of the atmosphere to diverge rapidly on small scales. This phenomenon is known as internal error growth. Second, model deficiencies introduce errors during the model integration leading to external error growth. These deficiencies include inaccurate forcings and parameterizations used to represent the effect of subgrid-scale physical processes as well as approximations in numerical differentiation and integration, and result in large-scale systematic forecast errors. Current efforts to tackle internal error growth focus on improving the estimate of the state of the atmosphere through assimilation of observations and ensemble forecasting (Anderson 2001; Whitaker and Hamill 2002; Ott et al. 2004; Hunt et al. 2004). Ideally, model deficiencies should be addressed by generating more accurate approximations of the forcing, improving the physical parameterizations, or by increasing the grid density to resolve smaller-scale processes. However, unresolved phenomena and model errors will be present no matter how accurate the parameterizations are, no matter how fine the grid resolution becomes. As a result, it is important to develop empirical algorithms to correct forecasts to account for model errors. Empirical methods that consider the model a “black box” are particularly valuable because they are independent of the model. As the methods of data assimilation and generation of initial perturbations become more sophisticated and reduce the internal error, the impact of model deficiencies and their dependence on the “flow of the day” become relatively more important (Hamill and Snyder 2000; Houtekamer and Mitchell 2001; Kalnay 2003).
Estimates of the systematic model error may be derived empirically using the statistics of the short-term forecast errors, measured relative to a reference time series. For example, the mean short-term forecast error provides a sample estimate of the stationary component of the model error bias. The output of operational numerical weather prediction models is typically postprocessed to account for any such known biases in the forecast field by model output statistics (MOS; Glahn and Lowry 1972; Carter et al. 1989). However, offline bias correction has no dynamic effect on the forecast; internal and external errors are permitted to interact nonlinearly throughout the integration as they grow and eventually saturate. A more robust approach to error correction should be to estimate the short-term forecast errors as a function of the model state. A corresponding state-dependent correction would then be made every time step of the model integration to retard growth in the component of the error generated by the model deficiencies. Several studies have produced promising results by empirical correction in simulations using simple global circulation models (GCMs) with artificial model errors.
Leith (1978) proposed a statistical method to account for model bias and systematic errors linearly dependent on the flow anomalies. Leith derived a state-dependent empirical correction to a simple dynamical model by minimizing the tendency errors relative to a reference time series. Leith’s correction operator attempts to predict the error in the model tendency as a function of the model state. While Leith’s empirically estimated state-dependent correction term is only optimal for a linear model, it is shown to reduce the nonlinear model’s bias. However, the technique is subject to sampling errors and requires many orders of magnitude more computation time during the forecast than the biased model integration alone. The method is discussed in detail in section 6.
Faller and Schemm (1977) used a similar technique on coarse- and fine-grid versions of a modified Burgers equation model. Statistical correction of the coarse-grid model by multiple regression to parameterize the effects of subgrid-scale processes improved forecast skill. However, the model equations were found to be insensitive to small perturbations of the initial conditions. They concluded that the coarse-grid errors were due entirely to truncation and that the procedure was sensitive to sampling errors. Schemm et al. (1981) introduced two procedures for statistical correction of numerical predictions when verification data are only available at discrete times. Time interpolation was found to introduce errors into the regression equations, rendering the procedure useless. Applying corrections only when verification data were available, they were successful in correcting artificial model errors, but the procedure failed on the National Meteorological Center (NMC) barotropic-mesh model. Later, Schemm and Faller (1986) dramatically reduced the small-scale 12-h errors of the NMC model. Errors at the larger scales grew due to randomization of the residual errors by the regression equations.
Klinker and Sardeshmukh (1992) used January 1987 6-h model integrations to estimate the state-independent tendency error in operational European Centre for Medium-Range Weather Forecasts (ECMWF) forecasts. By switching off each individual parameterization, they isolated the contribution to the error of each term. They found that the model’s gravity wave parameterization dominated the 1-day forecast error. Saha (1992) used a simple Newtonian relaxation or nudging of a low-resolution version of the NMC operational forecast model to estimate systematic errors. Verifying against the high-resolution model, Saha was able to reduce systematic errors in independent forecasts by adding artificial sources and sinks to correct errors in heat, momentum, and mass. Nudging and a posteriori correction were seen to give equivalent forecast improvements.
By nudging of several low-resolution GCMs toward a high-resolution model, Kaas et al. (1999) estimated empirical orthogonal functions (EOFs) for horizontal diffusion. They found that the kinetic energy dissipation due to unresolved scales varied strongly with model resolution. The EOF corrections were most effective in reducing the climatological errors of the model whose resolution was closest to that of the high-resolution model. D’Andrea and Vautard (2000) estimated the time-derivative errors of the three-level global quasigeostrophic (QG) model of Marshall and Molteni (1993) by finding the model forcing that minimized the 6-h forecast errors relative to a reference time series. They derived a flow-dependent empirical parameterization from the mean tendency error corresponding to the closest analogues in the reference time series. The subsequent corrected forecasts exhibited improved climate statistics in the Euro–Atlantic region, but not in others.
DelSole and Hou (1999) perturbed the parameters of a two-layer QG model on an 8 × 10 grid (Ngp = 160 degrees of freedom) to generate a “nature” run and then modified it to create a “model” containing a primarily state-dependent error. They found that a state-independent error correction did not improve the forecast skill. By adding a state-dependent empirical correction to the model, inspired by the procedure proposed by Leith, they were able to extend forecast skill up to the limits imposed by observation error. However, Leith’s technique requires the solution of an Ngp-dimensional linear system. As a result, before the procedure can be considered for operational use, a low-dimensional representation of Leith’s empirical correction operator is required.
Renwick and Wallace (1995) used several low-dimensional techniques described by Bretherton et al. (1992) to identify predictable anomaly patterns in 14 winters of Northern Hemisphere 500-mb height fields. The most predictable anomaly pattern in ECMWF operational model forecasts was found to be similar to the leading EOF of the analyzed 500-mb height anomaly field. Applying canonical correlation analysis to the dependent sample (first seven winters), they found the amplitude of the leading pattern to be well predicted and showed the forecast skill to increase with the amplitude of the leading pattern. The forecast skill of the independent sample (second seven winters) was not well related to the patterns derived from the dependent sample. A posteriori statistical correction of independent sample forecasts slightly decreased RMS errors, but damped forecast amplitude considerably. They concluded that continuing model improvements should provide better results than statistical correction and skill prediction in an operational setting.
Ferranti et al. (2002) used singular value decomposition (SVD; Golub and Van Loan 1996) analysis to identify the relationship between fluctuations in the North Atlantic Oscillation and ECMWF operational forecast errors in 500-hPa height for seven winters in the 1990s. They found that the anomalous westerly (easterly) flow over the eastern North Atlantic (western Europe) was weakened by a consistent underestimation of the magnitude of pressure anomalies over Iceland. Large (small) error amplitudes were seen to be located in regions of the maximum westerly (easterly) wind anomaly; the trend was reversed on the flanks of the jet. The flow-dependent component of the errors accounted for 10% of the total error variance.
The purpose of the present study is to explore the feasibility of estimating and correcting systematic model errors using a simple and efficient procedure that could be applied operationally. The monthly, diurnal, and state-dependent components of the short-term forecast errors are estimated for two simple but realistic GCMs using the National Centers for Environmental Prediction (NCEP) reanalysis as truth. Section 2 describes the two GCMs used for the numerical experiments. Section 3 describes the simple method of data assimilation used to generate a time series of model forecasts and the technique used to estimate the corresponding systematic errors. Section 4 illustrates the substantial forecast improvement resulting from state-independent correction of monthly model forcing when verifying against independent data. Section 5 describes attempts to generate full-dimensional and low-order empirical estimates of model error as a function of the model state, using Leith’s method and a new computationally inexpensive approach based on SVD. The paper concludes with a discussion of implications for operational use and future directions of research.
2. Global circulation models
a. The quasigeostrophic model
The first model used in this study was developed by Marshall and Molteni (1993); it has been used for many climate studies (e.g., D’Andrea and Vautard 2000). The model is based on spherical harmonics, with triangular truncation at wavenumber 21. The QG model has three vertical levels (800, 500, and 200 hPa) and integrates the quasigeostrophic potential vorticity equation with dissipation and forcing:

∂q/∂t = −J(ψ, q) − D(ψ) + S, (1)

where ψ is the streamfunction and q is the potential vorticity (q ≈ ∇²ψ); J represents the Jacobian operator of ψ and q. The linear dissipation D is dependent on ψ and orography, and includes a relaxation coupling the three vertical levels. The forcing term S is time independent but varies spatially, representing the average effects of diabatic heating and advection by the divergent flow. This forcing is determined by requiring that the time-averaged values of the other terms in (1) are zero. In other words, the forcing is defined so the vorticity tendency is zero for the climatology (given by the mean NCEP reanalysis streamfunction during January and February from 1980 to 1990; the model simulates a perpetual winter). If the climatological streamfunction and vorticity are denoted as ψ̂ and q̂, the time average of (1) can be written

0 = −⟨J(ψ, q)⟩ − D(ψ̂) + S. (2)
b. The SPEEDY model
The primitive equation model used in this study [known as SPEEDY (simplified parameterizations, primitive equation dynamics); Molteni 2003] has triangular truncation T30 at seven sigma levels (0.950, 0.835, 0.685, 0.510, 0.340, 0.200, and 0.080). The basic prognostic variables are vorticity (ζ), divergence (∇), absolute temperature (T), specific humidity (Q), and the logarithm of surface pressure [log(ps)]. These variables are postprocessed into zonal and meridional wind (u, υ), geopotential height (Z), T, Q, and log(ps) at pressure levels (925, 850, 700, 500, 300, 200, and 100 hPa). The model dissipation and time-dependent forcing are determined by climatological fields of sea surface temperature (SST), surface temperature, and moisture in the top soil layer (about 10 cm), snow depth, bare-surface albedo, and fractions of sea ice, land–sea, and land surface covered by vegetation. The model contains parameterizations of large-scale condensation, convection, clouds, shortwave and longwave radiation, surface fluxes, and vertical diffusion (Molteni 2003). No diurnal variation exists in the model forcing; forcing fields are updated daily.
Despite the approximations made in deriving each model, they produce realistic simulations of extratropical variability, especially in the Northern Hemisphere (Marshall and Molteni 1993; Molteni 2003). The SPEEDY model also provides a more realistic simulation of the Tropics, as well as the seasonal cycle. Since the model forcings (including SST) are determined by the climatology, one cannot expect realistic simulations of interannual variability. More advanced GCMs include not only observed SST but also changes in greenhouse gases and aerosols, as well as more advanced physical parameterizations. Despite the absence of variable forcing, if run for a long period of time (decades), both models reproduce a realistic climatology. While they were designed for climate simulations, each model produces forecasts that remain useful globally for about 2 days.
3. Estimation of model errors
A pair of simple schemes was used to estimate model errors. The schemes are advantageous in that they provide estimates of model errors at the analysis time, when they are still small and growing linearly, and because they can be carried out at the cost of essentially one model integration. The first procedure is inspired by Leith (1978), who integrated “true” initial conditions for 6 h to measure the difference between the forecast and the verifying analysis. A schematic illustrating the procedure, hereafter referred to as direct insertion, is shown in Fig. 1.
Writing x(t) for the GCM state vector at step t and M[x(t)] for the model tendency at step t, the model tendency equation is given by

ẋ = M[x(t)]. (3)

The analysis correction at step t is given by the difference between the truth xt(t) and the model forecast state xfh(t), namely,

δxah(t) = xt(t) − xfh(t), (4)

where h is the forecast lead time, typically h = 6 h.
The second (alternative) procedure for estimating model errors is Newtonian relaxation or nudging (Hoke and Anthes 1976; Leith 1991; Saha 1992), done by adding an additional forcing term to relax the model state toward the reference time series. When reference data are available (every 6 h), the tendency equation during nudging is given by

ẋ = M[x(t)] + [xt(t) − x(t)]/τ. (5)
At intermediate time steps, when data are unavailable, the tendency is given by (3). If the relaxation time scale τ is too large, model errors will grow before the time derivative can respond (Kalnay 2003, p. 141). If τ is chosen too small, the tendency equation will diverge. Figure 2 shows that the sensitivity of the assimilation error to τ for the QG and the SPEEDY models is similar, and that the optimal time scale is τ = 6 h, corresponding to the frequency (h) of the assimilation. This choice for τ generates analysis corrections whose statistical properties (e.g., mean, variance, EOFs) are qualitatively very similar to those obtained through direct insertion. As a result, for the remainder of the paper we will only consider time series generated by direct insertion.
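As a minimal sketch of the nudging update, consider a hypothetical forward-Euler step (the function names and the single explicit Euler step are illustrative assumptions, not details of the QG or SPEEDY integrations):

```python
import numpy as np

def nudged_step(x, model_tendency, x_ref, dt, tau):
    """Advance the state one forward-Euler step, relaxing toward a
    reference state when one is available (Newtonian relaxation).

    x: model state; model_tendency: callable M(x);
    x_ref: reference (reanalysis) state at this step, or None when
    no reference data are available; dt and tau share time units.
    """
    tend = model_tendency(x)
    if x_ref is not None:
        # nudging term: relax toward the reference on time scale tau
        tend = tend + (x_ref - x) / tau
    return x + dt * tend
```

With τ equal to the 6-h assimilation interval, the nudging term pulls the state a fixed fraction of the way toward the reference at each step; at intermediate steps (x_ref = None) the model runs freely, as in (3).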
The reference time series used to estimate model errors is given by the NCEP reanalysis. NCEP reanalysis values of model prognostic variables are available at 6-h intervals; they are interpolated to the model grid and denoted at step t by xt(t). Observations of the reanalysis are taken as truth with no added noise or sparsity; observational noise is the focus of much research in data assimilation (e.g., Ott et al. 2004), but its influence is ignored in this context since the reanalysis is already an approximation of the evolution of the atmosphere. Direct insertion is performed with the QG model by integrating NCEP reanalysis wintertime vorticity for the years between 1980 and 1990. The SPEEDY model is integrated using NCEP reanalysis values of ζ, ∇, T, Q, and log(ps) for the years between 1982 and 1986. A longer time period was used to train the QG model because it has an order of magnitude fewer degrees of freedom than the SPEEDY model.
The time series of analysis corrections is separated by month and denoted {δxa6(t)}, t = 1, . . . , Nref [Nref = 31 × 4 × 5 (days × 6-h intervals × years) for January training of SPEEDY]. The time average of the analysis corrections (bias) is given by

⟨δxa6⟩ = (1/Nref) Σ_{t=1}^{Nref} δxa6(t),

and ⟨xt⟩ = (1/Nref) Σ_{t=1}^{Nref} xt(t) is the 5-yr reanalysis climatology for the month in which steps t = 1, . . . , Nref occur. The method of direct insertion is also used to generate ⟨δxah⟩ for h = 6j (j = 2, 3, . . . , 8), giving 12-h, 18-h, . . . , and 48-h mean bias estimates. These estimates will be used to make an a posteriori bias correction. The reanalysis states, model forecasts, and corresponding analysis corrections are then separated into their anomalous and time-average components, namely,

xt′(t) = xt(t) − ⟨xt⟩, (6)
xfh′(t) = xfh(t) − ⟨xfh⟩, (7)
δxa6′(t) = δxa6(t) − ⟨δxa6⟩. (8)
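The separation into bias and anomalies can be sketched as follows (a hypothetical NumPy fragment; the array layout, one row per 6-h correction, is an assumption for illustration):

```python
import numpy as np

def bias_and_anomalies(corrections):
    """Split a time series of analysis corrections into its time mean
    (the bias) and the anomalies about that mean.

    corrections: array of shape (N_ref, N_gp), one row per 6-h step.
    """
    bias = corrections.mean(axis=0)   # the monthly bias <dxa6>
    anomalies = corrections - bias    # anomalous corrections dxa6'(t)
    return bias, anomalies
```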
Figure 3 illustrates the bias calculated from 5 yr of 6-h SPEEDY forecasts of u, T, and Q for January and July. These state-independent errors are clearly associated with contrasts in land–sea forcing, topographic forcing, and the position of the jet. The zonal wind and temperature exhibit a large polar bias, especially in the winter hemisphere. The 6-h zonal wind errors show an underestimation of the westerly jets of 2–5 m s−1 east of the Himalayan mountain range (January) and east of the Australian Alps (July), especially on the poleward side. The mean temperature error over Greenland is larger during the Northern Hemisphere winter. There is little humidity bias in the polar regions, most likely due to the lack of moisture variability near the poles. The SPEEDY convection parameterization evidently transports too little moisture from lower levels (which are too moist) to upper levels (which are too dry). The following section describes attempts to correct the model forcing to account for this bias.
4. State-independent correction
a. Monthly bias correction
In this section, the impact of correcting for the bias of the model during the model integration is compared with a correction a posteriori, as done, for example, in MOS. In both cases the impact of the corrections on 5-day forecasts is verified using periods independent from the training periods. The initial conditions for QG forecasts are taken from the wintertime NCEP reanalysis data between 1991 and 2000, and for the SPEEDY forecasts are taken from the NCEP reanalysis data for 1987.
The control forecast is started from reanalysis initial conditions and integrated with the original biased forcing M(x). The forecast corrected a posteriori is generated by computing xf6(1) + 〈δxa6〉 at step 1, xf12(2) + 〈δxa12〉 at step 2, . . . , xf48(8) + 〈δxa48〉 at step 8, etc. The corrections in u, υ, T, Q, and log(ps) at all levels are obtained from the training period for each month of the year, and attributed to day 15 of each month. The correction is a daily interpolation of the monthly mean analysis correction; for example, on 1 February, the time-dependent 6-h bias correction is of the form

(14/31)⟨δxa6⟩January + (17/31)⟨δxa6⟩February, (9)

so that the corrections are temporally smooth.
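The mid-month interpolation can be sketched as follows (a hypothetical helper; the weights follow from linear interpolation between the day-15 anchors described above):

```python
import numpy as np

def interpolated_bias(days_since_prev, days_between, bias_prev, bias_next):
    """Linearly interpolate two monthly mean corrections, each attributed
    to day 15 of its month, to a day lying between the two anchors.

    days_since_prev: days elapsed since day 15 of the earlier month;
    days_between: days separating the two day-15 anchor dates.
    """
    w = days_since_prev / days_between
    return (1.0 - w) * bias_prev + w * bias_next
```

For 1 February (17 days after 15 January, with 31 days between anchors) this weights the January and February means by 14/31 and 17/31, respectively.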
An online corrected or debiased model forecast is generated with the same initial condition, but with a corrected model forcing M+. The tendency equation for the debiased model forecast is given by

ẋ = M+(x) ≡ M(x) + ⟨δxa6⟩/(6 h), (10)

where the bias correction is divided by 6 h because it was computed for 6-h forecasts but it is applied every time step. The skill of each forecast is measured by the anomaly correlation (AC), given at time t by

AC(t) = Σs cos(ϕs) xfs′(t) xts′(t) / [Σs cos(ϕs) (xfs′(t))² Σs cos(ϕs) (xts′(t))²]^(1/2), (11)

where ϕs is the latitude of grid point s, the sums run over s = 1, . . . , Ngp, and Ngp is the number of degrees of freedom in the model state vector. The AC is essentially the inner product of the forecast anomaly and the reanalysis anomaly, with each gridpoint contribution weighted by the cosine of its latitude and normalized so that a perfect forecast has an AC of 1. It is common to consider that the forecast remains useful if AC > 0.6 (Kalnay 2003, p. 27).
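A latitude-weighted anomaly correlation of this form might be computed as follows (a sketch; the grids are assumed flattened to 1-D anomaly arrays, with per-point latitudes supplied separately):

```python
import numpy as np

def anomaly_correlation(fcst_anom, ref_anom, lat_deg):
    """Cosine-of-latitude weighted anomaly correlation between forecast
    and reference anomalies (1-D arrays over grid points)."""
    w = np.cos(np.deg2rad(lat_deg))
    num = np.sum(w * fcst_anom * ref_anom)
    den = np.sqrt(np.sum(w * fcst_anom**2) * np.sum(w * ref_anom**2))
    return num / den
```

A perfect forecast gives AC = 1; forecasts are commonly judged useful while AC > 0.6.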
Figure 4 illustrates the success of the bias correction for the QG model. Both the a posteriori and the online correction of the bias significantly increase the forecast skill. However, the improvement obtained with the online correction is larger than that obtained with the a posteriori correction, indicating that the correction made during the model integration reduces the model error growth. Applying the bias correction every 6 h for a single time step gave slightly worse results than applying it every time step.
Similar results were obtained for the SPEEDY model and are presented for the 500-hPa zonal wind, temperature, and geopotential height in Fig. 5 (top row) for the month of November 1987. To show the vertical and monthly dependence of the correction, the time of crossing of AC = 0.6 is plotted for three vertical levels for the control (second row) and online corrected (debiased) SPEEDY forecasts (third row) as a function of the month. The bottom row presents the relative improvement. For the wind, the debiasing leads to an increase in the length of useful skill of over 60% at 850 hPa (where the errors are largest), about 50% at 500 hPa, and about 10% at 200 hPa, where the errors are smallest. For the temperature, where the skill is less dependent on pressure level, the improvements are between 20% and 40% at all levels. There is not much dependence on the annual cycle, possibly because the verification is global.
As in the QG model, a bias correction made during the model integration is more effective than a bias correction performed a posteriori, although they both result in significant improvements. This is important because it indicates that the model deficiencies do not simply add errors; external errors are amplified by internal error growth. Further iteration of the procedure does not improve model forecasts. That is, finding the mean 6-h forecast error in the debiased model M+ (10) and correcting the forcing again does not extend the usefulness of forecasts.
The positive impact of the interactive correction is also indicated by an essentially negligible mean error in the debiased QG model (not shown). The correction of SPEEDY by 〈δxa6〉 removes the large polar errors from the mean error fields, but some of the subpolar features remain with smaller amplitudes (cf. Fig. 6 with Fig. 3). This suggests that a nonlinear correction to the SPEEDY model forcing may be more effective.
b. Error growth
The growth of the forecast error variance may be modeled by the simple relation

dα/dt = βα + δ, (12)

where α is the variance of the error anomalies, β is the growth rate of error anomalies due to instabilities (internal), and δ is the growth rate due to model deficiencies (external). These error growth rate parameters may be estimated from the AC for the control and debiased model forecasts. The 500-hPa November 1987 estimates of these growth rates (Table 1) demonstrate a significant reduction in the external error growth rate resulting from online state-independent error correction. The only exception is the moisture, suggesting that the correction of the moisture bias conflicts with the parameterization tuned to the control model bias. As could be expected, bias correction changes the internal error growth rate much less than the external rate.
c. Diurnal bias correction
In addition to the time-averaged analysis corrections, the leading EOFs of the anomalous analysis corrections are computed to identify the time-varying component. The spatial covariance of these corrections over the dependent sample (recomputed using the debiased model M+) is given by 𝗖δxδx ≡ 〈δxa6′ δxa6′T〉. The two leading eigenvectors of 𝗖δxδx identify patterns of diurnal variability that are poorly represented by the model (see Fig. 7, top row). Since SPEEDY solar forcing is constant over each 24-h period, it fails to resolve diurnal changes in forcing due to the rotation of the earth. Consequently, SPEEDY underestimates (overestimates) the near-surface daytime (nighttime) temperatures. This trend is most evident over land in the Tropics and summer hemisphere.
The time-dependent amplitude of the leading modes can be estimated by projecting the leading eigenvectors of 𝗖δxδx onto δxa6′(t) over the dependent sample. As expected from the wavenumber-1 structure of the EOFs, the signals are out of phase by 6 h (see Fig. 7, middle row). An estimate of the time dependence of the diurnal component of the error is generated by averaging the projection over the daily cycle for the years 1982–86. A diurnal correction of the seasonally debiased model M+ is then computed online by linearly interpolating EOFs 1 and 2 as a function of the time of day. The diurnally corrected model is denoted M++. Correction of the debiased SPEEDY forcing to include this diurnal component reduced the 6-h temperature forecast errors for the independent sample (1987), most notably over land (see Fig. 7, bottom row). Although more sophisticated GCMs include diurnal forcings, it is still common for their forecast errors to have a significant time-dependent signal (Marshall et al. 2003). This signal can be estimated and corrected as has been done here.
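The construction of the diurnal-cycle estimate, projecting the anomalous corrections onto the leading EOFs and averaging by time of day, can be sketched as follows (a hypothetical fragment; the EOFs are assumed precomputed as unit-norm rows):

```python
import numpy as np

def mean_diurnal_cycle(correction_anoms, eofs, steps_per_day=4):
    """Average the EOF amplitudes of the anomalous analysis corrections
    over the daily cycle, then reconstruct in state space.

    correction_anoms: (N, N_gp) anomalies at 6-h intervals;
    eofs: (k, N_gp) leading eigenvectors as unit-norm rows.
    Returns (steps_per_day, N_gp): mean correction for each time of day.
    """
    amplitudes = correction_anoms @ eofs.T           # (N, k) projections
    cycle = np.zeros((steps_per_day, eofs.shape[0]))
    for s in range(steps_per_day):
        # mean amplitude at this time of day (e.g., 00, 06, 12, 18 UTC)
        cycle[s] = amplitudes[s::steps_per_day].mean(axis=0)
    return cycle @ eofs                              # back to state space
```

Linear interpolation between the rows of the returned array then gives a correction valid at any time of day.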
5. State-dependent correction
a. Leith’s empirical correction operator
The time series of anomalous analysis corrections provides a residual estimate of the linear state-dependent model error. Leith (1978) suggested that these corrections could be used to form a state-dependent correction. Leith sought an improved model of the form

ẋ = M++(x) + 𝗟x′, (13)

where 𝗟x′ is the state-dependent error correction. The tendency error of the improved model is given by

g = M++(xt) + 𝗟xt′ − ẋt, (14)

where ẋt is the instantaneous time derivative of the reanalysis state. The mean square tendency error of the improved model is given by 〈gTg〉. Minimizing this tendency error with respect to 𝗟, Leith’s state-dependent correction operator is given by

𝗟 = 〈[ẋt − M++(xt)] xt′T〉 〈xt′ xt′T〉−1, (15)

where ẋt is approximated with finite differences by

ẋt(t) ≈ [xt(t + 1) − xt(t)]/Δt, (16)

and Δt = 6 h for the reanalysis. Note that the term ẋt − M++(xt) can then be estimated at time t using only the analysis corrections, namely,

ẋt(t) − M++[xt(t)] ≈ δxa6(t + 1)/Δt. (17)
This method of approximating ẋt − M++(xt) is attractive because the analysis corrections of an operational model are typically generated during preimplementation testing. As a result, the operator 𝗟 may be estimated with no additional model integrations.
To estimate 𝗟, we first recompute the time series of residuals δxa6′(t) using the online debiased and diurnally corrected model M++. The cross covariance (Bretherton et al. 1992) of the analysis corrections with their corresponding reanalysis states is given by 𝗖δxxt ≡ 〈δxa6′ xt′T〉, the lagged cross covariance is given by 𝗖′δxxt ≡ 〈δxa6′(t) xt′T(t − 1)〉, and the reanalysis state covariance is given by 𝗖xtxt ≡ 〈xt′ xt′T〉. The covariances can be computed offline separately on time series pairs δxa6′ and xt′ corresponding to each month so that each month has its own covariance matrices. In computing the covariance matrices, we found that weighting each grid point by the cosine of latitude made little difference, a result consistent with Wallace et al. (1992).
The finite-difference approximation of ẋt − M++(xt) given by (17) results in an estimate of 𝗟 in terms of the covariance matrices 𝗖′δxxt and 𝗖xtxt. The empirical correction operator is given by

𝗟 = 𝗖′δxxt 𝗖xtxt−1/Δt. (18)

Note that w ≡ 𝗖xtxt−1 x′ is the anomalous state normalized by its empirically derived covariance, and Δt 𝗟x′ = 𝗖′δxxt w is the best estimate of the anomalous analysis correction corresponding to the anomalous model state x′ over the dependent sample. Assuming that sampling errors are small and that the external forecast error evolves linearly with respect to lead time, this correction should improve the forecast model M++. Small internal forecast errors grow exponentially with lead time, but those forced by model error tend to grow linearly (e.g., Dalcher and Kalnay 1987; Reynolds et al. 1994). Therefore, the Leith operator should provide a useful estimate of the state-dependent model error.
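Under the finite-difference approximation (17), the operator amounts to a lagged linear regression of the anomalous corrections on the preceding state anomalies. A minimal sketch (a hypothetical NumPy fragment using plain sample covariances, without any localization):

```python
import numpy as np

def leith_operator(corr_anoms, state_anoms, dt=6.0):
    """Estimate Leith's state-dependent correction operator from time
    series of anomalous analysis corrections and reanalysis anomalies.

    corr_anoms, state_anoms: arrays of shape (N, N_gp) at 6-h intervals.
    The correction at step t is regressed on the state at step t - 1.
    """
    dxa = corr_anoms[1:]   # dxa6'(t)
    x = state_anoms[:-1]   # xt'(t - 1)
    c_cross = dxa.T @ x / len(x)   # lagged cross covariance
    c_state = x.T @ x / len(x)     # reanalysis state covariance
    # L = c_cross @ inv(c_state) / dt, computed via a linear solve
    return np.linalg.solve(c_state.T, c_cross.T).T / dt
```

In practice c_state is badly conditioned for large Ngp and short training samples, which is precisely the sampling problem that motivates the localization described in the next subsection.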
Using a model with very few degrees of freedom and errors designed to be strongly state dependent, DelSole and Hou (1999) found that the Leith operator was successful in correcting state-dependent errors relative to a nature run. However, direct computation of 𝗟x′ requires O(Ngp³) floating point operations (flops) every time step. For the global QG model, Ngp = O(10⁴); for the SPEEDY model, Ngp = O(10⁵); and for operational models, Ngp = O(10⁷). It is clear that this operation would be prohibitive. Approaches to reduce the dimensionality of the Leith correction are now described.
b. Covariance localization
Covariance matrices 𝗖′δxxt and 𝗖xtxt may be computed offline using the dependent sample. To make the computation more feasible, correlations between different anomalous dynamical variables at the same level are ignored, for example, u and T at sigma level 0.510 in SPEEDY. Correlations between identical anomalous dynamical variables at different levels, for example, q at 800 and 500 hPa in QG, are ignored as well. Miyoshi (2005) found these correlations to be significantly smaller than those between identical variables at the same level in the SPEEDY model. The assumption of univariate and unilevel covariances could be removed in an operational implementation by combining geostrophically balanced variables into a single variable before computing covariances, as is usually done in variational data assimilation (Parrish and Derber 1992). To further simplify evaluation of the procedure, we consider only covariances at identical levels for the variables u, υ, and T; covariances in Q and log(ps) are ignored. In doing so, a block diagonal structure is introduced to 𝗖′δxxt and 𝗖xtxt, with each block corresponding to the covariance of a single variable at a single level.
A localization constraint is also imposed on the covariance matrices by setting to zero all covariance elements corresponding to grid points farther than 3000 km from each other; in an infinite dependent sample, these elements would be approximately zero. This constraint imposes a sparse, banded structure on each block of 𝗖_{δx′x′_t} and 𝗖_{x′_t x′_t}. Together, the two constraints significantly reduce the flops required to compute 𝗟x′. Another advantage of the reduced operator is that it is less sensitive to sampling errors related to the length of the reanalysis time series. Figure 8 illustrates the variance explained by the first few SVD modes of the dense and sparse correction operators corresponding to the January zonal wind at sigma level 0.2. The localization constraint is imposed on the covariance block corresponding to u at sigma level 0.2 in January for both 𝗖_{δx′x′_t} and 𝗖_{x′_t x′_t} before the SVD of 𝗟 = 𝗖_{δx′x′_t}𝗖_{x′_t x′_t}⁻¹. The explained variance is given by
EV(k) = (Σ_{i=1}^{k} σ_i²) / (Σ_{i=1}^{N_u} σ_i²),

where σ_i is the ith singular value and the univariate covariance block is N_u × N_u. This quantity is useful in determining how many modes may be truncated in approximating the correction operator 𝗟. To explain 90% of the variance, more than 400 modes of the dense correction operator are required, whereas only 40 modes of the sparse operator are needed. Covariance localization has the effect of concentrating the physically important correlations into the leading modes.
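The mode-truncation criterion can be sketched as a small helper (a minimal illustration of the explained-variance ratio above; the spectra below are synthetic, standing in for the sparse and dense operators):

```python
import numpy as np

def modes_needed(singular_values, frac=0.90):
    """Smallest k whose leading singular values explain `frac` of the variance."""
    v = np.asarray(singular_values, dtype=float) ** 2
    cum = np.cumsum(v) / v.sum()
    return int(np.searchsorted(cum, frac) + 1)

# A rapidly decaying spectrum (sparse-operator-like) needs far fewer modes
# than a flat spectrum (dense-operator-like); both spectra are illustrative.
fast = 2.0 ** -np.arange(100)
flat = np.ones(100)
```

With the decaying spectrum, `modes_needed(fast)` returns a handful of modes, while the flat spectrum requires nearly all of them, mirroring the contrast between the sparse and dense operators in Fig. 8.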
To test Leith’s empirical correction procedure, several 5-day forecasts similar to those described earlier are performed. The initial conditions are taken from a sample independent of the one used to estimate the correction operator 𝗟. The first forecast is made with the online state-independent corrected model M++. A second forecast is made using the state-dependent error-corrected model (13). Forecasts corrected online by the dense (univariate covariance) operator 𝗟 performed approximately 10% worse (and took approximately 100 times longer to generate) than those corrected by the sparse operator, illustrating the sampling problems that arise without localization. Even with the sparse operator, generating forecasts corrected online took a prohibitively long time, and the forecasts improved by only 1 h. Thus, despite these attempts to reduce the dimensionality of the correction operator, the sparse correction still requires too many flops to be useful with an operational model. A further reduction of the degrees of freedom, using only the relevant structure of the correction operator, is described below.
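The banded localization used in these experiments can be sketched as follows (the grid geometry, spacing, and data here are illustrative toy values; a real implementation would use great-circle distances on the model grid):

```python
import numpy as np

# Toy 1-D periodic ring of grid points with uniform spacing; covariances
# between points farther than 3000 km apart are set to zero.
n_gp = 60
spacing_km = 300.0
idx = np.arange(n_gp)
# separation in grid points on a periodic ring, then in kilometers
sep = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                 n_gp - np.abs(idx[:, None] - idx[None, :]))
dist_km = sep * spacing_km
mask = dist_km <= 3000.0           # sparse, banded localization mask

rng = np.random.default_rng(1)
sample = rng.standard_normal((n_gp, 500))  # toy dependent sample
C = np.cov(sample)                 # dense sample covariance block
C_sparse = np.where(mask, C, 0.0)  # localized (banded) covariance block
```

Zeroing the distant elements both removes spurious sampling correlations and makes each block sparse, which is what reduces the cost of applying 𝗟.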
c. Low-dimensional approximation
An alternative formulation of Leith’s correction operator is introduced here, based on the correlation of the leading SVD modes. The dependent samples of anomalous analysis corrections and model forecasts are normalized at each grid point by their standard deviations so that they have unit variance; they are denoted δx̃′ and x̃′, respectively. They are then used to compute the cross covariance 𝗖_{δx̃′x̃′} ≡ 〈δx̃′x̃′ᵀ〉; the normalization makes 𝗖_{δx̃′x̃′} a correlation matrix. The matrix is then restricted to the same univariate covariance localization described previously. The cross covariance is then decomposed to identify pairs of spatial patterns that explain as much as possible of the mean-squared temporal covariance between the fields δx̃′ and x̃′. The SVD is given by

𝗖_{δx̃′x̃′} = 𝗨Σ𝗩ᵀ,
where the columns of the orthonormal matrices 𝗨 and 𝗩 are the left and right singular vectors u_k and v_k, and Σ is a diagonal matrix containing the singular values σ_k, whose magnitude decreases with increasing k. The leading patterns u₁ and v₁, associated with the largest singular value σ₁, are the dominant coupled signals in the time series δx̃′ and x̃′, respectively (Bretherton et al. 1992); patterns u_k and v_k represent the kth most significant coupled signals. Expansion coefficients or principal components (PCs) a_k(t) and b_k(t) are obtained by projecting the coupled signals u_k and v_k onto δx̃′ and x̃′ as follows:

a_k(t) = u_kᵀ · δx̃′(t),  b_k(t) = v_kᵀ · x̃′(t).
PCs describe the magnitude and time dependence of the projection of the coupled signals onto the reference time series.
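The normalization, cross correlation, SVD, and PC projection can be sketched together (the series below are synthetic stand-ins for the training samples δx̃′ and x̃′; the weak shifted coupling is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_gp, n_t = 40, 300  # toy grid size and dependent-sample length

# Synthetic training series: state anomalies and analysis corrections,
# normalized at each grid point to unit variance.
x_t = rng.standard_normal((n_gp, n_t))
dx_t = 0.5 * np.roll(x_t, 3, axis=0) + rng.standard_normal((n_gp, n_t))
x_t = x_t / x_t.std(axis=1, keepdims=True)
dx_t = dx_t / dx_t.std(axis=1, keepdims=True)

# Cross correlation and its SVD: C = U S V^T
C = dx_t @ x_t.T / n_t
U, S, Vt = np.linalg.svd(C, full_matrices=False)

# Expansion coefficients (PCs) from projecting the singular vectors
a = U.T @ dx_t   # a_k(t) = u_k . dx~'(t)
b = Vt @ x_t     # b_k(t) = v_k . x~'(t)
```

NumPy returns the singular values in decreasing order, so the leading rows of `a` and `b` correspond to the dominant coupled signals.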
The heterogeneous correlation maps indicate how well the dependent sample of normalized anomalous analysis corrections δx̃′ can be predicted from the principal components b_k (derived from the normalized forecast state anomalies x̃′). The kth map is the vector of correlation coefficients between the gridpoint values of the normalized anomalous analysis corrections δx̃′ and the kth expansion coefficient of x̃′, namely b_k. The SPEEDY heterogeneous correlation maps (Fig. 9) corresponding to the three leading coupled SVD modes between the normalized anomalous analysis corrections and model states illustrate a significant relationship between the structure of the 6-h forecast error and the model state, at least for the dependent sample. Locally, the time correlation reaches values of 60%–80%, but the global average is still small.
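A minimal sketch of the heterogeneous correlation map, computed gridpoint by gridpoint as described above (the PC and correction series here are synthetic, with a deliberately coupled region at the first few grid points):

```python
import numpy as np

def heterogeneous_correlation(dx_t, b_k):
    """Gridpoint-by-gridpoint correlation of corrections dx_t with PC b_k."""
    dx_c = dx_t - dx_t.mean(axis=1, keepdims=True)
    b_c = b_k - b_k.mean()
    num = dx_c @ b_c
    den = np.sqrt((dx_c ** 2).sum(axis=1) * (b_c ** 2).sum())
    return num / den

rng = np.random.default_rng(3)
n_gp, n_t = 30, 500
b_k = rng.standard_normal(n_t)           # toy expansion coefficient series
dx_t = rng.standard_normal((n_gp, n_t))  # toy correction series
dx_t[:5] += 2.0 * b_k                    # strongly coupled region (synthetic)
rho = heterogeneous_correlation(dx_t, b_k)
```

In this toy setting `rho` is large only in the coupled region, mirroring the locally high (60%–80%) but globally small correlations seen in Fig. 9.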
d. Low-dimensional correction
The most significant computational expense in Leith’s empirical correction is solving the N_gp-dimensional linear system 𝗖_{x′_t x′_t}w(T) = x′(T) for w at each time T during a forecast integration. Taking advantage of the cross covariance SVD, and assuming that 𝗖_{δx′x′_t} ≈ 𝗖_{δx′x′_f} and 𝗖_{x′_t x′_t} ≈ 𝗖_{x′_f x′_f}, this computation may be reduced by expressing w = 𝗖_{x′_f x′_f}⁻¹x′ as a linear combination of the orthonormal right singular vectors v_k. These assumptions are reasonable because the aim is to estimate the tendency error at time T, not at T + 6 h. The empirical correction operator is given by
𝗟x′ ≈ Σ_{k=1}^{K} σ_k γ_k u_k,  with  w ≈ Σ_{k=1}^{K} γ_k v_k,

where for K < N_gp only the component of w in the K-dimensional space spanned by the right singular vectors v_k can contribute to this empirical correction. This dependence can be exploited as follows.
Assume the model state at time T during a forecast integration is given by x(T). The normalized state anomaly x̃′(T) is given by x(T) − 〈x_t〉, normalized at each state vector element by the standard deviation of x′_f6 over the dependent sample. The component of x̃′(T) explained by the signal v_k may then be estimated by computing the new expansion coefficient (PC) b_k(T) = v_kᵀ · x̃′(T). The right PC covariance over the dependent sample is given by 𝗖_bb = 〈bbᵀ〉, calculated using the b_k from (21). Because of the orthogonality of the right singular vectors v_k, the PCs b_k and b_j are uncorrelated for k ≠ j in an infinite sample; accordingly, the finite sample covariance 𝗖_bb is restricted to be diagonal. The linear system
𝗖_bb γ(T) = b(T)   (24)

may then be solved for γ at time T. The cost of solving (24) is O(K), where K is the number of SVD modes retained, as opposed to the O(N²_gp) linear system required by Leith’s full-dimensional empirical correction. The solution of (24) gives an approximation of w(T), namely

w(T) ≈ w̃(T) = Σ_{k=1}^{K} γ_k(T) v_k,
where w̃(T) is generated by solving the linear system (24) in the space of the leading K singular vectors, while w(T) requires solving the N_gp-dimensional linear system (25) in the space of the model grid. Writing uᶜ_k for the error signal u_k weighted at each state vector element by the standard deviation of δx′_a6 over the dependent sample, the kth component of the state-dependent error correction at time T is given by

z_k(T) = σ_k γ_k(T) uᶜ_k,
where σ_k is the coupling strength over the dependent sample, and the weight γ_k(T) assigned to the correction signal uᶜ_k indicates the sign and magnitude of a correction that may amplify, dampen, or shift the flow anomaly local to the pattern uᶜ_k. The low-dimensionally corrected model is then given at time T by

dx/dt = M++(x) + Σ_{k=1}^{K} z_k(T),   (29)
so that during forecasts, the few (K) dominant model state signals v_k are projected onto the anomalous, normalized model state vector. The resulting sum Σ_{k=1}^{K} z_k is the best representation of the original analysis correction anomalies δx′_a6 in terms of the current forecast state x(T). If the correlation between the normalized state anomaly x̃′(T) and the local pattern v_k is small, the new expansion coefficient b_k(T) will be negligible, no correction by uᶜ_k will be made at time T, and therefore no harm will be done to the forecast. This fact is particularly important with respect to predicting behavior that may vary on a time scale longer than the training period, for example, El Niño–Southern Oscillation (ENSO) events (Barnston et al. 1999).
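The per-time-step correction described above can be sketched as a single function (a toy illustration with synthetic singular vectors; the function name, arguments, and dimensions are hypothetical, not from the paper's code):

```python
import numpy as np

def low_dim_correction(x_now, x_mean, x_std, V, U_c, S, c_bb_diag, K=10):
    """One state-dependent correction term: project the normalized anomaly
    onto the leading right singular vectors, solve the diagonal PC system
    (O(K)), and rebuild the correction from the weighted left vectors."""
    x_tilde = (x_now - x_mean) / x_std   # normalized state anomaly
    b = V[:, :K].T @ x_tilde             # b_k(T) = v_k . x~'(T)
    gamma = b / c_bb_diag[:K]            # diagonal solve of C_bb gamma = b
    return U_c[:, :K] @ (S[:K] * gamma)  # sum_k sigma_k gamma_k u^c_k

# Toy setup: orthonormal singular vectors from a random matrix.
rng = np.random.default_rng(5)
n = 25
U_c, S, Vt = np.linalg.svd(rng.standard_normal((n, n)))
V = Vt.T
x_mean, x_std = np.zeros(n), np.ones(n)
c_bb_diag = np.ones(n)                   # toy diagonal of C_bb
z = low_dim_correction(rng.standard_normal(n), x_mean, x_std,
                       V, U_c, S, c_bb_diag, K=5)
```

Note the safety property claimed in the text: if the state anomaly is orthogonal to the leading v_k (here, e.g., a trailing singular vector), every b_k(T) vanishes and the correction is zero, so no harm is done to the forecast.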
A pair of examples of the correction procedure is shown in Fig. 10. SVD mode k = 2 in T (K) at sigma level 0.95 (top left) suggests that warm anomalies over the western Pacific are typically too warm. Mode k = 3 in u (m s⁻¹) at sigma level 0.2 (top right) suggests that fronts of the shape v₃ over the eastern Pacific should be farther northeast. Online low-dimensional state-dependent correction reduces the local RMS error by 21% in 6-h forecasts of T (left) and by 14% in u (right). Retaining K = 10 modes of the SVD, state-dependent correction of both the QG and SPEEDY models by (29) improved forecasts by a few hours, indicating that only a small component of the error can be predicted from the model state over the independent sample (1987). The low-dimensional correction outperformed the sparse Leith operator (Table 2), indicating that the SVD truncation removes spurious correlations unaffected by the covariance localization. Corrections with K = 5 and K = 20 modes of the SVD were slightly less successful, and heterogeneous correlation maps for modes k > 20 did not exceed 60% for the dependent sample. The corrections are most significant in regions where ρ is large and at times when the state anomaly projects strongly onto the leading SVD modes (see examples in Fig. 10). The globally averaged improvement is small since the state-dependent corrections are local in space and time. Nevertheless, given that the computational expense of the low-dimensional correction is orders of magnitude smaller than that of even the sparse correction operator, and the results are better, it appears to be a promising approach to generating state-dependent corrections.
The low-dimensional representation of the error is advantageous compared to Leith’s correction operator for several reasons. First, it reduces the sampling errors that have persisted despite covariance localization by identifying the most robust coupled signals between the analysis correction and forecast state anomalies. Second, the added computation is trivial; it requires solving a K-dimensional linear system and computing K inner products for each variable at each level. Finally, the SVD signals identified by the technique can be used by modelers to isolate flow-dependent model deficiencies. In ranking these signals by strength, SVD gives modelers the ability to evaluate the relative importance of various model errors.
Covariance localization (which led to better results when using the Leith operator) is validated by comparing the signals u_k and v_k obtained from the SVD of the sparse and dense versions of 𝗖_{δx̃′x̃′}. The most significant structures in the dominant patterns (e.g., u₁, v₁) of the sparse covariance matrix are very similar to those obtained from the dense version. However, the dominant patterns of the dense covariance matrix also contain spurious, small-amplitude noisy structures related to the nonphysical, nonzero covariance between distant grid points; given a long enough reanalysis time series, this structure would disappear. The structures identified in the sparse covariance matrix are thus good approximations of the physically meaningful structures of the dense covariance matrix. Qualitatively similar structures (e.g., mean, variance, EOFs) were observed when training was limited to just one year, suggesting that an operational implementation of this method should not require several years of training.
Since the SVD is performed independently on univariate blocks of 𝗖_{δx̃′x̃′}, the patterns u₁ in u and u₁ in υ are not necessarily in geostrophic balance. Nevertheless, SPEEDY variables presumably remain in approximate geostrophic balance after state-dependent correction because there exist patterns u_j in u and u_k in υ that are in geostrophic balance with u₁ in υ and u₁ in u, respectively. Also, the large spatial extent of the covariance localization produces SVD patterns that are synoptically balanced.
6. Summary and discussion
This paper considers the estimation and correction of state-independent (seasonal and diurnal) and state-dependent model errors in a pair of simple GCMs. The two approaches used to create the time series of analysis corrections and model states needed for training (direct insertion and nudging toward a reanalysis used as an estimate of the true atmosphere) are simple: they require essentially a single long model integration, and they give similar estimates of the bias. In an operational setting, time series of model states and analysis increments are already available from preimplementation testing.
Although the procedure is inspired by Leith (1978) and DelSole and Hou (1999), it is tested here using realistic models, and using as nature a reanalysis under the assumption that it is much closer to the real atmosphere than any model. The online state-independent correction, including the EOFs associated with the diurnal cycle, resulted in a significant improvement in forecast skill (as measured by the AC). Unlike in Saha (1992), this improvement was larger than that obtained by a posteriori correction of the bias, indicating the importance of correcting the error during the integration. The results also differ significantly from those of DelSole and Hou (1999), who obtained a very small improvement from the state-independent correction and a very large improvement from the state-dependent correction using Leith’s formulation. Their results were probably optimistic in that their model errors were by construction very strongly state dependent. The results presented here, found using global atmospheric models and comparing with a reanalysis of the real atmosphere, are probably more realistic with respect to the relative importance of mean and state-dependent corrections. Nevertheless, our results are probably optimistic as well, since the improvement of the debiased model M+ (10) relative to the biased model M (3) is larger for the simple GCMs tested here than could be expected of an operational model. It is not clear how large the coupled analysis correction and forecast state signals would be for more sophisticated models, but operational evidence suggests that state-dependent errors are not negligible.
It was necessary to introduce a horizontal and vertical localization of the components of Leith’s empirical correction operator to reduce sampling problems. Multilevel and multivariate covariances were ignored to make the computation practical. The assumptions underlying the localization require model-dependent empirical verification; implementation on a more realistic model may require that the localization be multivariate in order to have a balanced correction. The Leith–DelSole–Hou method with the original dense covariance matrix makes forecasts worse. With the sparse covariance, however, there is an improvement of about 1 h, still at a large computational cost.
A new method of state-dependent error correction was introduced, based on SVD of coupled analysis correction and forecast state anomalies. The cross covariance is the same as that which appears in Leith’s formulation, but it would be prohibitive to compute it using an operational model. The new method, based on using the SVD heterogeneous correlation maps as the basis of linear regression, doubles the improvement and is many orders of magnitude faster. The method can be applied at a rather low cost, both in the training and in the correction phases, and yields significant forecast improvements, at least for the simple but realistic global QG and SPEEDY models. It could be applied with low computational cost and minimal sampling problems to data assimilation and ensemble prediction, applications where accounting for model errors has been found to be important. The method may be particularly useful for forecasting of severe weather events where a posteriori bias correction will typically weaken anomalies. The patterns identified by SVD could also be used to identify sources of model deficiencies and thereby guide future model improvements.
A disadvantage of empirical correction is that operational model upgrades may require fresh computation of the dominant correction and state anomaly signals. However, analysis increments generated during preimplementation tests of an operational model can be used as a dependent sample to estimate the model error and state anomaly covariance. With such a collection of past data, it may not be necessary to run an operational model in order to generate the necessary sample.
Flow-dependent estimates of model error are of particular interest to the community attempting to develop an efficient ensemble Kalman filter for data assimilation (Bishop et al. 2001; Whitaker and Hamill 2002; Ott et al. 2004; Hunt et al. 2004). Naive data assimilation procedures assume the model error to be constant, and represent its effect by adding random noise to each ensemble member. More sophisticated procedures add a random selection of observed model tendencies to each ensemble member, or artificially increase the background forecast uncertainty through variance inflation. Other state-dependent procedures increase the forecast uncertainty in local contracting directions of state space (Danforth and Yorke 2006). The SVD technique described in this paper can be combined with an ensemble-based data assimilation scheme to provide time- and state-dependent estimates of the model error, for example, in the local ensemble transform Kalman filter (LETKF) being developed by the chaos group at the University of Maryland (Ott et al. 2004; Hunt et al. 2004). The empirical correction method described here involves local computations commensurate with the treatment of covariance localization in the LETKF. In a data assimilation implementation, the SVD method would involve appending a K-dimensional estimate of the model error to each ensemble member.
This research contributes to the lead author’s Ph.D. dissertation. The authors thank T. DelSole, J. Ballabrera, R. F. Cahalan, Z. Toth, S. Saha, J. Derber, and the anonymous reviewers for insightful comments and suggestions. The authors also thank F. Molteni and F. Kucharski for kindly providing the QG and SPEEDY models, and E. Kostelich for his essential guidance on their initial implementation. This research was supported by NOAA THORPEX Grant NOAA/NA040AR4310103 and NASA-ESSIC Grant 5-26058.
Corresponding author address: Christopher M. Danforth, Department of Mathematics, University of Maryland, College Park, College Park, MD 20742. Email: email@example.com