
Preemptive Forecasts Using an Ensemble Kalman Filter

  • 1 Department of Geography and Earth Science, University of North Carolina at Charlotte, Charlotte, North Carolina

Abstract

An ensemble Kalman filter (EnKF) estimates the error statistics of a model forecast using an ensemble of model forecasts. One use of an EnKF is data assimilation, resulting in the creation of an increment to the first-guess field at the observation time. Another use of an EnKF is to propagate error statistics of a model forecast forward in time, such as is done for optimizing the location of adaptive observations. Combining these two uses of an ensemble Kalman filter, a “preemptive forecast” can be generated. In a preemptive forecast, the increment to the first-guess field is, using ensembles, propagated to some future time and added to the future control forecast, resulting in a new forecast. This new forecast requires no more time to produce than the time needed to run a data assimilation scheme, as no model integration is necessary. In an observing system simulation experiment (OSSE), a barotropic vorticity model was run to produce a 300-day “nature run.” The same model, run with a different vorticity forcing scheme, served as the forecast model. The model produced 24- and 48-h forecasts for each of the 300 days. The model was initialized every 24 h by assimilating observations of the nature run using a hybrid ensemble Kalman filter–three-dimensional variational data assimilation (3DVAR) scheme. In addition to the control forecast, a 64-member forecast ensemble was generated for each of the 300 days. Every 24 h, given a set of observations, the 64-member ensemble, and the control run, an EnKF was used to create 24-h preemptive forecasts. The preemptive forecasts were more accurate than the unmodified, original 48-h forecasts, though not quite as accurate as the 24-h forecast obtained from a new model integration initialized by assimilating the same observations as were used in the preemptive forecasts. The accuracy of the preemptive forecasts improved significantly when 1) the ensemble-based error statistics used by the EnKF were localized using a Schur product and 2) a model error term was included in the background error covariance matrices.

Corresponding author address: Brian J. Etherton, Dept. of Geography and Earth Science, University of North Carolina at Charlotte, 9201 University City Blvd., Charlotte, NC 28223. Email: betherto@uncc.edu


1. Introduction

Incorrect weather forecasts cause problems ranging from mere inconvenience to significant loss of life and property. Computer model forecasts, also referred to as guidance, are a valuable tool used by weather forecasters. Predictions by forecast agencies should improve if the accuracy of computer model guidance is increased. In addition to improving accuracy, producing guidance of similar accuracy but in less time is also beneficial to forecasters, as it would allow users of such guidance more time to make decisions. A forecast cycle (which for most global and regional models, usually starts at 0000, 0600, 1200, or 1800 UTC) consists of taking in all available observations, funneling those observations through a data assimilation scheme, initializing a forecast model, and then integrating that model out to the desired forecast time. This cycle of assimilation and integration often takes a couple of hours to complete.

A forecast ensemble is a set of computer forecasts of the atmosphere, each differing from the others in some way. Four methods that can be employed in the generation of a forecast ensemble are 1) using different forecast models (e.g., Krishnamurti et al. 1999), 2) using different options for parameterizing subgrid-scale physical processes (e.g., Stensrud et al. 2000), 3) using different initial conditions for the ensemble members (e.g., Toth and Kalnay 1993), and 4) stochastic methods (e.g., Roulston and Smith 2003). The mean of an ensemble provides a skillful forecast, on average more accurate than any of the ensemble members individually. In addition to the accurate forecast produced from the ensemble mean, the range of solutions produced by an ensemble provides valuable forecast information as well. Areas of low spread (where all forecasts are similar) indicate high confidence in the forecasts, whereas areas of high spread (very different forecasts) indicate low forecast confidence.

The focus of this manuscript is the “preemptive” forecast: a forecast updated with new observations, but without producing a new analysis and integrating the forecast model. The following five subsections of this introduction step through the tools used in making a preemptive forecast: (a) Kalman filtering for data assimilation, (b) the use of ensembles in targeting, (c) the issue of rank deficiency, (d) the treatment of model error, and finally (e) a description of what a preemptive forecast is.

a. Kalman filtering for data assimilation

The Kalman filter (KF; Kalman 1960; Kalman and Bucy 1961) is a method of estimating the state of a system. A KF consists of two steps: an update step, in which the estimate of the state of the system and the estimate of the uncertainty of that state are adjusted to new observations, and a forecast step, in which the updated state and its uncertainty estimate are propagated forward in time. The KF estimates the state of a system by interpolating the differences between observations and the first-guess field using two error covariance matrices: a background (or first guess) error covariance matrix and an observation error covariance matrix, often written as 𝗣 and 𝗥, respectively. The process of combining a first-guess field and observations into a new estimate of the state of a system is known as data assimilation.

In the KF, the errors of the first-guess field and the errors of the observational data are assumed to be Gaussian. The extended Kalman filter (EKF) is an extension of the KF for nonlinear dynamics, in which the error covariance is propagated forward in time using the tangent linear of the deterministic evolution of the control forecast; the EKF also accounts for nonlinear observation operators. A KF that uses ensembles to estimate the background error covariance matrix is known as an ensemble Kalman filter (EnKF). Evensen (2003) provides a thorough review of recent work in data assimilation using many different forms of ensemble Kalman filters. Some EnKFs add random perturbations to the observed values, and use a different set of such perturbations when updating the state of each ensemble member; these are known as “stochastic” ensemble Kalman filters, examples of which are given in Houtekamer and Mitchell (1998) and Burgers et al. (1998). An ensemble Kalman filter that does not stochastically account for the errors of the observations is the ensemble square root filter (EnSRF); for a review of EnSRFs, see Tippett et al. (2003). As with all EnKFs, the EnSRF estimates the background error covariance matrix as the product of a matrix of ensemble perturbations with its transpose, and thus the background error covariance used in EnKFs is a sample covariance.
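
For readers who prefer code to symbols, the following minimal sketch (Python with NumPy; not the implementation used in this study, and all array names are illustrative) forms a sample background error covariance from ensemble perturbations and applies the standard Kalman analysis update to a first-guess field.

```python
import numpy as np

def enkf_analysis(x_f, X_pert, H, R, y):
    """Kalman analysis update using an ensemble-estimated background covariance.

    x_f    : (n,)   first-guess (control) state
    X_pert : (n, K) ensemble perturbations, one column per member
    H      : (p, n) observation operator
    R      : (p, p) observation error covariance
    y      : (p,)   observations
    """
    K = X_pert.shape[1]
    Pf = X_pert @ X_pert.T / K                  # sample background error covariance
    S = H @ Pf @ H.T + R                        # innovation (observation-space) covariance
    innovation = y - H @ x_f
    return x_f + Pf @ H.T @ np.linalg.solve(S, innovation)   # first guess plus increment

# Tiny synthetic example: 5 state variables, 3 members, observations of variables 1 and 3.
rng = np.random.default_rng(0)
x_f = rng.normal(size=5)
X_pert = rng.normal(size=(5, 3))
H = np.zeros((2, 5)); H[0, 1] = 1.0; H[1, 3] = 1.0
R = 0.1 * np.eye(2)
y = H @ x_f + rng.normal(scale=0.3, size=2)
x_a = enkf_analysis(x_f, X_pert, H, R, y)
```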

b. Targeting

In addition to data assimilation, ensembles have been used for selecting the locations for adaptive, or targeted, observations. Ensemble techniques have been used to estimate forecast uncertainty (e.g., Lorenz and Emanuel 1998; Morss et al. 2001; Hansen and Smith 2000). The ensemble transform technique (Bishop and Toth 1999) and the subsequent ensemble transform Kalman filter (ETKF; Bishop et al. 2001) use information from ensembles to identify regions that, when sampled, would lead to forecast improvements. The ETKF produces an error covariance matrix associated with a first-guess field (an estimate of the uncertainty of the first-guess field) and the change to this error covariance matrix that would result from observations at a given location. The ETKF then propagates this information forward in time. Hamill and Snyder (2002) took a similar approach, but their work focused on reducing analysis uncertainty whereas Bishop et al. (2001) focused on reducing future forecast uncertainty.

c. Rank deficiency

Ensemble-based error statistics are nearly always rank deficient: the number of members in an ensemble is usually several orders of magnitude smaller than the number of degrees of freedom in the forecast model, so the ensemble-based covariance matrix contains far fewer independent pieces of information than its order. As a result, long-distance error correlations are more likely to be spurious than real. The localization of ensemble-based error statistics is a means of mitigating these spurious correlations. One approach to covariance localization is to limit the number of observations that can have an impact on a particular state variable at a particular location; Houtekamer and Mitchell (1998) and Evensen (2003), for example, use a cutoff radius so that observations are not assimilated beyond a certain distance from a grid point. Another approach, used in Houtekamer (2001) and Hamill et al. (2001) and adopted here, is to localize the ensemble-based background error statistics with a localization function: an element-by-element multiplication (a Schur, or Hadamard, product) of the localization function with the ensemble-based background error covariance matrix produces a localized covariance matrix, as sketched below.
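
The sketch below illustrates Schur-product localization, assuming the compactly supported fifth-order piecewise function usually associated with Eq. (4.10) of Gaspari and Cohn (1999); the coefficients are written from memory, and the distance matrix and cutoff value are illustrative.

```python
import numpy as np

def gaspari_cohn(r, c):
    """Fifth-order compactly supported correlation function of separation r.
    c is the half-width; the function decreases to zero at separation 2c."""
    z = np.abs(np.asarray(r, dtype=float)) / c
    rho = np.zeros_like(z)
    inner = z <= 1.0
    outer = (z > 1.0) & (z < 2.0)
    zi, zo = z[inner], z[outer]
    rho[inner] = -0.25 * zi**5 + 0.5 * zi**4 + 0.625 * zi**3 - (5.0 / 3.0) * zi**2 + 1.0
    rho[outer] = ((1.0 / 12.0) * zo**5 - 0.5 * zo**4 + 0.625 * zo**3
                  + (5.0 / 3.0) * zo**2 - 5.0 * zo + 4.0 - (2.0 / 3.0) / zo)
    return rho

def localize(P_ens, dist, cutoff):
    """Element-wise (Schur) product of the ensemble covariance with the localization
    matrix; covariances between points farther apart than 'cutoff' are zeroed."""
    return P_ens * gaspari_cohn(dist, cutoff / 2.0)
```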

d. Model error

An ensemble-based set of error statistics for the first-guess field is inherently on the subspace of the model attractor. As a result of the model not representing the system exactly, the ensemble-based covariance matrix may not represent the actual first-guess error statistics. For this reason, a “model error” term should be included in the background error covariance matrix (Dee 1995). In practice, this model error covariance matrix is difficult to estimate.

A common approach for dealing with model error is simply to inflate the covariances from the ensemble such that the amplitude of the ensemble-based background error covariance matrix is as large as the sum of the error covariances on and off the ensemble subspace. While this ensures that the relative sizes of the background error covariance matrix and the observation error covariance matrix are correct, it does nothing to represent background errors off of the ensemble subspace. Consider the eigenvector/eigenvalue decomposition of the background error covariance matrix: covariance inflation adjusts the trace of the matrix (the sum of all of its eigenvalues) so that its magnitude is representative of the magnitudes of the errors, but it does nothing to change the eigenvectors—the directions of error spanned by the background error covariance matrix.

An alternative to covariance inflation is to add a model error covariance matrix to the ensemble-based background error covariance matrix. Houtekamer et al. (2005) and Hamill and Snyder (2000) took this approach to incorporating error structures that were not on the ensemble subspace into the background error covariance matrix. In both cases, the authors used a three-dimensional variational data assimilation (3DVAR)-like, isotropic covariance matrix. Such a covariance matrix may or may not accurately represent the model error, but the model error statistics are very difficult to know, and using a quasi-isotropic covariance matrix to represent the model error results in more accurate analyses and forecasts than if this term were neglected. The contrast between the two approaches is sketched below.
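
The distinction between the two remedies can be seen in a few lines of NumPy (illustrative values only): multiplicative inflation rescales the ensemble covariance without adding new error directions, whereas adding a model error term raises the rank of the background error covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 20, 5
X = rng.normal(size=(n, K))                  # ensemble perturbations
P_ens = X @ X.T / K                          # rank at most K = 5

inflated = 2.0 * P_ens                       # multiplicative inflation: same eigenvectors
with_model_error = P_ens + 0.1 * np.eye(n)   # additive, isotropic stand-in for a model error term

print(np.linalg.matrix_rank(inflated))           # still 5: no new directions of error
print(np.linalg.matrix_rank(with_model_error))   # 20: the additive term supplies the missing directions
```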

e. Preemptive forecasting

In data assimilation, an EnKF uses ensemble-generated error statistics to produce an increment to the first-guess field. In targeting, the ETKF (Bishop et al. 2001) uses ensemble-generated error statistics to estimate the likely impact of observations on the error statistics of future forecasts. Combining these two methodologies, the change to a future forecast resulting from current observations can be calculated using ensemble-based error statistics. In the same manner that adding an increment to the first-guess field generates a new analysis, adding a propagated increment to a future forecast results in an updated, or preemptive, forecast. This technique is not dissimilar from the Kalman smoother (Evensen and van Leeuwen 2000) or a four-dimensional ensemble Kalman filter (Hunt et al. 2004), which also use covariances at more than one time. Just as covariances in space allow observations to update model states that are far from the observation location in space, covariances in time allow observations to update model states that are far from the observation location in time.

The most compelling reason for producing a preemptive forecast is speed. Given that the ensemble of forecasts already exists, a preemptive forecast takes no longer to produce than it takes to run a data assimilation scheme, which is less time than it would take to run the data assimilation scheme and then integrate the forecast model out to the desired forecast verification time. Assuming that preemptive forecasts are of similar skill to the updated model forecasts, preemptive forecasts can provide additional time for weather-related decision making.

In section 2 of this paper, the experimental design and the theory of the preemptive forecast are presented. Results from the observing system simulation experiment are given in section 3, and conclusions are stated in section 4.

2. Methods

a. Experimental framework

The same simple model framework used in Bishop et al. (2001), Etherton and Bishop (2004), and Bishop et al. (2006) was used to construct an observing system simulation experiment (OSSE). In this OSSE, a doubly periodic barotropic vorticity model was used to depict the state of a two-dimensional turbulent flow. The turbulent flow, also known as the nature run, was a 300-day integration of this barotropic model. An example of this turbulent flow, the vorticity field on day 150 of our 300-day nature run, is shown in Fig. 1. The governing equation for the model is
[Eq. (1)]
where Ψ is the streamfunction, δ and n are parameters associated with numerical diffusion, κ is a relaxation parameter, and β is the planetary vorticity gradient. The model covers a 32 × 32 gridpoint domain, for 1024 total grid points. A relaxation scheme relaxes the vorticity values at grid points by a quantity proportional to the difference between the zonally averaged vorticity, ∇2Ψ, and a reference zonal average of vorticity, ∇2Ψr. In this manner, local vorticity extrema are not preferentially eroded, and meridional gradients of zonally averaged vorticity are maintained.

A fictitious forecast agency is charged with predicting the state of this turbulent flow every 24 h, and uses the same barotropic vorticity model as was used to make the nature run to make its predictions. However, the vorticity relaxation used by the agency model is not the same as what was used for the nature run. Instead of relaxing back to the time-invariant barotropically unstable state that the nature run relaxes toward, the agency model relaxes toward a zonal state whose meridional vorticity distribution is given by the leftmost grid points of the model at the beginning of each forecast. Consequently, while the state relaxed to by the nature run is always the same, the state relaxed to by the agency forecast model varies randomly from forecast to forecast. Our experiment therefore was not a perfect-model experiment by virtue of this difference.

On each day of the 300-day simulation, observations of vorticity were taken and used to make a new estimate of the state. Vorticity observations were simulated by adding random values (representing observational errors) from a Gaussian distribution to values of the nature run. As shown in Fig. 1, there are 72 observations in the domain, and the root-mean-squared observational error was taken to be 1.4 × 10−7 s−1, which is approximately 1/100 of the average 24-h forecast error (2.3 × 10−5 s−1). Observations were assimilated using either a hybrid EnKF or a pure EnKF. To generate the ensemble for each of the 300 days, 64 initial perturbations were generated using the ETKF spherical simplex ensemble generation scheme described in Wang et al. (2004), and those initial conditions were integrated for 48 h using the agency forecast model.
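
The following sketch shows how the simulated observations might be drawn in an OSSE of this kind; the grid size, observation count, and observation error standard deviation come from the text, while the random placement of sites is purely illustrative (the actual network is the fixed set of dots in Fig. 1).

```python
import numpy as np

rng = np.random.default_rng(42)
n_grid = 32 * 32          # 1024 grid points
n_obs = 72                # routine observation sites
obs_err_std = 1.4e-7      # rms observational error of vorticity (s^-1), from the text

# Illustrative site selection only; the real network is fixed for all 300 days.
obs_sites = rng.choice(n_grid, size=n_obs, replace=False)

def simulate_obs(truth_vorticity):
    """Observations = nature-run vorticity at the sites plus Gaussian observational error."""
    return truth_vorticity[obs_sites] + rng.normal(scale=obs_err_std, size=n_obs)

# The matching observation operator is a 0/1 selection matrix, and the observation
# error covariance is diagonal with obs_err_std**2 (about 2.0e-14 s^-2) on the diagonal.
H = np.zeros((n_obs, n_grid)); H[np.arange(n_obs), obs_sites] = 1.0
R = obs_err_std**2 * np.eye(n_obs)
```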

While many different sets of error statistics were used for data assimilation (as will be outlined in the following subsections), it is the analysis made using the hybrid EnKF (with its isotropic 𝗕 matrix and no additional model error term) that serves as the official agency analysis. From this official analysis is produced the new agency forecast, and perturbations generated by the ETKF are added to this official analysis to produce the initial conditions for the 64 ensemble members. Thus, for each of the 300 days, there is only one 64-member forecast ensemble.

1) Hybrid ensemble Kalman filter

As given in Cohn (1997), the following equation,
xa(to) = xf(to) + 𝗣f(to, to)𝗛T[𝗛𝗣f(to, to)𝗛T + 𝗥]−1[y − 𝗛xf(to)],     (2)
is used to combine observations of vorticity, y, with the first-guess field (in our experiment, a control run) xf, to produce the new analysis, xa. The fields y, xf, and xa are all valid at time to, the observation time. The observation operator 𝗛 is a simple mapping in our OSSE as observations are taken at model grid points (the 𝗛 matrix elements are all either 0 or 1). The matrix 𝗥 denotes the corresponding observation error covariance matrix. Observation errors are assumed to be uncorrelated, and thus, the 𝗥 matrix was diagonal, with values along the diagonal equal to 1.4 × 10−7 s−1 squared, approximately 2.0 × 10−14 s−2.
For the cycling forecast model, the hybrid analysis scheme was used to assimilate the 72 observations. Analyses produced from the hybrid scheme are the “official” analyses from the forecast agency, and model integrations of these analyses serve as the official deterministic forecasts from the agency. The hybrid scheme (Hamill and Snyder 2000) approximates the forecast error covariance matrix 𝗣f, from Eq. (2), with a mix of parameterized covariances, 𝗕f, and flow-dependent, ensemble-based covariances, 𝗙f. The forecast error covariance matrix 𝗣f from Hamill and Snyder (2000) is given by
𝗣f(to, to) = α𝗙f(to, to) + (1 − α)𝗕f,     (3)
where α is a constant, chosen to be 0.5, the value that produced the smallest forecast errors for our system (as used in Etherton and Bishop 2004). Here, 𝗕f is a 3DVAR-like time-invariant forecast error covariance matrix based on an isotropic correlation function. As in Bishop et al. (2001), Etherton and Bishop (2004), and Bishop et al. (2006), the covariance cij between vorticity errors at the ith and jth grid points is given by
[Eq. (4)]
where D defines the correlation length scale, rij gives the separation distance between the ith and jth grid points, and the constant A gives the forecast error variance. Note that this function precisely describes the structure of the analysis increment that would be obtained from an observation of vorticity at the ith grid point using the isotropic error covariance model. This particular vorticity structure is proportional to the Laplacian of exp[ln(0.1)(rij/D)2] and hence is consistent with an exponential correlation model for the streamfunction. The factor ln(0.1) simply ensures that the streamfunction error correlation between grid points separated by distance D is 0.1. The parameters A and D were independently tuned to minimize the 24-h forecast error variance over a 50-day period.
The flow-dependent, ensemble-based covariance matrix, 𝗙f, is
𝗙f(to, to) = λ{[1/K]𝗫(to)𝗫(to)T},     (5)
where each column of the matrix 𝗫 is proportional to the difference between an ensemble member forecast and the control forecast (not the ensemble mean) at the time at which the observations are to be assimilated, and K is the number of ensemble members (64 in our experiment). As the perturbations are not taken with respect to the ensemble mean, there is a bias in the estimate of 𝗙f(to, to). However, in a nonlinear system, any ensemble-based Kalman update will introduce a bias, and so the bias we introduce by taking perturbations about the control run as opposed to the ensemble mean is suboptimal but not fatal to our approach. The parameter λ is a rescaling factor, calculated from the previous 5 days’ assimilation cycles such that the trace of 𝗙f at the observing sites added to the trace of 𝗥 equals the squared magnitude of the innovation vector, y − 𝗛xf(to), as sketched below. The 𝗕f matrix is also rescaled such that the trace of the rescaled 𝗕f and the trace of 𝗙f are the same.
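
A sketch of this innovation-based rescaling, assuming λ is chosen so that the trace of 𝗙f at the observing sites plus the trace of 𝗥 matches the mean squared magnitude of recent innovation vectors; the helper name and the guard against negative values are illustrative.

```python
import numpy as np

def rescale_factor(X_pert, H, R, recent_innovations):
    """Estimate lambda so that trace(H Ff H^T) + trace(R) matches the mean squared
    innovation magnitude over the recent assimilation cycles (5 days in the text)."""
    K = X_pert.shape[1]
    HX = H @ X_pert
    trace_unscaled = np.trace(HX @ HX.T) / K        # trace of the unscaled ensemble covariance at obs sites
    mean_sq_innov = np.mean([d @ d for d in recent_innovations])
    lam = (mean_sq_innov - np.trace(R)) / trace_unscaled
    return max(lam, 0.0)                             # illustrative guard against a negative estimate
```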

2) Pure ensemble Kalman filter

Data assimilation can be done using only ensemble-based error statistics, the 𝗙f matrix alone [setting α to 1 in Eq. (3)]. In addition, when using the ensemble Kalman filter for preemptive forecasting, we do not incorporate the 𝗕f matrix into the background error covariance matrix. For both data assimilation and preemptive forecasting, the ensemble-based covariance matrix must then stand on its own, and so we take further steps to improve it.

In addition to rescaling the magnitudes of the covariances, it is critical to apply a Schur product of the ensemble-based background error covariance matrix, 𝗙f, with a localization function. The function given in Eq. (4.10) of Gaspari and Cohn (1999) serves this purpose. The localization function is represented as ρ(to, to), where to is the observation time. Figure 2a shows the localization function for an observation taken at grid point (11, 15).

Recall that while the state relaxed to in the nature run is always the same, the state relaxed to by the agency forecast model varies for each forecast. The state that the agency model relaxes to generally has weaker vorticity gradients than the state relaxed to by the nature run. It is thus necessary to add in a term representing the model error to the background error covariance matrix. This term is symbolized by 𝗤f(to, to) (the specifics of how we calculate 𝗤f can be found in section 2b). The equation for the rescaled and localized ensemble-based background error covariance matrix including a model error term is
𝗣f(to, to) = γρ(to, to) · {[1/K]𝗫(to)𝗫(to)T} + 𝗤f(to, to).     (6)
The parameter γ is a rescaling factor, chosen such that the sum of the traces of γρ(to, to) · 𝗛𝗫(to)𝗫(to)T𝗛T, 𝗛𝗤f(to, to)𝗛T, and 𝗥 equals the squared magnitude of the innovation vector, y − 𝗛xf(to). In this approach, the magnitudes of the 𝗤f and 𝗥 matrices are fixed and constant every day, and it is the ensemble-based background error covariance matrix that is rescaled. Testing was done to determine the optimal length scale for the localization function ρ(to, to): lengths between 500 and 2000 km were tried, 300 days of forecasts were made from the EnKF-produced analyses for each candidate length, and the average vorticity error of those 300 forecasts was calculated. A length scale of 1000 km produced the smallest forecast errors; a sketch of this construction and tuning follows.
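
The sketch below shows how the covariance of Eq. (6) might be assembled and how the localization length scale could be tuned by brute force; the function names are hypothetical, and forecast_error_for stands in for rerunning the 300-day experiment with a given cutoff.

```python
import numpy as np

def background_covariance(X_pert, loc_matrix, Q, gamma):
    """Eq. (6)-style covariance: Schur product of the localization matrix with the
    rescaled ensemble covariance, plus the model error term Q."""
    K = X_pert.shape[1]
    return gamma * loc_matrix * (X_pert @ X_pert.T / K) + Q

def tune_cutoff(candidate_cutoffs_km, forecast_error_for):
    """Return the cutoff giving the smallest average forecast error;
    forecast_error_for(cutoff) is a placeholder for the 300-day rerun."""
    return min(candidate_cutoffs_km, key=forecast_error_for)

# The text reports that a sweep over 500-2000 km selected 1000 km for rho(to, to)
# (and, later, 1400 km for the cross-time localization rho(tv, to)).
```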

Note that regardless of whether 𝗤f or 𝗕f is used in the Kalman filter when making an analysis, the ensemble-based prediction error covariance matrix, 𝗙f(t1, t2) = λ{[1/K]𝗫(t1)𝗫(t2)T}, is always the same. This results in an inconsistency: each method of data assimilation should, in principle, generate its own unique ensemble, but in our experiments all KFs use the same set of ensemble members, with only one configuration cycling the ensemble.

b. Preemptive forecast theory: The ensemble Kalman filter

A preemptive forecast is made if the signal, xa(to) − xf(to) from Eq. (2), is propagated to some future time, t, and added to the control forecast at that time, xf(t). This is accomplished by operating on both sides of Eq. (2) with the true autonomous nonlinear operator M(t, to), which yields
[Eq. (7)]
Substituting the ensemble-based background error covariance matrix from (6) into (7) yields
[Eq. (8)]
This is the equation to produce the analysis increment using a background error covariance matrix consisting of an ensemble-based covariance matrix localized using the Schur product and a model error covariance matrix.
This increment can be propagated forward in time using the ensemble perturbations valid at any future verification time, tυ. Propagation of 𝗣f(to, to) by the true autonomous nonlinear operator M(tυ, to) results in
𝗣f(tυ, to) = γρ(tυ, to) · {[1/K]𝗫(tυ)𝗫(to)T} + 𝗤f(tυ, to).     (9)
To propagate 𝗫(to) forward in time, we simply integrate the forecast model from the analysis time, ta, to tυ for each of the ensemble members and the control run. To propagate the localization function forward in time, we expand the radius of the localization function (by trying several values and selecting the radius that produces the most accurate forecasts) to account for the movement of error structures in the flow. In the propagation of 𝗣f(to, to) by M(tυ, to), note that both ρ(to, to) and 𝗫(to)𝗫(to)T are propagated forward in time separately and then combined using a Schur product. If one were to think of the propagator M as a matrix 𝗠, this would amount to writing 𝗠[ρ(to, to) · 𝗫(to)𝗫(to)T] = [𝗠ρ(to, to)] · [𝗠𝗫(to)𝗫(to)T]. However, the true propagator is nonlinear and operates on ρ(to, to) and 𝗫(to) separately; thought of as an operator, the approximation has the form (𝗕 · 𝗖)M = 𝗕M · 𝗖M, with M operating on each matrix independently.

Note that the localization function, ρ(to, to), is not the same as ρ(tυ, to). It is expected that for longer lead times, the distance for which ensemble-based error correlations are valid will increase, as the structures of the error are moving in the flow. It is also possible that the structure of the localization function would be different when the error statistics correlate two different times. However, for this simple experiment, we chose to use the same function for ρ(tυ, to) as for ρ(to, to), but we have increased the length scale to 1400 km. The value of 1400 km was chosen by testing a number of different length scales, and choosing the one that gave the smallest forecast errors. The 1400-km localization for an observation taken at grid point (11, 15) is shown in Fig. 2b.

The other covariance matrix in Eq. (9) is the model error covariance matrix, 𝗤f. In general, model error statistics are not known, but are assumed to be quasi-isotropic. In this simple model experiment, because we know exactly the model error, we were able to calculate a climatological model error covariance matrix. For each day of the nature run, we were able to run the barotropic model twice, once using the same relaxation as the nature run and once using the agency forecast model. Taking the differences between these forecasts yields the model error for each set of ensemble initial conditions. Each difference represents one realization of the model error. By placing each realization in the column of a matrix, and multiplying that matrix of realizations by its transpose, and dividing by the number of cases, an “unobtainable” model error covariance matrix is calculated. We do not localize this error covariance matrix. The barotropic turbulent flow is completely described with the 1024-element state vector. The unobtainable model error covariance matrix comes from a set of 300 samples and, thus, captures the majority of the directions of the error. This is in contrast to the ensemble-based background error covariance matrix, formed using 64 ensemble members, which is more likely to need localization.
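
A sketch of this climatological, “unobtainable” model error covariance: one model error realization per day is stacked as a column, and the sample covariance is formed without localization (array names are hypothetical).

```python
import numpy as np

def climatological_model_error_cov(error_realizations):
    """error_realizations: (n, N) array whose columns are the N daily model error
    realizations (forecast with the nature-run relaxation minus the agency forecast).
    Returns the sample covariance E E^T / N, used here without localization."""
    n, N = error_realizations.shape
    return error_realizations @ error_realizations.T / N

# In this OSSE, n = 1024 state variables and N = 300 realizations, so this matrix spans
# far more error directions than the 64-member ensemble covariance does.
```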

Figure 3 shows the correlation of the errors at all model grid points with errors at grid point (11, 15). Figure 3a shows the correlations using the isotropic covariance matrix, while Fig. 3b shows the model error covariance matrix from the unobtainable method. It has been assumed that the model error statistics are isotropic in nature, but at least for this simple barotropic model, where the source of the model error is the vorticity relaxation scheme, this is not the case. Model errors at grid point (11, 15) are positively correlated with areas immediately around this site as well as locations in the same latitude band. Model errors at grid point (11, 15) are negatively correlated with latitude bands farther to the north and south. Given the initial vorticity field used to initialize the model with two bands of positive vorticity positioned in the latitudes typically associated with the ITCZ and the midlatitude storm track, and areas of negative vorticity elsewhere, this structure of model errors agrees well with the dynamics of the system.

Incorporating the propagated background error covariance matrix, 𝗣f(tυ, to), into Eq. (8), and also propagating the left-hand side of (8), yields
x(tυ) = xf(tυ) + 𝗣f(tυ, to)𝗛T[𝗛𝗣f(to, to)𝗛T + 𝗥]−1[y − 𝗛xf(to)].     (10)
This is the equation used to produce the preemptive forecast, x(tυ).
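
Putting the pieces together, a preemptive forecast might be computed as in the sketch below, which follows a reading of Eqs. (6), (9), and (10) above: the cross-time covariance between tυ and to supplies the numerator of the gain, while the innovation covariance is evaluated at to (all array names are illustrative).

```python
import numpy as np

def preemptive_forecast(x_f_tv, x_f_to, X_to, X_tv, rho_to, rho_tv, Q_to, Q_tv,
                        H, R, y, gamma):
    """Update the existing forecast valid at tv using observations taken at to.

    X_to, X_tv : (n, K) ensemble perturbations about the control at to and tv
    rho_to     : localization matrix for covariances at to (1000-km cutoff in the text)
    rho_tv     : localization matrix for cross-time covariances (1400-km cutoff)
    Q_to, Q_tv : model error covariance terms at the two times
    """
    K = X_to.shape[1]
    P_oo = gamma * rho_to * (X_to @ X_to.T / K) + Q_to      # covariance at to, as in Eq. (6)
    P_vo = gamma * rho_tv * (X_tv @ X_to.T / K) + Q_tv      # cross-time covariance, as in Eq. (9)
    S = H @ P_oo @ H.T + R                                  # innovation covariance at to
    innovation = y - H @ x_f_to
    return x_f_tv + P_vo @ H.T @ np.linalg.solve(S, innovation)   # Eq. (10)
```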

c. Experimental design

Five 24-h model forecasts were made each day, one from each of five different analyses. These five 24-h forecasts were compared to the forecast made without assimilating any observations—a 48-h forecast valid at the same time. These six model integrations were also compared with three preemptive forecasts.

Figure 4 shows a timeline of the forecast process. At the initial time, ti, 64 initial conditions are generated [using the ETKF spherical simplex scheme from Wang et al. (2004)] and integrated by the forecast model for 48 h to the verification time, tυ. At the observation time, to (24 h), 72 observations are taken. These observations are used to produce new analyses and also serve as input for preemptive forecasts. New analyses are made using five different methods: 1) a hybrid ensemble Kalman filter with no covariance localization; 2) a hybrid ensemble Kalman filter with covariance localization; 3) a pure ensemble Kalman filter with neither covariance localization nor an additive model error term; 4) a pure ensemble Kalman filter with covariance localization but no additive model error term; and 5) a pure ensemble Kalman filter with covariance localization and the explicit model error term described in section 2b. These five new analyses are integrated an additional 24 h to the verification time. Preemptive forecasts are made using three different covariance matrices, propagating the impact of observations at to to tυ. These three covariance matrices were 1) a pure ensemble Kalman filter with neither covariance localization nor an additive model error term; 2) a pure ensemble Kalman filter with covariance localization but no additive model error term; and 3) a pure ensemble Kalman filter with covariance localization and an explicit model error term.

Thus, for each of the 300 days of the simulation, there are nine different forecasts valid at the verification time. One is a 48-h forecast. There are five 24-h forecasts, all initialized from different analyses. There are also three preemptive forecasts, which use the observations taken at to but require no new model integrations, instead adjusting the existing 48-h integration to produce a forecast. These nine configurations are summarized in the listing that follows. While there are many different sets of error statistics used for data assimilation, it is only the analysis generated using the hybrid EnKF, and the ensemble members generated by adding ETKF-generated perturbations to this analysis, that are used for the next forecast cycle. All other analyses and forecasts from those analyses are used for comparative purposes only.
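
For reference, the nine forecasts verified each day can be summarized in a small configuration listing (the labels are illustrative, chosen for readability):

```python
# The nine forecasts verified at tv on each of the 300 days.
daily_forecasts = {
    "model integrations": [
        "48-h control (no new observations assimilated)",
        "24-h from hybrid EnKF analysis, no covariance localization",
        "24-h from hybrid EnKF analysis, with covariance localization",
        "24-h from pure EnKF analysis, no localization, no model error term",
        "24-h from pure EnKF analysis, localization, no model error term",
        "24-h from pure EnKF analysis, localization and explicit model error term",
    ],
    "preemptive forecasts (no new integration)": [
        "no localization, no model error term",
        "localization, no model error term",
        "localization and explicit model error term",
    ],
}
```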

3. Results from the barotropic model experiment

Results, shown in Table 1, compare preemptive forecasts with forecasts made from analyses produced using an EnKF data assimilation scheme and with forecasts made from analyses produced using a hybrid data assimilation scheme. The preemptive forecast made using Eq. (10), which includes localization of the ensemble-based covariances and an explicit model error term, had lower average squared forecast errors (5.906 × 10−10 s−2) than did the 48-h forecasts generated using the hybrid data assimilation scheme (8.718 × 10−10 s−2). The preemptive forecast is an adjustment to the 48-h forecast, rather than a new model integration, and so the forecast improvement does not require the model to be run again.

In addition to being better than the 48-h integrations that they modified, these preemptive forecasts were nearly as accurate as the 24-h forecasts generated using the hybrid data assimilation scheme (5.906 × 10−10 s−2 versus 5.071 × 10−10 s−2). This level of accuracy was achieved only when the error statistics were localized and the model error was included. With no localization or model error term, the average squared errors were 9.136 × 10−10 s−2; when localization was applied, 7.343 × 10−10 s−2; and when localization and model error were used, 5.906 × 10−10 s−2. Localization of the background error covariance matrix resulted in a 20% improvement, and including the model error in the background error covariance matrix produced an additional 20% improvement in forecast accuracy. Note that when neither model error nor covariance localization is included, the preemptive forecasts (9.136 × 10−10 s−2) are worse than the 48-h forecasts (8.718 × 10−10 s−2), implying that the adjustment to the 48-h forecast was, on average, a forecast degradation. The rank-deficient background error covariance matrix contains spurious error correlations, and these correlations led to the forecast degradation when they were not mitigated.

The 24-h model forecasts made from analyses generated with the EnKF show the same sensitivity to localization and the inclusion of model error. As shown in Table 1, 24-h forecasts made using the EnKF for data assimilation are superior to those made using the hybrid for data assimilation only when both localization and model error are included. Thus, like the preemptive forecasts, the 24-h conventional forecasts were markedly improved when the error statistics were localized using the Schur product and when the model error was included in the background error covariance matrix. With no localization or model error term, the average squared errors were 7.734 × 10−10 s−2; when localization was applied, they were 5.307 × 10−10 s−2; and they were 4.569 × 10−10 s−2 when localization and model error were used. For the conventional forecasts, localization of the background error covariance matrix improved forecasts by 31%, and the inclusion of model error in the background error covariance matrix led to an additional 14% improvement. Forecasts made from analyses produced by the hybrid EnKF do not show significant improvements from localizing the ensemble-based part of the background error covariance matrix, improving only from an average error of 5.071 × 10−10 s−2 to 5.045 × 10−10 s−2.
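
The percentage improvements quoted above are relative reductions in average squared error between successive configurations; a quick check using the values quoted from Table 1:

```python
def pct_reduction(before, after):
    """Relative reduction in average squared vorticity error (values in 1e-10 s^-2)."""
    return 100.0 * (before - after) / before

# Preemptive forecasts: adding localization, then adding the model error term.
print(round(pct_reduction(9.136, 7.343)))   # ~20
print(round(pct_reduction(7.343, 5.906)))   # ~20
# 24-h EnKF forecasts: adding localization, then adding the model error term.
print(round(pct_reduction(7.734, 5.307)))   # ~31
print(round(pct_reduction(5.307, 4.569)))   # ~14
```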

4. Conclusions

Ensembles can be used for much more than providing confidence in official forecasts, and this work represents only one way in which a probabilistic approach to data assimilation and forecasting can be leveraged. While the 24-h preemptive forecasts are not as accurate as forecasts made by assimilating the data, producing a new analysis, and integrating the forecast model out 24 h, these results suggest that a preemptive forecast can have value to a forecaster. Preemptive forecasts were only 16% less accurate than the baseline 24-h forecasts, whereas the original 48-h forecasts were 59% less accurate. Thus, the preemptive forecasts are markedly better than using the 48-h forecast and are available sooner than a new 24-h model integration based on the same set of observational data.

These preemptive forecasts were produced using what is clearly a suboptimal localization function, ρ(tυ, to). The localization function had the same structure as ρ(to, to), and was different only in that it had a larger cutoff radius. For covariances between two different times, the error structures are likely moving, and the localization function should account for this; perhaps a better localization function for ρ(tυ, to) would result in more accurate preemptive forecasts. However, without any form of localization, the preemptive forecast was no better than the raw 48-h forecast, a clear indication of the importance of error covariance localization to the preemptive forecast.

The inclusion of model error in the background error covariance matrix further increased the accuracy of the preemptive forecasts. The improvement in preemptive forecasts from including a model error term in the background error covariance matrix (from 7.343 to 5.906 × 10−10 s−2, a 20% reduction) is nearly as great as the improvement from localization (from 9.136 to 7.343 × 10−10 s−2, a 20% reduction). Interestingly, including a model error term in the EnKF data assimilation scheme does not give the same result: the improvement from localization is larger, at 31%, while the additional improvement from including the model error term, 𝗤, is smaller, at 14%. This result suggests that the inclusion of model error in a background error covariance matrix is more important for error statistics correlating two different times than for error statistics valid at one time. Covariances in space allow observations to update model states that are far from the observation location in space, and covariances in time allow observations to update model states that are far from the observation location in time. We speculate that for covariances at the same time, spatial localization is more important, whereas for covariances in time, where the model has additional opportunity to misrepresent the true state, model error takes on greater importance. An equally plausible explanation for the larger improvement of the preemptive forecasts from including the model error term is the suboptimal construction of the localization function for error statistics spanning different times.

Results indicate that the inclusion of an additive covariance matrix in the data assimilation scheme improves the accuracy of forecasts. Forecasts made from analyses produced from a hybrid data assimilation scheme that incorporates an isotropic covariance matrix were more accurate (average squared error of 5.045 × 10−10 s−2) than those initialized using an EnKF that does not include any model error term (average squared error of 5.307 × 10−10 s−2). However, hybrid scheme–based forecasts were not as accurate (average squared error of 5.045 × 10−10 s−2) as forecasts made from analyses produced from an EnKF that incorporates the unobtainable model error covariance matrix (average squared error of 4.569 × 10−10 s−2). Thus, while using an isotropic covariance matrix as is done in Hamill and Snyder (2000) and Houtekamer et al. (2005) is clearly better than including no such term, it was not as effective as using the unobtainable model error covariance matrix. This suggests that data assimilation schemes may yield even better analyses and forecasts if nonisotropic covariances are used in the model error covariance matrix.

That the preemptive forecast technique needs covariance localization of the ensemble-based background error covariance matrix and the inclusion of a model error covariance matrix to be effective may have implications for the use of ensembles in targeting. The National Centers for Environmental Prediction (NCEP) presently uses the ETKF for targeting (Majumdar et al. 2002), and this technique could, perhaps, benefit from background error covariance localization or the inclusion of a model error covariance matrix. However, it is important to note that in the OSSE presented here, observations of vorticity were used, and vorticity features are of far smaller scale, and potentially noisier, than height or wind features. Thus, the need for covariance localization and an additive model error covariance matrix in operational use may be overstated by results from this OSSE. It is also important to note that the unobtainable model error covariance matrix 𝗤 used in this OSSE cannot be computed for an operational forecast model, as model error and forecast error cannot be separated for forecasts of the real atmosphere. However, a high-resolution nature run would allow for an investigation of 𝗤 in operational systems [one such option is given in Masutani et al. (2006)].

These experiments are somewhat idealized, and all results should be viewed in this light. In this experiment there is only one source of model error (the vorticity relaxation scheme); in real systems there are many more. The ability of the ensemble to represent the uncertainty of our forecast system is also better than one would find in operations. The ratio of the number of ensemble members to the number of degrees of freedom of the model in our experiments is 64/1024 (0.0625). However, localization effectively increases the degrees of freedom of the ensemble, so the value 0.0625 is the worst possible ratio of the number of degrees of freedom of the ensemble-based background error covariance matrix to the number of degrees of freedom of the state vector. Nonetheless, given the 100-fold increase in the size of the state vector of an operational forecast model, it would take an ensemble of many more than 64 members to describe the uncertainty of that model at the same level that we are able to quantify the uncertainty in our simple forecast model.

Given these caveats, there is benefit to being able to produce a preemptive forecast. The prime benefit of preemptive forecasting is the production of a “new forecast” as soon as the analysis data are available. Much as a human forecaster can assess new observational data to improve upon the available computer guidance, the preemptive forecast makes use of observations to adjust a forecast. This results in a “new” forecast being available sooner than a new model run, as no model integration is needed. This rapidity would provide forecasters valuable extra lead time (the length of time it takes for a model integration to be completed would be saved), which could be critical for forecasts of severe weather or tropical storm tracks.

Acknowledgments

Thanks to Craig Bishop, who provided the seed thoughts for the idea of a “preemptive” forecast. Thanks to the attendees of the Workshop on Ensemble Weather Forecasting in the Short to Medium Range of September 2003, held in Val-Morin, Québec, Canada, for their feedback when this work was first presented. The contributions of Jim Hansen and Peter van Leeuwen to this manuscript are gratefully acknowledged.

REFERENCES

  • Bishop, C. H., and Z. Toth, 1999: Ensemble transformation and adaptive observations. J. Atmos. Sci., 56, 1748–1765.

  • Bishop, C. H., B. J. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Mon. Wea. Rev., 129, 420–436.

  • Bishop, C. H., B. J. Etherton, and S. J. Majumdar, 2006: Conditioned verification region selection in adaptive sampling. Quart. J. Roy. Meteor. Soc., 132, 915–934.

  • Burgers, G., P. J. van Leeuwen, and G. Evensen, 1998: Analysis scheme in the ensemble Kalman filter. Mon. Wea. Rev., 126, 1719–1724.

  • Cohn, S. E., 1997: An introduction to estimation theory. J. Meteor. Soc. Japan, 75, 257–288.

  • Dee, D. P., 1995: On-line estimation of error covariance parameters for atmospheric data assimilation. Mon. Wea. Rev., 123, 1128–1145.

  • Etherton, B. J., and C. H. Bishop, 2004: Resilience of hybrid ensemble/3DVAR analysis schemes to model error and ensemble covariance error. Mon. Wea. Rev., 132, 1065–1080.

  • Evensen, G., 2003: The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dyn., 53, 343–367.

  • Evensen, G., and P. J. van Leeuwen, 2000: An ensemble Kalman smoother for nonlinear dynamics. Mon. Wea. Rev., 128, 1852–1867.

  • Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three dimensions. Quart. J. Roy. Meteor. Soc., 125, 723–757.

  • Hamill, T. M., and C. Snyder, 2000: A hybrid ensemble Kalman filter–3D variational analysis scheme. Mon. Wea. Rev., 128, 2905–2919.

  • Hamill, T. M., and C. Snyder, 2002: Using improved background-error covariances from an ensemble Kalman filter for adaptive observations. Mon. Wea. Rev., 130, 1552–1572.

  • Hamill, T. M., J. S. Whitaker, and C. Snyder, 2001: Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter. Mon. Wea. Rev., 129, 2776–2790.

  • Hansen, J. A., and L. A. Smith, 2000: The role of operational constraints in selecting supplementary observations. J. Atmos. Sci., 57, 2859–2871.

  • Houtekamer, P. L., 2001: A sequential ensemble Kalman filter for atmospheric data assimilation. Mon. Wea. Rev., 129, 123–137.

  • Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev., 126, 796–811.

  • Houtekamer, P. L., H. L. Mitchell, G. Pellerin, M. Buehner, M. Charron, L. Spacek, and B. Hansen, 2005: Atmospheric data assimilation with an ensemble Kalman filter: Results with real observations. Mon. Wea. Rev., 133, 604–620.

  • Hunt, B. R., and Coauthors, 2004: Four-dimensional ensemble Kalman filtering. Tellus, 56A, 273–277.

  • Kalman, R. E., 1960: A new approach to linear filtering and prediction problems. Trans. Amer. Soc. Mech. Eng., J. Basic Eng., 82, 35–45.

  • Kalman, R. E., and R. S. Bucy, 1961: New results in linear filtering and prediction theory. Trans. Amer. Soc. Mech. Eng., J. Basic Eng., 83, 95–107.

  • Krishnamurti, T. N., C. M. Kishtawal, T. E. LaRow, D. R. Bachiochi, Z. Zhang, C. E. Williford, S. Gadgil, and S. Surendran, 1999: Improved weather and seasonal climate forecasts from multimodel superensemble. Science, 285, 1548–1550.

  • Lorenz, E. N., and K. A. Emanuel, 1998: Optimal sites for supplementary observations: Simulation with a small model. J. Atmos. Sci., 55, 399–414.

  • Majumdar, S. J., C. H. Bishop, B. J. Etherton, and Z. Toth, 2002: Adaptive sampling with the ensemble transform Kalman filter. Part II: Field program implementation. Mon. Wea. Rev., 130, 1356–1369.

  • Masutani, M., and Coauthors, 2006: Observing system simulation experiments at NCEP. NCEP Office Note 451, 34 pp. [Available online at http://www.emc.ncep.noaa.gov/research/osse/NR/references/Masutani.2006.on451.pdf.]

  • Morss, R. E., K. A. Emanuel, and C. Snyder, 2001: Idealized adaptive observation strategies for improving numerical weather prediction. J. Atmos. Sci., 58, 210–232.

  • Roulston, M. S., and L. A. Smith, 2003: Combining dynamical and statistical ensembles. Tellus, 55A, 16–30.

  • Stensrud, D. J., J-W. Bao, and T. T. Warner, 2000: Using initial condition and model physics perturbations in short-range ensemble simulations of mesoscale convective systems. Mon. Wea. Rev., 128, 2077–2107.

  • Tippett, M. K., J. L. Anderson, C. H. Bishop, T. M. Hamill, and J. S. Whitaker, 2003: Ensemble square-root filters. Mon. Wea. Rev., 131, 1485–1490.

  • Toth, Z., and E. Kalnay, 1993: Ensemble forecasting at NMC: The generation of perturbations. Bull. Amer. Meteor. Soc., 74, 2317–2330.

  • Wang, X., C. H. Bishop, and S. J. Julier, 2004: Which is better, an ensemble of positive–negative pairs or a centered spherical simplex ensemble? Mon. Wea. Rev., 132, 1590–1605.
Fig. 1. A representative vorticity field (contours) and locations of the routine observation network sites (solid dots) for the OSSE. Vorticity values are in units of 10−5 s−1.

Fig. 2. Localization function for a grid point (11, 15) using (a) a 1000-km cutoff radius or (b) a 1400-km cutoff radius. The localization function is Eq. (4.10) of Gaspari and Cohn (1999).

Fig. 3. Model error correlations for a grid point (11, 15), as computed from (a) an isotropic covariance matrix and (b) from an explicitly calculated covariance matrix.

Fig. 4. A timeline of the experimental design. At the initial time, ti, 64 initial conditions and a control run are initialized and integrated by the forecast model 48 h to the verification time, tυ. At the observation time, to (24 h), 72 observations are taken. These observations are used to produce new analyses and also used as input for preemptive forecasts. New analyses are made with a Kalman filter using one of five different background error covariance matrices. These new analyses are integrated an additional 24 h to the verification time. Preemptive forecasts are made using one of three different covariance matrices, propagating the impact of observations from to to tυ. Thick arrows represent 24-h model integrations of a single forecast. Triple arrows represent integrations of the control run and the 64-member ensemble.

Table 1. Daily domain-averaged squared vorticity error for 300 forecasts. Error is in units of 10−10 s−2.