Search Results

You are looking at 11 - 20 of 66 items for

  • Author or Editor: Jeffrey Anderson
Jeffrey L. Anderson

Abstract

An extremely simple chaotic model, the three-variable Lorenz convective model, is used in a perfect model setting to study the selection of initial conditions for ensemble forecasts. Observations with a known distribution of error are sampled from the “climate” of the simple model. Initial condition distributions that use only information about the observation and the observational error distribution (i.e., traditional Monte Carlo methods) are shown to differ from the correct initial condition distributions, which make use of additional information about the local structure of the model's attractor. Three relatively inexpensive algorithms for finding the local attractor structure in a simple model are examined; these make use of singular vectors, normal modes, and perturbed integrations. All of these are related to heuristic algorithms that have been applied to select ensemble members in operational forecast models. The method of perturbed integrations, which is somewhat similar to the “breeding” method used at the National Meteorological Center, is shown to be the most effective in this context. Validating the extension of such methods to realistic models is expected to be extremely difficult; however, it seems reasonable that utilizing all available information about the attractor structure of real forecast models when selecting ensemble initial conditions could improve the success of operational ensemble forecasts.
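
If one wanted to experiment with the perturbed-integration ("breeding"-like) idea described above, a minimal sketch for the three-variable Lorenz (1963) model might look like the following. The function names, rescaling amplitude, and cycle lengths are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Tendency of the three-variable Lorenz (1963) convective model."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def step_rk4(x, dt=0.01):
    """One fourth-order Runge-Kutta time step."""
    k1 = lorenz63(x)
    k2 = lorenz63(x + 0.5 * dt * k1)
    k3 = lorenz63(x + 0.5 * dt * k2)
    k4 = lorenz63(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def bred_perturbation(x0, amplitude=0.1, cycles=50, steps_per_cycle=10, seed=0):
    """Perturbed-integration cycle: run a control and a perturbed trajectory,
    then rescale their difference back to a fixed amplitude.  After several
    cycles the perturbation tends to point along locally growing directions
    of the attractor."""
    rng = np.random.default_rng(seed)
    control = np.array(x0, dtype=float)
    perturbed = control + amplitude * rng.standard_normal(3)
    for _ in range(cycles):
        for _ in range(steps_per_cycle):
            control = step_rk4(control)
            perturbed = step_rk4(perturbed)
        diff = perturbed - control
        perturbed = control + amplitude * diff / np.linalg.norm(diff)
    return control, perturbed - control

# Spin up onto the attractor, then breed a perturbation around the current state.
state = np.array([1.0, 1.0, 1.0])
for _ in range(1000):
    state = step_rk4(state)
control, bred = bred_perturbation(state)
print("bred perturbation direction:", bred)
```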

Jeffrey L. Anderson

Abstract

The binned probability ensemble (BPE) technique is presented as a method for producing forecasts of the probability distribution of a variable using an ensemble of numerical model integrations. The ensemble forecasts are used to partition the real line into a number of bins, each of which has an equal probability of containing the “true” forecast. The method is tested for both a simple low-order dynamical system and a general circulation model (GCM) forced with observed sea surface temperatures (an ensemble of Atmospheric Model Intercomparison Project integrations). The BPE method can also be used to calculate the probability that probabilistic ensemble forecasts are consistent with the verifying observations. The method is not sensitive to the fact that the characteristics of the forecast probability distribution may change drastically for different initial condition (or boundary condition) probability distributions. For example, the method is capable of evaluating whether the variance of a set of ensemble forecasts is consistent with the verifying observed variance. Applying the method to the ensemble of boundary-forced GCM integrations demonstrates that the GCM produces probabilistic forecasts with too little variability for upper-level dynamical fields. Operational weather prediction centers including the U.K. Meteorological Office, the European Centre for Medium-Range Weather Forecasts, and the National Centers for Environmental Prediction have been applying this method, referred to by them as Talagrand diagrams, to the verification of operational ensemble predictions. The BPE method only evaluates the consistency of ensemble predictions and observations and should be used in conjunction with additional verification tools to provide a complete assessment of a set of probabilistic forecasts.
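
The bookkeeping behind the binned probability ensemble (Talagrand/rank histogram) diagnostic for a scalar variable is simple enough to sketch directly; the function name and synthetic check below are illustrative, and ties or discrete variables would need additional care.

```python
import numpy as np

def rank_histogram(ensembles, truths):
    """Count which bin each verifying value falls into.  An n-member ensemble
    partitions the real line into n + 1 bins; if ensemble and truth are drawn
    from the same distribution, every bin is equally likely and the histogram
    is approximately flat.
    ensembles: (n_cases, n_members); truths: (n_cases,)."""
    n_cases, n_members = ensembles.shape
    counts = np.zeros(n_members + 1, dtype=int)
    for ens, truth in zip(ensembles, truths):
        rank = np.searchsorted(np.sort(ens), truth)  # bin index 0 .. n_members
        counts[rank] += 1
    return counts

# Synthetic consistency check: ensemble and truth drawn from the same distribution.
rng = np.random.default_rng(1)
ens = rng.normal(size=(5000, 10))
truth = rng.normal(size=5000)
print(rank_histogram(ens, truth))
```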

Jeffrey L. Anderson

Abstract

Ensemble Kalman filters use the sample covariance of an observation and a model state variable to update a prior estimate of the state variable. The sample covariance can be suboptimal as a result of small ensemble size, model error, model nonlinearity, and other factors. The most common algorithms for dealing with these deficiencies are inflation and covariance localization. A statistical model of errors in ensemble Kalman filter sample covariances is described and leads to an algorithm that reduces ensemble filter root-mean-square error for some applications. This sampling error correction algorithm uses prior information about the distribution of the correlation between an observation and a state variable. Offline Monte Carlo simulation is used to build a lookup table that contains a correction factor between 0 and 1 depending on the ensemble size and the ensemble sample correlation. Correction factors are applied like a traditional localization for each pair of observations and state variables during an ensemble assimilation. The algorithm is applied to two low-order models and reduces the sensitivity of the ensemble assimilation error to the strength of traditional localization. When tested in perfect model experiments in a larger model, the dynamical core of a general circulation model, the sampling error correction algorithm produces analyses that are closer to the truth and also reduces sensitivity to traditional localization strength.
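
One way an offline Monte Carlo lookup table of this kind could be assembled is sketched below. The prior over true correlations, the number of bins, and the shrinkage estimate are assumptions made for illustration and do not reproduce the paper's exact statistical model.

```python
import numpy as np

def build_correction_table(ens_size, n_bins=200, n_trials=50_000, seed=0):
    """Offline Monte Carlo sketch of a sampling error correction table.
    Draw a "true" correlation from a broad prior, generate an ensemble of the
    given size from a bivariate Gaussian with that correlation, and record the
    sample correlation.  Each sample-correlation bin stores a factor in [0, 1]
    that shrinks regressions toward the expected true correlation."""
    rng = np.random.default_rng(seed)
    bin_edges = np.linspace(-1.0, 1.0, n_bins + 1)
    sum_true = np.zeros(n_bins)
    sum_sample = np.zeros(n_bins)
    for _ in range(n_trials):
        true_corr = rng.uniform(-1.0, 1.0)  # assumed prior over true correlations
        cov = np.array([[1.0, true_corr], [true_corr, 1.0]])
        ens = rng.multivariate_normal([0.0, 0.0], cov, size=ens_size)
        sample_corr = np.corrcoef(ens[:, 0], ens[:, 1])[0, 1]
        b = int(np.clip(np.searchsorted(bin_edges, sample_corr) - 1, 0, n_bins - 1))
        sum_true[b] += true_corr
        sum_sample[b] += sample_corr
    with np.errstate(invalid="ignore", divide="ignore"):
        factor = np.clip(sum_true / sum_sample, 0.0, 1.0)  # NaN where a bin is empty
    return bin_edges, factor

# During assimilation, the sample correlation for each observation-state pair would be
# looked up in the table and the resulting factor applied like a localization.
edges, factor = build_correction_table(ens_size=20)
```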

Jeffrey L. Anderson

Abstract

A deterministic square root ensemble Kalman filter and a stochastic perturbed observation ensemble Kalman filter are used for data assimilation in both linear and nonlinear single variable dynamical systems. For the linear system, the deterministic filter is simply a method for computing the Kalman filter and is optimal while the stochastic filter has suboptimal performance due to sampling error. For the nonlinear system, the deterministic filter has increasing error as ensemble size increases because all ensemble members but one become tightly clustered. In this case, the stochastic filter performs better for sufficiently large ensembles. A new method for computing ensemble increments in observation space is proposed that does not suffer from the pathological behavior of the deterministic filter while avoiding much of the sampling error of the stochastic filter. This filter uses the order statistics of the prior observation space ensemble to create an approximate continuous prior probability distribution in a fashion analogous to the use of rank histograms for ensemble forecast evaluation. This rank histogram filter can represent non-Gaussian observation space priors and posteriors and is shown to be competitive with existing filters for problems as large as global numerical weather prediction. The ability to represent non-Gaussian distributions is useful for a variety of applications such as convective-scale assimilation and assimilation of bounded quantities such as relative humidity.
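
The two baseline updates compared above have simple textbook forms for a single scalar variable; the sketch below shows a deterministic (square root/adjustment) update and a stochastic perturbed-observation update, not the proposed rank histogram filter itself. Function names and the synthetic prior are illustrative.

```python
import numpy as np

def deterministic_update(ens, obs, obs_var):
    """Square root / adjustment update of a scalar ensemble: shift and rescale
    the prior members so their mean and variance match the Gaussian posterior
    exactly; no random numbers are used."""
    prior_mean, prior_var = np.mean(ens), np.var(ens, ddof=1)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean + np.sqrt(post_var / prior_var) * (ens - prior_mean)

def stochastic_update(ens, obs, obs_var, rng):
    """Perturbed-observation update: each member assimilates the observation
    plus an independent draw of observational noise, which introduces sampling
    error but keeps the ensemble members from clustering."""
    prior_var = np.var(ens, ddof=1)
    gain = prior_var / (prior_var + obs_var)
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), size=ens.shape)
    return ens + gain * (perturbed_obs - ens)

rng = np.random.default_rng(2)
prior = rng.normal(1.0, 2.0, size=20)
print(deterministic_update(prior, obs=0.0, obs_var=1.0))
print(stochastic_update(prior, obs=0.0, obs_var=1.0, rng=rng))
```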

Jeffrey L. Anderson

Abstract

Many methods using ensemble integrations of prediction models as integral parts of data assimilation have appeared in the atmospheric and oceanic literature. In general, these methods have been derived from the Kalman filter and have been known as ensemble Kalman filters. A more general class of methods including these ensemble Kalman filter methods is derived starting from the nonlinear filtering problem. When working in a joint state–observation space, many features of ensemble filtering algorithms are easier to derive and compare. The ensemble filter methods derived here make a (local) least squares assumption about the relation between prior distributions of an observation variable and model state variables. In this context, the update procedure applied when a new observation becomes available can be described in two parts. First, an update increment is computed for each prior ensemble estimate of the observation variable by applying a scalar ensemble filter. Second, a linear regression of the prior ensemble sample of each state variable on the observation variable is performed to compute update increments for each state variable ensemble member from corresponding observation variable increments. The regression can be applied globally or locally using Gaussian kernel methods.

Several previously documented ensemble Kalman filter methods, including the perturbed observation ensemble Kalman filter and the ensemble adjustment Kalman filter, are developed in this context. Some new ensemble filters that extend beyond the Kalman filter context are also discussed. The two-part method can provide a computationally efficient implementation of ensemble filters and allows more straightforward comparison of methods, since they differ only in the solution of a scalar filtering problem.
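
A minimal sketch of the second (regression) step of this two-part update is given below. The array shapes and function name are illustrative; the observation-space increments would come from whatever scalar filter is used in the first step.

```python
import numpy as np

def regress_increments(state_ens, obs_prior_ens, obs_increments):
    """Map observation-space increments onto each state variable by linear
    regression of the state variable's prior ensemble on the prior ensemble of
    the observed variable.
    state_ens: (n_members, n_state); obs_prior_ens, obs_increments: (n_members,)."""
    obs_var = np.var(obs_prior_ens, ddof=1)
    updated = state_ens.copy()
    for j in range(state_ens.shape[1]):
        reg_coef = np.cov(state_ens[:, j], obs_prior_ens, ddof=1)[0, 1] / obs_var
        updated[:, j] += reg_coef * obs_increments
    return updated

rng = np.random.default_rng(3)
state = rng.normal(size=(20, 5))                    # 20 members, 5 state variables
obs_prior = state[:, 0] + rng.normal(0.0, 0.1, 20)  # prior ensemble of the observed variable
increments = -0.3 * (obs_prior - obs_prior.mean())  # stand-in for scalar filter increments
posterior = regress_increments(state, obs_prior, increments)
```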

Jeffrey L. Anderson and Stephen L. Anderson

Abstract

Knowledge of the probability distribution of initial conditions is central to almost all practical studies of predictability and to improvements in stochastic prediction of the atmosphere. Traditionally, data assimilation for atmospheric predictability or prediction experiments has attempted to find a single “best” estimate of the initial state. Additional information about the initial condition probability distribution is then obtained primarily through heuristic techniques that attempt to generate representative perturbations around the best estimate. However, a classical theory for generating an estimate of the complete probability distribution of an initial state given a set of observations exists. This nonlinear filtering theory can be applied to unify the data assimilation and ensemble generation problem and to produce superior estimates of the probability distribution of the initial state of the atmosphere (or ocean) on regional or global scales. A Monte Carlo implementation of the fully nonlinear filter has been developed and applied to several low-order models. The method is able to produce assimilations with small ensemble mean errors while also providing random samples of the initial condition probability distribution. The Monte Carlo method can be applied in models that traditionally require the application of initialization techniques without any explicit initialization. Initial application to larger models is promising, but a number of challenges remain before the method can be extended to large realistic forecast models.
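
For a single scalar variable, a kernel-based Monte Carlo update in the spirit of a fully nonlinear filter can be sketched as follows; the kernel width, the resampling scheme, and the function name are assumptions and do not reproduce the multivariate algorithm developed in the paper.

```python
import numpy as np

def kernel_filter_update(ens, obs, obs_var, kernel_width_factor=0.5, seed=0):
    """Represent the prior as a sum of Gaussian kernels centered on the ensemble
    members, multiply each kernel by a Gaussian observation likelihood to get a
    weighted Gaussian-mixture posterior, and draw a new ensemble from it."""
    rng = np.random.default_rng(seed)
    kernel_var = kernel_width_factor * np.var(ens, ddof=1)
    # Product of each prior kernel N(member, kernel_var) with the likelihood N(obs, obs_var).
    post_var = 1.0 / (1.0 / kernel_var + 1.0 / obs_var)
    post_means = post_var * (ens / kernel_var + obs / obs_var)
    # Mixture weights are proportional to each kernel's evidence for the observation.
    log_w = -0.5 * (ens - obs) ** 2 / (kernel_var + obs_var)
    weights = np.exp(log_w - log_w.max())
    weights /= weights.sum()
    picks = rng.choice(len(ens), size=len(ens), p=weights)
    return rng.normal(post_means[picks], np.sqrt(post_var))

rng = np.random.default_rng(5)
prior = rng.normal(2.0, 1.0, size=40)
posterior = kernel_filter_update(prior, obs=0.5, obs_var=0.5)
```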

Mahsa Mirzargar and Jeffrey L. Anderson

Abstract

Various generalizations of the univariate rank histogram have been proposed to inspect the reliability of an ensemble forecast or analysis in multidimensional spaces. Multivariate rank histograms provide insightful information about the misspecification of genuinely multivariate features such as the correlation between various variables in a multivariate ensemble. However, the interpretation of patterns in a multivariate rank histogram should be handled with care. The purpose of this paper is to focus on multivariate rank histograms designed based on the concept of data depth and outline some important considerations that should be accounted for when using such multivariate rank histograms. To generate correct multivariate rank histograms using the concept of data depth, the data type of the ensemble should be taken into account to define a proper preranking function. This paper demonstrates how and why some preranking functions might not be suitable for multivariate or vector-valued ensembles and proposes preranking functions based on the concept of simplicial depth that are applicable to both multivariate points and vector-valued ensembles. In addition, there exists an inherent identifiability issue associated with center-outward preranking functions used to generate multivariate rank histograms. This problem can be alleviated by complementing the multivariate rank histogram with other well-known multivariate statistical inference tools based on rank statistics such as the depth-versus-depth (DD) plot. Using a synthetic example, the DD plot is shown to be less sensitive to sample size than multivariate rank histograms.
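
A simplicial-depth preranking function for two-dimensional ensembles can be sketched directly from the definition of simplicial depth; tie handling and the exact preranking recommended in the paper are omitted, and the helper names are illustrative.

```python
import numpy as np
from itertools import combinations

def simplicial_depth_2d(point, sample):
    """Simplicial depth of a 2D point: the fraction of triangles formed by
    triples of sample points that contain the point."""
    def cross(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    triples = list(combinations(range(len(sample)), 3))
    count = 0
    for i, j, k in triples:
        d1 = cross(point, sample[i], sample[j])
        d2 = cross(point, sample[j], sample[k])
        d3 = cross(point, sample[k], sample[i])
        has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
        has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
        count += not (has_neg and has_pos)   # inside (or on an edge) when signs agree
    return count / len(triples)

def depth_prerank(obs, ensemble):
    """Center-outward prerank: position of the verification's depth among the
    member depths, all evaluated with respect to the combined set."""
    pool = np.vstack([ensemble, obs])
    obs_depth = simplicial_depth_2d(obs, pool)
    member_depths = np.array([simplicial_depth_2d(m, pool) for m in ensemble])
    return int(np.sum(member_depths < obs_depth))

rng = np.random.default_rng(6)
ens2d = rng.normal(size=(15, 2))
print(depth_prerank(np.array([0.2, -0.1]), ens2d))
```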

Sukyoung Lee and Jeffrey L. Anderson

Abstract

A forced, nonlinear barotropic model on the sphere is shown to simulate some of the structure of the observed Northern Hemisphere midlatitude storm tracks with reasonable accuracy. For the parameter range chosen, the model has no unstable modes with significant amplitude in the storm track regions; however, several decaying modes with structures similar to the storm track are discovered. The model's midlatitude storm tracks also coincide with the location of a waveguide that is obtained by assuming that the horizontal variation of the time-mean flow is small compared with the scale of the transient eddies. Since the model is able to mimic the behavior of the observed storm tracks without any baroclinic dynamics, it is argued that the barotropic waveguide effects of the time-mean background flow acting on individual eddies are partially responsible for the observed storm track structure.

Jonathan Poterjoy and Jeffrey L. Anderson

Abstract

This study presents the first application of a localized particle filter (PF) for data assimilation in a high-dimensional geophysical model. Particle filters form Monte Carlo approximations of model probability densities conditioned on observations, while making no assumptions about the underlying error distribution. Unlike standard PFs, the local PF uses a localization function to reduce the influence of distant observations on state variables, which significantly decreases the number of particles required to maintain the filter’s stability. Because the local PF operates effectively using small numbers of particles, it provides a possible alternative to Gaussian filters, such as ensemble Kalman filters, for large geophysical models. In the current study, the local PF is compared with stochastic and deterministic ensemble Kalman filters using a simplified atmospheric general circulation model. The local PF is found to provide stable filtering results over yearlong data assimilation experiments using only 25 particles. The local PF also outperforms the Gaussian filters when observation networks include measurements that have non-Gaussian errors or relate nonlinearly to the model state, like remotely sensed data used frequently in atmospheric analyses. Results from this study encourage further testing of the local PF on more complex geophysical systems, such as weather prediction models.
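
The weight-localization idea can be illustrated schematically. The blend below only shows how a localization function might taper an observation's influence on distant state variables; the Gaussian taper and variable names are assumptions, and the complete local PF update, with its sampling and probability-mapping steps, is considerably more involved.

```python
import numpy as np

def localized_weights(likelihood_weights, distances, localization_radius):
    """Blend each particle's raw likelihood weight toward a uniform weight as the
    distance between the observation and a state variable grows, so that distant
    state variables are essentially untouched by the observation.
    likelihood_weights: (n_particles,); distances: (n_state,)."""
    w = likelihood_weights / likelihood_weights.sum() * len(likelihood_weights)
    taper = np.exp(-0.5 * (distances / localization_radius) ** 2)  # illustrative taper
    w_loc = (w[:, None] - 1.0) * taper[None, :] + 1.0              # (n_particles, n_state)
    return w_loc / w_loc.sum(axis=0)

rng = np.random.default_rng(4)
raw = np.exp(-0.5 * rng.normal(size=25) ** 2)          # stand-in likelihood weights, 25 particles
w = localized_weights(raw, distances=np.linspace(0.0, 5.0, 8), localization_radius=1.5)
```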

Lili Lei and Jeffrey L. Anderson

Abstract

Two techniques for estimating good localization functions for serial ensemble Kalman filters are compared in observing system simulation experiments (OSSEs) conducted with the dynamical core of an atmospheric general circulation model. The first technique, the global group filter (GGF), minimizes the root-mean-square (RMS) difference among the regression coefficients estimated by a hierarchical ensemble filter. The second, the empirical localization function (ELF), minimizes the RMS difference between the true values of the state variables and the posterior ensemble mean. Both techniques provide an estimate of the localization function for an observation’s impact on a state variable with few a priori assumptions about the localization function. The ELF localizations can have values larger than 1.0 at small distances, indicating that this technique addresses localization but also can correct the prior ensemble spread in the same way as a variance inflation when needed. OSSEs using ELF localizations generally have smaller root-mean-square error (RMSE) than the optimal Gaspari and Cohn (GC) localization function obtained by empirically tuning the GC width. The localization functions estimated by the GGF are broader than those from the ELF, and the OSSEs with the GGF localization generally have larger RMSE than the optimal GC localization function. The GGFs are too broad because of spurious correlation biases that occur in the OSSEs. These errors can be reduced by using a stochastic ensemble Kalman filter (EnKF) with perturbed observations instead of a deterministic ensemble adjustment Kalman filter (EAKF).
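
The GC function referred to above is the standard fifth-order piecewise rational, compactly supported correlation function of Gaspari and Cohn (1999); a sketch follows, with the half-width as the single empirically tuned parameter.

```python
import numpy as np

def gaspari_cohn(dist, half_width):
    """Gaspari and Cohn (1999) fifth-order piecewise rational localization.
    Returns a factor that is 1 at zero distance and 0 beyond twice the half-width."""
    z = np.abs(np.asarray(dist, dtype=float)) / half_width
    loc = np.zeros_like(z)
    near = z <= 1.0
    far = (z > 1.0) & (z < 2.0)
    zn, zf = z[near], z[far]
    loc[near] = (-0.25 * zn**5 + 0.5 * zn**4 + 0.625 * zn**3
                 - 5.0 / 3.0 * zn**2 + 1.0)
    loc[far] = (zf**5 / 12.0 - 0.5 * zf**4 + 0.625 * zf**3
                + 5.0 / 3.0 * zf**2 - 5.0 * zf + 4.0 - 2.0 / (3.0 * zf))
    return loc

# The half-width is the single parameter tuned empirically in the OSSEs; the ELF and
# GGF instead estimate the entire localization curve from the assimilation itself.
print(gaspari_cohn([0.0, 0.5, 1.0, 1.5, 2.5], half_width=1.0))
```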
