# Search Results

## Showing items 11-20 of 68 for

- Author or Editor: Jeffrey Anderson


## Abstract

An extension to standard ensemble Kalman filter algorithms that can improve performance for non-Gaussian prior distributions, non-Gaussian likelihoods, and bounded state variables is described. The algorithm exploits the capability of the rank histogram filter (RHF) to represent arbitrary prior distributions for observed variables. The rank histogram algorithm can be applied directly to state variables to produce posterior marginal ensembles without the need for regression that is part of standard ensemble filters. These marginals are used to adjust the marginals obtained from a standard ensemble filter that uses regression to update state variables. The final posterior ensemble is obtained by doing an ordered replacement of the posterior marginal ensemble values from a standard ensemble filter with the values obtained from the rank histogram method applied directly to state variables; the algorithm is referred to as the marginal adjustment rank histogram filter (MARHF). Applications to idealized bivariate problems and low-order dynamical systems show that the MARHF can produce better results than standard ensemble methods for priors that are non-Gaussian. Like the original RHF, the MARHF can also make use of arbitrary non-Gaussian observation likelihoods. The MARHF also has advantages for problems with bounded state variables, for instance, the concentration of an atmospheric tracer. Bounds can be automatically respected in the posterior ensembles. With an efficient implementation of the MARHF, the additional cost has better scaling than the standard RHF.
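The ordered-replacement step described above can be sketched in a few lines: the rank structure of the regression-based posterior is kept, while the marginal values are taken from the RHF posterior. The function and variable names below are illustrative, not from the paper.

```python
import numpy as np

def marginal_adjustment(standard_post, rhf_post):
    """Ordered replacement (hypothetical sketch): keep the rank order of
    the regression-based posterior ensemble, but substitute the sorted
    marginal values from the RHF posterior."""
    order = np.argsort(standard_post)
    adjusted = np.empty_like(standard_post)
    # The i-th smallest RHF value goes to the member holding the
    # i-th smallest rank in the standard-filter posterior.
    adjusted[order] = np.sort(rhf_post)
    return adjusted
```

Because the output values are exactly the RHF marginal values, any bounds the RHF respects (for instance, nonnegative tracer concentrations) carry over to the final posterior automatically.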

## Abstract

It is possible to describe many variants of ensemble Kalman filters without loss of generality as the impact of a single observation on a single state variable. For most ensemble algorithms commonly applied to Earth system models, the computation of increments for the observation variable ensemble can be treated as a separate step from computing increments for the state variable ensemble. The state variable increments are normally computed from the observation increments by linear regression using the prior bivariate ensemble of the state and observation variable. Here, a new method that replaces the standard regression with a regression using the bivariate rank statistics is described. This rank regression is expected to be most effective when the relation between a state variable and an observation is nonlinear. The performance of standard versus rank regression is compared for both linear and nonlinear forward operators (also known as observation operators) using a low-order model. Rank regression in combination with a rank histogram filter in observation space produces better analyses than standard regression for cases with nonlinear forward operators and relatively large analysis error. Standard regression, in combination with either a rank histogram filter or an ensemble Kalman filter in observation space, produces the best results in other situations.
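One simplified reading of the rank-regression idea is to substitute ranks for the raw ensemble values when computing the regression coefficient. This is only a sketch of the coefficient itself; the full algorithm also maps increments between rank and physical space, which is omitted here.

```python
import numpy as np

def rank_regression_coefficient(state_ens, obs_ens):
    """Regression coefficient from bivariate rank statistics
    (a simplified sketch, not the paper's complete algorithm)."""
    # Ranks 0..N-1 of each ensemble
    rs = np.argsort(np.argsort(state_ens)).astype(float)
    ro = np.argsort(np.argsort(obs_ens)).astype(float)
    c = np.cov(rs, ro)
    return c[0, 1] / c[1, 1]
```

For a monotone nonlinear forward operator such as `obs = state**3`, the rank coefficient is exactly 1, while an ordinary least-squares slope on the raw values depends strongly on the ensemble spread.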

## Abstract

A robust algorithm, capable of finding nearly stationary solutions of the unforced barotropic vorticity equation near observed atmospheric streamfunctions, is presented. When applied to observed persistent anomaly patterns, the nearly stationary states (NSSs) produced by the algorithm usually have a distinctive appearance. NSSs produced for observed blocks tend to have even stronger blocks, and NSSs for intense jet anomaly patterns have intense jets. When applied to observed patterns that are not associated with persistent anomalies, the algorithm produces low-amplitude relatively zonal NSSs. The blocking and intense jet anomaly NSSs bear a striking resemblance to previously derived analytic stationary solutions of the vorticity equation. In particular, NSS blocking states are similar to certain types of modons. The algorithm is applied to a number of modified observed flows to better document what features of an observed pattern determine the nature of the resulting NSS. The short-wave components of an observed pattern need not be present in order for the algorithm to find interesting zonally varying NSSs. However, short waves play an essential part in the resulting NSSs by balancing the long-wave time tendencies. All the NSSs discovered are unstable to the introduction of small perturbations in the barotropic vorticity equation. Despite this instability, the NSSs still persist for many days when integrated in time. The existence of these persistent NSSs may play a significant role in the appearance and subsequent longevity of persistent anomaly patterns in the atmosphere.

## Abstract

Ensemble Kalman filters use the sample covariance of an observation and a model state variable to update a prior estimate of the state variable. The sample covariance can be suboptimal as a result of small ensemble size, model error, model nonlinearity, and other factors. The most common algorithms for dealing with these deficiencies are inflation and covariance localization. A statistical model of errors in ensemble Kalman filter sample covariances is described and leads to an algorithm that reduces ensemble filter root-mean-square error for some applications. This sampling error correction algorithm uses prior information about the distribution of the correlation between an observation and a state variable. Offline Monte Carlo simulation is used to build a lookup table that contains a correction factor between 0 and 1 depending on the ensemble size and the ensemble sample correlation. Correction factors are applied like a traditional localization for each pair of observations and state variables during an ensemble assimilation. The algorithm is applied to two low-order models and reduces the sensitivity of the ensemble assimilation error to the strength of traditional localization. When tested in perfect model experiments in a larger model, the dynamical core of a general circulation model, the sampling error correction algorithm produces analyses that are closer to the truth and also reduces sensitivity to traditional localization strength.
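The offline Monte Carlo construction of the lookup table might be sketched as follows. The uniform prior over true correlations, the bin layout, and the sample counts are assumptions for illustration, not values from the paper.

```python
import numpy as np

def build_correction_table(ens_size, n_bins=20, n_samples=20000, rng=None):
    """Monte Carlo lookup table of correction factors in [0, 1],
    indexed by absolute sample correlation (a hedged sketch)."""
    rng = np.random.default_rng(0) if rng is None else rng
    true_r = rng.uniform(-1.0, 1.0, n_samples)  # assumed prior on correlation
    samp_r = np.empty(n_samples)
    for i, r in enumerate(true_r):
        cov = [[1.0, r], [r, 1.0]]
        x, y = rng.multivariate_normal([0.0, 0.0], cov, ens_size).T
        samp_r[i] = np.corrcoef(x, y)[0, 1]
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(np.abs(samp_r), bins) - 1, 0, n_bins - 1)
    table = np.zeros(n_bins)
    for b in range(n_bins):
        m = idx == b
        if m.any():
            # Factor minimizing the expected squared regression error
            table[b] = np.sum(true_r[m] * samp_r[m]) / np.sum(samp_r[m] ** 2)
    return np.clip(table, 0.0, 1.0)
```

During assimilation the factor for a given ensemble sample correlation would then multiply the regression coefficient, exactly as a traditional localization weight would.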

## Abstract

A theory for estimating the probability distribution of the state of a model given a set of observations exists. This nonlinear filtering theory unifies the data assimilation and ensemble generation problems that have been key foci of prediction and predictability research for numerical weather and ocean prediction applications. A new algorithm, referred to as an ensemble adjustment Kalman filter, and the more traditional implementation of the ensemble Kalman filter in which “perturbed observations” are used, are derived as Monte Carlo approximations to the nonlinear filter. Both ensemble Kalman filter methods produce assimilations with small ensemble mean errors while providing reasonable measures of uncertainty in the assimilated variables. The ensemble methods can assimilate observations with a nonlinear relation to model state variables and can also use observations to estimate the value of imprecisely known model parameters. These ensemble filter methods are shown to have significant advantages over four-dimensional variational assimilation in low-order models and scale easily to much larger applications. Heuristic modifications to the filtering algorithms allow them to be applied efficiently to very large models by sequentially processing observations and computing the impact of each observation on each state variable in an independent calculation. The ensemble adjustment Kalman filter is applied to a nondivergent barotropic model on the sphere to demonstrate the capabilities of the filters in models with state spaces that are much larger than the ensemble size.

When observations are assimilated in the traditional ensemble Kalman filter, the resulting updated ensemble has a mean that is consistent with the value given by filtering theory, but only the expected value of the covariance of the updated ensemble is consistent with the theory. The ensemble adjustment Kalman filter computes a linear operator that is applied to the prior ensemble estimate of the state, resulting in an updated ensemble whose mean and also covariance are consistent with the theory. In the cases compared here, the ensemble adjustment Kalman filter performs significantly better than the traditional ensemble Kalman filter, apparently because noise introduced into the assimilated ensemble through perturbed observations in the traditional filter limits its relative performance. This superior performance may not occur for all problems and is expected to be most notable for small ensembles. Still, the results suggest that careful study of the capabilities of different varieties of ensemble Kalman filters is appropriate when exploring new applications.
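The contrast between the two updates can be illustrated for a single observed scalar. The EAKF-style update below deterministically shifts and rescales the prior ensemble so the posterior sample mean and variance match the Kalman values exactly, with no perturbed-observation noise; this is a minimal sketch, not the paper's implementation.

```python
import numpy as np

def eakf_obs_update(prior_ens, obs, obs_var):
    """Scalar ensemble adjustment update: a deterministic linear
    contraction of the prior ensemble toward the Kalman posterior."""
    pm = prior_ens.mean()
    pv = prior_ens.var(ddof=1)
    # Standard scalar Kalman filter posterior moments
    post_var = 1.0 / (1.0 / pv + 1.0 / obs_var)
    post_mean = post_var * (pm / pv + obs / obs_var)
    # Shift the mean and shrink the spread; both moments match exactly
    return (prior_ens - pm) * np.sqrt(post_var / pv) + post_mean
```

A perturbed-observations filter instead adds random noise to the observation for each member, so only the expected posterior covariance matches the theory, consistent with the discussion above.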

## Abstract

The binned probability ensemble (BPE) technique is presented as a method for producing forecasts of the probability distribution of a variable using an ensemble of numerical model integrations. The ensemble forecasts are used to partition the real line into a number of bins, each of which has an equal probability of containing the “true” forecast. The method is tested for both a simple low-order dynamical system and a general circulation model (GCM) forced with observed sea surface temperatures (an ensemble of Atmospheric Model Intercomparison Project integrations). The BPE method can also be used to calculate the probability that probabilistic ensemble forecasts are consistent with the verifying observations. The method is not sensitive to the fact that the characteristics of the forecast probability distribution may change drastically for different initial condition (or boundary condition) probability distributions. For example, the method is capable of evaluating whether the variance of a set of ensemble forecasts is consistent with the verifying observed variance. Applying the method to the ensemble of boundary-forced GCM integrations demonstrates that the GCM produces probabilistic forecasts with too little variability for upper-level dynamical fields. Operational weather prediction centers including the U.K. Meteorological Office, the European Centre for Medium-Range Weather Forecasts, and the National Centers for Environmental Prediction have been applying this method, referred to by them as Talagrand diagrams, to the verification of operational ensemble predictions. The BPE method only evaluates the consistency of ensemble predictions and observations and should be used in conjunction with additional verification tools to provide a complete assessment of a set of probabilistic forecasts.
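The binning idea reduces to the rank of the verification among the ensemble members, which is exactly the quantity a Talagrand (rank) histogram accumulates: with N members there are N + 1 equally probable bins when the forecasts are reliable. A minimal sketch:

```python
import numpy as np

def bin_rank(ensemble, verification):
    """Bin index of the verifying value among the sorted ensemble
    members: 0 if below all members, N if above all of them."""
    return int(np.sum(ensemble < verification))
```

Accumulating `bin_rank` over many forecast cases and plotting the counts gives the histogram; a U-shape indicates too little ensemble variability, as found above for the upper-level fields.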

## Abstract

Knowledge of the probability distribution of initial conditions is central to almost all practical studies of predictability and to improvements in stochastic prediction of the atmosphere. Traditionally, data assimilation for atmospheric predictability or prediction experiments has attempted to find a single “best” estimate of the initial state. Additional information about the initial condition probability distribution is then obtained primarily through heuristic techniques that attempt to generate representative perturbations around the best estimate. However, a classical theory for generating an estimate of the complete probability distribution of an initial state given a set of observations exists. This nonlinear filtering theory can be applied to unify the data assimilation and ensemble generation problem and to produce superior estimates of the probability distribution of the initial state of the atmosphere (or ocean) on regional or global scales. A Monte Carlo implementation of the fully nonlinear filter has been developed and applied to several low-order models. The method is able to produce assimilations with small ensemble mean errors while also providing random samples of the initial condition probability distribution. The Monte Carlo method can be applied in models that traditionally require the application of initialization techniques without any explicit initialization. Initial application to larger models is promising, but a number of challenges remain before the method can be extended to large realistic forecast models.

## Abstract

This study presents the first application of a localized particle filter (PF) for data assimilation in a high-dimensional geophysical model. Particle filters form Monte Carlo approximations of model probability densities conditioned on observations, while making no assumptions about the underlying error distribution. Unlike standard PFs, the local PF uses a localization function to reduce the influence of distant observations on state variables, which significantly decreases the number of particles required to maintain the filter’s stability. Because the local PF operates effectively using small numbers of particles, it provides a possible alternative to Gaussian filters, such as ensemble Kalman filters, for large geophysical models. In the current study, the local PF is compared with stochastic and deterministic ensemble Kalman filters using a simplified atmospheric general circulation model. The local PF is found to provide stable filtering results over yearlong data assimilation experiments using only 25 particles. The local PF also outperforms the Gaussian filters when observation networks include measurements that have non-Gaussian errors or relate nonlinearly to the model state, like remotely sensed data used frequently in atmospheric analyses. Results from this study encourage further testing of the local PF on more complex geophysical systems, such as weather prediction models.
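The localization idea can be caricatured as a distance-dependent taper on the importance weights, so that an observation far from a state variable leaves its weights uniform. The linear taper and uniform blend below are assumptions for illustration, not the local PF's actual weight update.

```python
import numpy as np

def local_pf_weights(particles_at_obs, obs, obs_var, dist, loc_radius):
    """Schematic localized importance weights for one observation."""
    # Gaussian likelihood of the observation given each particle
    lik = np.exp(-0.5 * (particles_at_obs - obs) ** 2 / obs_var)
    # Assumed taper: full influence at zero distance, none beyond the radius
    alpha = max(0.0, 1.0 - dist / loc_radius)
    # Blend toward uniform weights as distance grows
    w = alpha * lik + (1.0 - alpha)
    return w / w.sum()
```

Reducing each observation's footprint this way is what lets the filter remain stable with as few as 25 particles, since weight collapse is a local rather than global phenomenon.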

## Abstract

A variant of a least squares ensemble (Kalman) filter that is suitable for implementation on parallel architectures is presented. This parallel ensemble filter produces results that are identical to those from sequential algorithms already described in the literature when forward observation operators that relate the model state vector to the expected value of observations are linear (although actual results may differ due to floating point arithmetic round-off error). For nonlinear forward observation operators, the sequential and parallel algorithms solve different linear approximations to the full problem but produce qualitatively similar results. The parallel algorithm can be implemented to produce identical answers with the state variable prior ensembles arbitrarily partitioned onto a set of processors for the assimilation step (no caveat on round-off is needed for this result).

Example implementations of the parallel algorithm are described for environments with low (high) communication latency and cost. Hybrids of these implementations and the traditional sequential ensemble filter can be designed to optimize performance for a variety of parallel computing environments. For large models on machines with good communications, it is possible to implement the parallel algorithm to scale efficiently to thousands of processors while bit-wise reproducing the results from a single processor implementation. Timing results on several Linux clusters are presented from an implementation appropriate for machines with low-latency communication.

Most ensemble Kalman filter variants that have appeared in the literature differ only in the details of how a prior ensemble estimate of a scalar observation is updated given an observed value and the observational error distribution. These details do not impact other parts of either the sequential or parallel filter algorithms here, so a variety of ensemble filters including ensemble square root and perturbed observations filters can be used with all the implementations described.
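The per-pair computation that both the sequential and parallel algorithms share, updating one state variable from one observation's increments by linear regression on the prior joint ensemble, can be sketched as follows (names are illustrative):

```python
import numpy as np

def regress_increments(state_ens, obs_prior_ens, obs_increments):
    """Map observation-space increments onto one state variable via
    linear regression on the prior bivariate ensemble."""
    c = np.cov(state_ens, obs_prior_ens)
    beta = c[0, 1] / c[1, 1]  # regression coefficient
    return state_ens + beta * obs_increments
```

Because this step is independent for every (observation, state variable) pair, the state ensemble can be partitioned across processors and each partition updated without communication beyond the observation increments themselves.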

## Abstract

A forced, nonlinear barotropic model on the sphere is shown to simulate some of the structure of the observed Northern Hemisphere midlatitude storm tracks with reasonable accuracy. For the parameter range chosen, the model has no unstable modes with significant amplitude in the storm track regions; however, several decaying modes with structures similar to the storm track are discovered. The model's midlatitude storm tracks also coincide with the location of a waveguide that is obtained by assuming that the horizontal variation of the time-mean flow is small compared with the scale of the transient eddies. Since the model is able to mimic the behavior of the observed storm tracks without any baroclinic dynamics, it is argued that the barotropic waveguide effects of the time-mean background flow acting on individual eddies are partially responsible for the observed storm track structure.
