# Search Results

## Showing 1 - 10 of 59 items for:

- Author or Editor: Jeffrey Anderson
- Article
- Access: All Content

## Abstract

The unstable normal modes of the barotropic vorticity equation, linearized around an observed zonally varying atmospheric flow, have been related to patterns of observed low-frequency variability. The sensitivity of this problem to changes in the model truncation and diffusion and to details of the basic state flow is examined. Normal modes that are highly sensitive to these changes are found to be of minimal relevance to the low-frequency variability of the atmosphere.

A new numerical method capable of efficiently finding a number of the most unstable modes of large eigenvalue problems is used to examine the effects of model truncation on the instability problem. Most previous studies are found to have utilized models of insufficiently high resolution. A small subset of unstable modes is found to be robust to changes in truncation. Sensitivity to changes in diffusion in a low-resolution model can partially reproduce the truncation results.

Sensitivity to the basic state is examined using a matrix method and by examining the normal modes of perturbed basic states. Again, a small subset of unstable normal modes is found to be robust. These modes appear to agree better with observed patterns of low-frequency variability than do less robust unstable modes.
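
The paper's eigensolver is not described here, but the task it performs (extracting only the few most unstable modes of a large linearized operator) can be sketched with a standard sparse eigensolver. The matrix below is a random stand-in, not a discretized vorticity operator, and all names are illustrative:

```python
# Sketch: find the few most unstable normal modes of a linear system
# dx/dt = A x, i.e. the eigenvalues of A with largest real part, using
# ARPACK via scipy.sparse.linalg.eigs. A here is a toy random operator.
import numpy as np
from scipy.sparse.linalg import eigs

rng = np.random.default_rng(0)
n = 200                                        # size of truncated model
A = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in linear operator

# Request only the 6 eigenvalues with largest real part ('LR'): these
# correspond to the fastest-growing normal modes.
vals, vecs = eigs(A, k=6, which='LR')

growth_rates = np.sort(vals.real)[::-1]        # descending growth rates
```

Only a handful of modes are computed, which is what makes truncation-sensitivity studies with many repeated eigenproblems affordable.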

## Abstract

A general framework for deterministic univariate ensemble filtering is presented. The framework fits a continuous prior probability density function (PDF) to the prior ensemble. A functional representation for the observation likelihood is combined with the prior PDF to get a continuous analysis (posterior) PDF. Cumulative distribution functions for the prior and analysis are also required. The key innovation is that an analysis ensemble is computed so that the quantile of each ensemble member is the same as its prior quantile. Many choices for the prior PDF family and the likelihood function are described. A choice of normal prior with normal likelihood is equivalent to the ensemble adjustment Kalman filter. Some other choices for the prior include gamma, inverse gamma, beta, beta prime, lognormal, and exponential distributions. Both prior distributions and likelihoods can be defined over a set of intervals, giving additional flexibility that can be used to implement methods like a Huber likelihood for observations with occasional outliers. Priors and likelihoods can also be defined as sums of distributions, allowing choices like bivariate normals or kernel filters. Empirical distributions, for instance piecewise linear approximations to arbitrary PDFs and functions, can be used. Another empirical choice leads to the rank histogram filter. Results here are univariate and can be used to compute increments for observed variables or marginal distributions for any variable for a reanalysis. Linear regression of increments can be used to update state variables in a serial filter to build a comprehensive data assimilation system. Part 2 will discuss other methods for extending the framework to multivariate data assimilation.

### Significance Statement

Data assimilation is used to combine information from model forecasts with subsequent observations to obtain better estimates of the current state of the atmosphere or other parts of the Earth system. Ensemble data assimilation uses a number of forecasts to get more information about uncertainty. A new method allows much more flexibility in the assumptions that must be made when doing ensemble data assimilation. As an example, the method can be better for quantities that are bounded like the amount of an atmospheric trace pollutant.
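
For the normal-prior, normal-likelihood choice mentioned in the abstract, the quantile-conserving update can be sketched in a few lines (function and variable names are ours, not the paper's):

```python
# Sketch of the quantile-conserving idea for the Gaussian/Gaussian case:
# fit a normal PDF to the prior ensemble, combine it with a normal
# observation likelihood, then move each member to the posterior value
# having the same quantile it had under the prior fit.
import numpy as np
from scipy.stats import norm

def quantile_conserving_update(ens, y_obs, obs_var):
    mu, sd = ens.mean(), ens.std(ddof=1)           # continuous prior fit
    post_var = 1.0 / (1.0 / sd**2 + 1.0 / obs_var)  # Gaussian product
    post_mu = post_var * (mu / sd**2 + y_obs / obs_var)
    q = norm.cdf(ens, mu, sd)                       # prior quantiles
    return norm.ppf(q, post_mu, np.sqrt(post_var))  # same quantiles, posterior
```

For this particular choice the quantile map reduces to a deterministic shift-and-scale of the ensemble, consistent with the stated equivalence to the ensemble adjustment Kalman filter; other prior/likelihood families would change only the CDF and inverse-CDF used.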

## Abstract

Many methods using ensemble integrations of prediction models as integral parts of data assimilation have appeared in the atmospheric and oceanic literature. In general, these methods have been derived from the Kalman filter and have been known as ensemble Kalman filters. A more general class of methods including these ensemble Kalman filter methods is derived starting from the nonlinear filtering problem. When working in a joint state–observation space, many features of ensemble filtering algorithms are easier to derive and compare. The ensemble filter methods derived here make a (local) least squares assumption about the relation between prior distributions of an observation variable and model state variables. In this context, the update procedure applied when a new observation becomes available can be described in two parts. First, an update increment is computed for each prior ensemble estimate of the observation variable by applying a scalar ensemble filter. Second, a linear regression of the prior ensemble sample of each state variable on the observation variable is performed to compute update increments for each state variable ensemble member from corresponding observation variable increments. The regression can be applied globally or locally using Gaussian kernel methods.

Several previously documented ensemble Kalman filter methods, the perturbed observation ensemble Kalman filter and ensemble adjustment Kalman filter, are developed in this context. Some new ensemble filters that extend beyond the Kalman filter context are also discussed. The two-part method can provide a computationally efficient implementation of ensemble filters and allows more straightforward comparison of methods since they differ only in the solution of a scalar filtering problem.
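
The two-part update described above can be given a minimal sketch, assuming an EAKF-style scalar filter for the observed variable (all names are illustrative):

```python
# Part 1: compute increments for the observed variable with a scalar
# ensemble filter. Part 2: regress those increments onto each state
# variable using the prior ensemble sample covariance.
import numpy as np

def scalar_eakf_increments(y_ens, y_obs, obs_var):
    """Deterministic (EAKF-style) increments for the observed variable."""
    mu, var = y_ens.mean(), y_ens.var(ddof=1)
    post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    post_mu = post_var * (mu / var + y_obs / obs_var)
    shifted = post_mu + np.sqrt(post_var / var) * (y_ens - mu)
    return shifted - y_ens

def two_step_update(x_ens, y_ens, y_obs, obs_var):
    """x_ens: (n_state, n_ens); y_ens: (n_ens,) prior obs-space ensemble."""
    dy = scalar_eakf_increments(y_ens, y_obs, obs_var)
    y_anom = y_ens - y_ens.mean()
    # sample regression coefficient of each state variable on y
    b = (x_ens - x_ens.mean(axis=1, keepdims=True)) @ y_anom / (y_anom @ y_anom)
    return x_ens + np.outer(b, dy)
```

Swapping in a different scalar filter for `scalar_eakf_increments` changes only part 1, which is exactly the comparison the two-part framing is meant to enable.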

## Abstract

Ensemble Kalman filters are widely used for data assimilation in large geophysical models. Good results with affordable ensemble sizes require enhancements to the basic algorithms to deal with insufficient ensemble variance and spurious ensemble correlations between observations and state variables. These challenges are often dealt with by using inflation and localization algorithms. A new method for understanding and reducing some ensemble filter errors is introduced and tested. The method assumes that sampling error due to small ensemble size is the primary source of error. Sampling error in the ensemble correlations between observations and state variables is reduced by estimating the distribution of correlations as part of the ensemble filter algorithm. This correlation error reduction (CER) algorithm can produce high-quality ensemble assimilations in low-order models without using any a priori localization like a specified localization function. The method is also applied in an observing system simulation experiment with a very coarse resolution dry atmospheric general circulation model. This demonstrates that the algorithm provides insight into the need for localization in large geophysical applications, suggesting that sampling error may be a primary cause in some cases.

## Abstract

An objective criterion for identifying blocking events is applied to a ten-year climate run of the National Meteorological Center's Medium-Range Forecast Model (MRF) and to observations. The climatology of blocking in the ten-year run is found to be somewhat realistic in the Northern Hemisphere, although when averaged over all longitudes and seasons a general lack of blocking is found. Previous studies have suggested that numerical models are incapable of producing realistic numbers of blocks; however, the ten-year model run is able to produce realistic numbers of blocks for selected geographic regions and seasons. In these regions, blocks are found to persist longer than observed blocking events. The ten-year run of the model is also able to reproduce the average longitudinal extent and motion of the observed blocks. These results suggest that the MRF is able to generate and maintain realistic blocks, but only at longitudes and seasons for which the underlying model climate is conducive. In the Southern Hemisphere, the ten-year run blocking climatology is considerably less realistic. The appearance of “transient” blocking events in the model distinguishes it from the Southern Hemisphere observations and from the Northern Hemisphere.

A set of 60-day forecasts by the MRF is used to evaluate the evolution of the model blocking climatology with lead time (blocking climate drift) for a 90-day period in autumn of 1990. Although the ten-year run and observed blocking climates are quite similar at most longitudes at this time of year, it is found that blocking almost entirely disappears from the model forecasts at lead times of approximately 10 days before reappearing at leads greater than 15 days. It is argued that this lack of a direct transition between observed and model blocking climates is the result of a drift in the underlying climate (for example, the positions of the jet streams) in the MRF forecasts. If so, the climate drift of the MRF must be further reduced in order to produce more accurate medium-range forecasts of blocking events.

## Abstract

A number of operational atmospheric prediction centers now produce ensemble forecasts of the atmosphere. Because of the high-dimensional phase spaces associated with operational forecast models, many centers use constraints derived from the dynamics of the forecast model to define a greatly reduced subspace from which ensemble initial conditions are chosen. For instance, the European Centre for Medium-Range Weather Forecasts uses singular vectors of the forecast model and the National Centers for Environmental Prediction use the “breeding cycle” to determine a limited set of directions in phase space that are sampled by the ensemble forecast.

The use of dynamical constraints on the selection of initial conditions for ensemble forecasts is examined in a perfect model study using a pair of three-variable dynamical systems and a prescribed observational error distribution. For these systems, one can establish that the direct use of dynamical constraints has no impact on the error of the ensemble mean forecast and a negative impact on forecasts of higher-moment quantities such as forecast spread. Simple examples are presented to show that this is not a result of the use of low-order dynamical systems but is instead related to the fundamental nature of the dynamics of these particular low-order systems themselves. Unless operational prediction models have fundamentally different dynamics, this study suggests that the use of dynamically constrained ensembles may not be justified. Further studies with more realistic prediction models are needed to evaluate this possibility.

## Abstract

A deterministic square root ensemble Kalman filter and a stochastic perturbed observation ensemble Kalman filter are used for data assimilation in both linear and nonlinear single variable dynamical systems. For the linear system, the deterministic filter is simply a method for computing the Kalman filter and is optimal while the stochastic filter has suboptimal performance due to sampling error. For the nonlinear system, the deterministic filter has increasing error as ensemble size increases because all ensemble members but one become tightly clustered. In this case, the stochastic filter performs better for sufficiently large ensembles. A new method for computing ensemble increments in observation space is proposed that does not suffer from the pathological behavior of the deterministic filter while avoiding much of the sampling error of the stochastic filter. This filter uses the order statistics of the prior observation space ensemble to create an approximate continuous prior probability distribution in a fashion analogous to the use of rank histograms for ensemble forecast evaluation. This rank histogram filter can represent non-Gaussian observation space priors and posteriors and is shown to be competitive with existing filters for problems as large as global numerical weather prediction. The ability to represent non-Gaussian distributions is useful for a variety of applications such as convective-scale assimilation and assimilation of bounded quantities such as relative humidity.
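
The order-statistics construction can be sketched as a piecewise-linear approximate prior CDF, a simplification that ignores the filter's actual treatment of the tails (names are ours):

```python
# Sketch: the sorted prior ensemble partitions the line into regions of
# roughly equal probability mass, so placing sorted member i at quantile
# (i+1)/(N+1) and interpolating linearly gives an approximate continuous
# prior CDF, analogous to how rank histograms assign ranks.
import numpy as np

def empirical_prior_cdf(ens, x):
    """Piecewise-linear CDF from the ensemble's order statistics."""
    s = np.sort(ens)                         # order statistics
    n = s.size
    q = np.arange(1, n + 1) / (n + 1.0)      # quantile of each sorted member
    # crude tail handling: clamp to 0/1 outside the ensemble range
    return np.interp(x, s, q, left=0.0, right=1.0)
```

Because the CDF follows the ensemble rather than a fitted Gaussian, skewed or bounded priors are represented directly, which is the property the abstract highlights for quantities like relative humidity.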

## Abstract

Nearly stationary states (NSSs) of the barotropic vorticity equation (BVE) on the sphere that are closely related to observed atmospheric blocking patterns have recently been derived. Examining the way such NSSs affect integrations of the BVE is of interest. Unfortunately, the BVE rapidly evolves away from the neighborhood of blocking NSSs due to instability and never again generates sufficient amplitude to return to the vicinity of the blocking NSSs. However, forced versions of the BVE with both a high amplitude blocking NSS and more zonal low amplitude NSSs can be constructed. For certain parameter ranges, extended integrations of these forced BVEs exhibit two “regimes,” one strongly blocked and the other relatively zonal. Somewhat realistic simulations of low- and high-frequency variability and individual blocking event life cycles are also produced by these forced barotropic models. It is argued here that these regimes are related to “attractor-like” behavior of the NSSs of the forced BVE. Strong barotropic short waves apparently provide the push needed to cause a transition to or from the blocked regime. In the purely barotropic model used here, there is a rather delicate balance required between the forcing strength for different spatial scales in order to produce regimelike behavior. However, the mechanism proposed appears to be a viable candidate for explaining the observed behavior of blocking events in the atmosphere.

## Abstract

An extremely simple chaotic model, the three-variable Lorenz convective model, is used in a perfect model setting to study the selection of initial conditions for ensemble forecasts. Observations with a known distribution of error are sampled from the “climate” of the simple model. Initial condition distributions that use only information about the observation and the observational error distribution (i.e., traditional Monte Carlo methods) are shown to differ from the correct initial condition distributions, which make use of additional information about the local structure of the model's attractor. Three relatively inexpensive algorithms for finding the local attractor structure in a simple model are examined; these make use of singular vectors, normal modes, and perturbed integrations. All of these are related to heuristic algorithms that have been applied to select ensemble members in operational forecast models. The method of perturbed integrations, which is somewhat similar to the “breeding” method used at the National Meteorological Center, is shown to be the most effective in this context. Validating the extension of such methods to realistic models is expected to be extremely difficult; however, it seems reasonable that utilizing all available information about the attractor structure of real forecast models when selecting ensemble initial conditions could improve the success of operational ensemble forecasts.
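
The perturbed-integration idea can be sketched directly in the Lorenz three-variable model (standard 1963 parameter values; the rescaling cycle below is a simplified, breeding-like procedure, not necessarily the paper's exact algorithm):

```python
# Sketch: integrate a control and a perturbed state together, and after
# each cycle rescale the difference back to a small fixed amplitude. The
# surviving perturbation aligns with locally fast-growing directions on
# the attractor, which is the information an ensemble could exploit.
import numpy as np

def lorenz_rhs(s, sigma=10.0, r=28.0, b=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) three-variable model."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (r - z) - y, x * y - b * z])

def rk4_step(s, dt=0.01):
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def breed(x0, amp=1e-3, cycles=200, steps_per_cycle=10, seed=0):
    """Return a unit-amplitude bred-vector-like perturbation at the end state."""
    rng = np.random.default_rng(seed)
    ctrl = x0.astype(float).copy()
    pert = ctrl + amp * rng.standard_normal(3)
    for _ in range(cycles):
        for _ in range(steps_per_cycle):
            ctrl, pert = rk4_step(ctrl), rk4_step(pert)
        d = pert - ctrl
        pert = ctrl + amp * d / np.linalg.norm(d)   # rescale perturbation
    return (pert - ctrl) / amp
```
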

## Abstract

Localization is a method for reducing the impact of sampling errors in ensemble Kalman filters. Here, the regression coefficient, or gain, relating ensemble increments for observed quantity *y* to increments for state variable *x* is multiplied by a real number *α* defined as a localization. Localization of the impact of observations on model state variables is required for good performance when applying ensemble data assimilation to large atmospheric and oceanic problems. Localization also improves performance in idealized low-order ensemble assimilation applications. An algorithm that computes localization from the output of an ensemble observing system simulation experiment (OSSE) is described. The algorithm produces localizations for sets of pairs of observations and state variables: for instance, all state variables that are between 300- and 400-km horizontal distance from an observation. The algorithm is applied in a low-order model to produce localizations from the output of an OSSE and the computed localizations are then used in a new OSSE. Results are compared to assimilations using tuned localizations that are approximately Gaussian functions of the distance between an observation and a state variable. In most cases, the empirically computed localizations produce the lowest root-mean-square errors in subsequent OSSEs. Localizations derived from OSSE output can provide guidance for localization in real assimilation experiments. Applying the algorithm in large geophysical applications may help to tune localization for improved ensemble filter performance.
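
A simplified version of the binned estimate can be sketched as follows: in each observation-state distance bin, choose the factor *α* by least squares fit of OSSE-diagnosed "true" increments to the regression-based increments (our construction, not necessarily the paper's exact estimator):

```python
# Sketch: one localization value per distance bin. reg_incs are the
# unlocalized regression increments from an OSSE, true_incs are the
# increments diagnosed against the known truth, dists are the
# observation-to-state distances for each pair.
import numpy as np

def empirical_localization(reg_incs, true_incs, dists, bin_edges):
    """Least squares alpha = sum(true*reg)/sum(reg**2) in each bin."""
    alphas = np.full(len(bin_edges) - 1, np.nan)
    idx = np.digitize(dists, bin_edges) - 1
    for k in range(len(alphas)):
        r, t = reg_incs[idx == k], true_incs[idx == k]
        if r.size and np.any(r != 0.0):
            alphas[k] = (t @ r) / (r @ r)
    return alphas
```

Plotting the resulting alphas against bin distance is what allows the comparison with tuned, approximately Gaussian localization functions described in the abstract.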
