Search Results

You are looking at 1 - 10 of 66 items for

  • Author or Editor: Jeffrey Anderson
Jeffrey L. Anderson

Abstract

The unstable normal modes of the barotropic vorticity equation, linearized around an observed zonally varying atmospheric flow, have been related to patterns of observed low-frequency variability. The sensitivity of this problem to changes in the model truncation and diffusion and to details of the basic state flow is examined. Normal modes that are highly sensitive to these changes are found to be of minimal relevance to the low-frequency variability of the atmosphere.

A new numerical method capable of efficiently finding a number of the most unstable modes of large eigenvalue problems is used to examine the effects of model truncation on the instability problem. Most previous studies are found to have used models of insufficient resolution. A small subset of unstable modes is found to be robust to changes in truncation. Varying the diffusion in a low-resolution model can partially reproduce the truncation results.
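
As a rough illustration of the kind of computation involved, the sketch below extracts a few of the fastest-growing eigenmodes of a large linear operator with an iterative Arnoldi-type solver. The random stand-in operator and all names are placeholders, not the paper's actual model or method.

    import numpy as np
    from scipy.sparse.linalg import eigs

    def most_unstable_modes(L, n_modes=10):
        """Return the n_modes eigenpairs of L with the largest real part (fastest growth)."""
        # 'LR' asks the iterative Arnoldi solver for eigenvalues with the largest real part.
        vals, vecs = eigs(L, k=n_modes, which='LR')
        order = np.argsort(-vals.real)
        return vals[order], vecs[:, order]

    # Placeholder operator: a random matrix standing in for the linearized model.
    rng = np.random.default_rng(0)
    L = rng.standard_normal((500, 500)) / np.sqrt(500)
    growth_rates, modes = most_unstable_modes(L, n_modes=5)
    print(growth_rates.real)  # approximate growth rates of the leading modes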

Sensitivity to the basic state is assessed using a matrix method and by computing the normal modes of perturbed basic states. Again, a small subset of unstable normal modes is found to be robust. These modes appear to agree better with observed patterns of low-frequency variability than do less robust unstable modes.

Full access
Jeffrey L. Anderson

Abstract

A robust algorithm, capable of finding nearly stationary solutions of the unforced barotropic vorticity equation near observed atmospheric streamfunctions, is presented. When applied to observed persistent anomaly patterns, the nearly stationary states (NSSs) produced by the algorithm usually have a distinctive appearance. NSSs produced for observed blocks tend to have even stronger blocks, and NSSs for intense jet anomaly patterns have intense jets. When applied to observed patterns that are not associated with persistent anomalies, the algorithm produces low-amplitude, relatively zonal NSSs. The blocking and intense jet anomaly NSSs bear a striking resemblance to previously derived analytic stationary solutions of the vorticity equation. In particular, NSS blocking states are similar to certain types of modons.

The algorithm is applied to a number of modified observed flows to better document what features of an observed pattern determine the nature of the resulting NSS. The short-wave components of an observed pattern need not be present in order for the algorithm to find interesting zonally varying NSSs. However, short waves play an essential part in the resulting NSSs by balancing the long-wave time tendencies. All the NSSs discovered are unstable to the introduction of small perturbations in the barotropic vorticity equation. Despite this instability, the NSSs still persist for many days when integrated in time. The existence of these persistent NSSs may play a significant role in the appearance and subsequent longevity of persistent anomaly patterns in the atmosphere.
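
To make the notion of a "nearly stationary state" concrete, here is a minimal sketch that minimizes the squared norm of a model's time tendency plus a penalty on the distance from a given state. The Lorenz-63 tendency stands in for the barotropic vorticity equation; this illustrates the general idea only and is not the paper's variational algorithm.

    import numpy as np
    from scipy.optimize import minimize

    def toy_tendency(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """Lorenz-63 tendency, standing in for the barotropic vorticity equation."""
        return np.array([sigma * (x[1] - x[0]),
                         x[0] * (rho - x[2]) - x[1],
                         x[0] * x[1] - beta * x[2]])

    def nss_cost(x, x_obs, penalty=0.1):
        # Small time tendency (near stationarity) while staying close to x_obs.
        return np.sum(toy_tendency(x) ** 2) + penalty * np.sum((x - x_obs) ** 2)

    x_obs = np.array([1.0, 2.0, 20.0])                 # the "observed" state
    result = minimize(nss_cost, x_obs, args=(x_obs,), method='BFGS')
    print(result.x, np.linalg.norm(toy_tendency(result.x)))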

Full access
Jeffrey L. Anderson

Abstract

Many methods using ensemble integrations of prediction models as integral parts of data assimilation have appeared in the atmospheric and oceanic literature. In general, these methods have been derived from the Kalman filter and have been known as ensemble Kalman filters. A more general class of methods, including these ensemble Kalman filters, is derived starting from the nonlinear filtering problem. Working in a joint state–observation space makes many features of ensemble filtering algorithms easier to derive and compare. The ensemble filter methods derived here make a (local) least squares assumption about the relation between prior distributions of an observation variable and model state variables. In this context, the update procedure applied when a new observation becomes available can be described in two parts. First, an update increment is computed for each prior ensemble estimate of the observation variable by applying a scalar ensemble filter. Second, a linear regression of the prior ensemble sample of each state variable on the observation variable is performed to compute update increments for each state variable ensemble member from corresponding observation variable increments. The regression can be applied globally or locally using Gaussian kernel methods.
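
The two-part update just described can be sketched in a few lines; the scalar step below uses an EAKF-style deterministic update as one example of a scalar ensemble filter, and the variable names are illustrative rather than taken from any particular code base.

    import numpy as np

    def scalar_eakf_increments(y_prior, y_obs, obs_var):
        """Deterministic scalar update: shift and contract the prior observation ensemble."""
        prior_mean, prior_var = np.mean(y_prior), np.var(y_prior, ddof=1)
        post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
        post_mean = post_var * (prior_mean / prior_var + y_obs / obs_var)
        y_post = post_mean + np.sqrt(post_var / prior_var) * (y_prior - prior_mean)
        return y_post - y_prior                        # observation-space increments

    def regress_increments(x_prior, y_prior, dy):
        """Map observation increments to one state variable by linear regression."""
        slope = np.cov(x_prior, y_prior, ddof=1)[0, 1] / np.var(y_prior, ddof=1)
        return slope * dy                              # state-variable increments

    # Toy example: 20-member ensemble, one state variable observed indirectly.
    rng = np.random.default_rng(1)
    x_prior = rng.normal(1.0, 1.0, 20)                          # prior state ensemble
    y_prior = 2.0 * x_prior + rng.normal(0.0, 0.1, 20)          # prior observation ensemble
    dy = scalar_eakf_increments(y_prior, y_obs=3.0, obs_var=0.5)
    x_post = x_prior + regress_increments(x_prior, y_prior, dy)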

Several previously documented ensemble Kalman filter methods, including the perturbed observation ensemble Kalman filter and the ensemble adjustment Kalman filter, are developed in this context. Some new ensemble filters that extend beyond the Kalman filter context are also discussed. The two-part method can provide a computationally efficient implementation of ensemble filters and allows more straightforward comparison of methods, since they differ only in the solution of a scalar filtering problem.

Full access
Jeffrey L. Anderson

Abstract

Many variants of ensemble Kalman filters can be described, without loss of generality, in terms of the impact of a single observation on a single state variable. For most ensemble algorithms commonly applied to Earth system models, the computation of increments for the observation variable ensemble can be treated as a separate step from computing increments for the state variable ensemble. The state variable increments are normally computed from the observation increments by linear regression using the prior bivariate ensemble of the state and observation variable. Here, a new method that replaces the standard regression with a regression using bivariate rank statistics is described. This rank regression is expected to be most effective when the relation between a state variable and an observation is nonlinear. The performance of standard versus rank regression is compared for both linear and nonlinear forward operators (also known as observation operators) using a low-order model. Rank regression in combination with a rank histogram filter in observation space produces better analyses than standard regression for cases with nonlinear forward operators and relatively large analysis error. Standard regression, in combination with either a rank histogram filter or an ensemble Kalman filter in observation space, produces the best results in other situations.
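
The sketch below is a simplified illustration of regressing in rank space rather than state space: ranks of the bivariate prior ensemble are used to fit the slope, updated observation values are mapped to ranks, and the updated ranks are mapped back through the empirical prior distribution of the state variable. It is an interpretation of the idea for illustration only, not the exact algorithm in the paper.

    import numpy as np
    from scipy.stats import rankdata

    def rank_regression_update(x_prior, y_prior, y_post):
        """Update the state ensemble from a posterior observation ensemble
        by regressing in rank space instead of state space."""
        n = len(x_prior)
        rx, ry = rankdata(x_prior), rankdata(y_prior)        # bivariate ranks
        slope = np.cov(rx, ry, ddof=1)[0, 1] / np.var(ry, ddof=1)

        # Map posterior observation values onto equivalent ranks of the prior
        # observation ensemble, then apply the rank-space regression.
        ry_post = np.interp(y_post, np.sort(y_prior), np.arange(1, n + 1))
        rx_post = rx + slope * (ry_post - ry)

        # Convert updated ranks back to values through the sorted prior state
        # ensemble (values outside the prior range are clipped by np.interp).
        return np.interp(rx_post, np.arange(1, n + 1), np.sort(x_prior))

    # Toy example with a nonlinear (quadratic) forward operator.
    rng = np.random.default_rng(3)
    x_prior = rng.normal(1.0, 0.5, 40)
    y_prior = x_prior ** 2 + rng.normal(0.0, 0.05, 40)
    y_post = y_prior + 0.3 * (1.5 - y_prior)                 # some observation-space update
    x_post = rank_regression_update(x_prior, y_prior, y_post)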

Full access
Jeffrey L. Anderson

Abstract

A number of operational atmospheric prediction centers now produce ensemble forecasts of the atmosphere. Because of the high-dimensional phase spaces associated with operational forecast models, many centers use constraints derived from the dynamics of the forecast model to define a greatly reduced subspace from which ensemble initial conditions are chosen. For instance, the European Centre for Medium-Range Weather Forecasts uses singular vectors of the forecast model and the National Centers for Environmental Prediction use the “breeding cycle” to determine a limited set of directions in phase space that are sampled by the ensemble forecast.
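
As a minimal sketch of one of the dynamical constraints mentioned above, a breeding cycle can be illustrated in a toy model: a perturbed forecast is run alongside a control, and their difference is rescaled to a fixed amplitude at regular intervals so the perturbation aligns with fast-growing directions. The Lorenz-63 model, the Euler time step, and all parameters are placeholders, not NCEP's operational implementation.

    import numpy as np

    def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        return x + dt * dx                            # simple forward Euler step

    def breed(x0, n_cycles=50, steps_per_cycle=25, amplitude=0.1, seed=0):
        rng = np.random.default_rng(seed)
        control = x0.copy()
        perturbed = x0 + amplitude * rng.standard_normal(3)
        for _ in range(n_cycles):
            for _ in range(steps_per_cycle):
                control = lorenz63_step(control)
                perturbed = lorenz63_step(perturbed)
            diff = perturbed - control
            diff *= amplitude / np.linalg.norm(diff)  # rescale the bred vector
            perturbed = control + diff
        return diff                                    # bred vector after the last cycle

    bred_vector = breed(np.array([1.0, 1.0, 20.0]))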

The use of dynamical constraints on the selection of initial conditions for ensemble forecasts is examined in a perfect model study using a pair of three-variable dynamical systems and a prescribed observational error distribution. For these systems, one can establish that the direct use of dynamical constraints has no impact on the error of the ensemble mean forecast and a negative impact on forecasts of higher-moment quantities such as forecast spread. Simple examples are presented to show that this is not a result of the use of low-order dynamical systems but is instead related to the fundamental nature of the dynamics of these particular low-order systems themselves. Unless operational prediction models have fundamentally different dynamics, this study suggests that the use of dynamically constrained ensembles may not be justified. Further studies with more realistic prediction models are needed to evaluate this possibility.

Full access
Jeffrey L. Anderson

Abstract

Nearly stationary states (NSSs) of the barotropic vorticity equation (BVE) on the sphere that are closely related to observed atmospheric blocking patterns have recently been derived. Examining the way such NSSs affect integrations of the BVE is of interest. Unfortunately, the BVE rapidly evolves away from the neighborhood of blocking NSSs due to instability and never again generates sufficient amplitude to return to the vicinity of the blocking NSSs. However, forced versions of the BVE with both a high-amplitude blocking NSS and more zonal low-amplitude NSSs can be constructed. For certain parameter ranges, extended integrations of these forced BVEs exhibit two “regimes,” one strongly blocked and the other relatively zonal. Somewhat realistic simulations of low- and high-frequency variability and of individual blocking event life cycles are also produced by these forced barotropic models. It is argued here that these regimes are related to “attractor-like” behavior of the NSSs of the forced BVE. Strong barotropic short waves apparently provide the push needed to cause a transition to or from the blocked regime. In the purely barotropic model used here, a rather delicate balance between the forcing strengths at different spatial scales is required to produce regimelike behavior. However, the mechanism proposed appears to be a viable candidate for explaining the observed behavior of blocking events in the atmosphere.
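
As a minimal sketch of how a forced model can be given a prescribed stationary state, note that for any model dx/dt = f(x) + F, choosing F = -f(x*) makes x* an exact stationary state of the forced system. The toy tendency below is a placeholder for the barotropic vorticity equation; this is only the generic construction, not the specific forcings used in the paper.

    import numpy as np

    def toy_tendency(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        return np.array([sigma * (x[1] - x[0]),
                         x[0] * (rho - x[2]) - x[1],
                         x[0] * x[1] - beta * x[2]])

    x_star = np.array([2.0, 3.0, 15.0])        # desired stationary ("blocking-like") state
    forcing = -toy_tendency(x_star)            # forcing that balances the unforced tendency

    # Check: the forced tendency vanishes at x_star.
    print(toy_tendency(x_star) + forcing)      # -> [0. 0. 0.]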

Full access
Jeffrey L. Anderson

Abstract

An objective criterion for identifying blocking events is applied to a ten-year climate run of the National Meteorological Center's Medium-Range Forecast Model (MRF) and to observations. The climatology of blocking in the ten-year run is somewhat realistic in the Northern Hemisphere, although a general lack of blocking is found when averaging over all longitudes and seasons. Previous studies have suggested that numerical models are incapable of producing realistic numbers of blocks; however, the ten-year model run is able to produce realistic numbers of blocks for selected geographic regions and seasons. In these regions, blocks are found to persist longer than observed blocking events. The ten-year run of the model is also able to reproduce the average longitudinal extent and motion of the observed blocks. These results suggest that the MRF is able to generate and maintain realistic blocks, but only at longitudes and seasons for which the underlying model climate is conducive. In the Southern Hemisphere, the ten-year run blocking climatology is considerably less realistic. The appearance of “transient” blocking events in the model distinguishes it from the Southern Hemisphere observations and from the Northern Hemisphere.

A set of 60-day forecasts by the MRF is used to evaluate the evolution of the model blocking climatology with lead time (blocking climate drift) for a 90-day period in autumn of 1990. Although the ten-year run and observed blocking climates are quite similar at most longitudes at this time of year, it is found that blocking almost entirely disappears from the model forecasts at lead times of approximately 10 days before reappearing at leads greater than 15 days. It is argued that this lack of a direct transition between the observed and model blocking climates is the result of a drift in the underlying climate (for example, the positions of the jet streams) in the MRF forecasts. If so, the climate drift of the MRF must be further reduced in order to produce more accurate medium-range forecasts of blocking events.

Full access
Jeffrey L. Anderson

Abstract

An extension to standard ensemble Kalman filter algorithms that can improve performance for non-Gaussian prior distributions, non-Gaussian likelihoods, and bounded state variables is described. The algorithm exploits the capability of the rank histogram filter (RHF) to represent arbitrary prior distributions for observed variables. The rank histogram algorithm can be applied directly to state variables to produce posterior marginal ensembles without the regression step that is part of standard ensemble filters. These marginals are used to adjust the marginals obtained from a standard ensemble filter that uses regression to update state variables. The final posterior ensemble is obtained by an ordered replacement of the posterior marginal ensemble values from a standard ensemble filter with the values obtained from the rank histogram method applied directly to state variables; the algorithm is referred to as the marginal adjustment rank histogram filter (MARHF). Applications to idealized bivariate problems and low-order dynamical systems show that the MARHF can produce better results than standard ensemble methods for priors that are non-Gaussian. Like the original RHF, the MARHF can also make use of arbitrary non-Gaussian observation likelihoods. The MARHF also has advantages for problems with bounded state variables, for instance, the concentration of an atmospheric tracer: bounds can be automatically respected in the posterior ensembles. With an efficient implementation, the additional cost of the MARHF scales better than that of the standard RHF.
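
The ordered-replacement step can be sketched directly: each ensemble member keeps the rank it holds in the regression-based posterior but takes its value from the sorted rank histogram marginal. The marginal values below are illustrative inputs; computing them with an actual RHF is not shown.

    import numpy as np

    def marginal_adjustment(standard_posterior, rhf_marginal_posterior):
        """Replace values in rank order, preserving the standard filter's ordering."""
        ranks = np.argsort(np.argsort(standard_posterior))   # 0-based rank of each member
        return np.sort(rhf_marginal_posterior)[ranks]

    # Toy example with a 5-member ensemble for one state variable.
    standard = np.array([1.2, 0.4, 2.0, 0.9, 1.5])           # regression-based posterior
    rhf_marginal = np.array([0.3, 0.8, 1.1, 1.9, 2.4])       # RHF marginal posterior
    adjusted = marginal_adjustment(standard, rhf_marginal)
    # Each member keeps its rank from `standard` but takes its value from `rhf_marginal`.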

Free access
Jeffrey L. Anderson

Abstract

A theory for estimating the probability distribution of the state of a model given a set of observations exists. This nonlinear filtering theory unifies the data assimilation and ensemble generation problems that have been key foci of prediction and predictability research for numerical weather and ocean prediction applications. A new algorithm, referred to as an ensemble adjustment Kalman filter, and the more traditional implementation of the ensemble Kalman filter in which “perturbed observations” are used, are derived as Monte Carlo approximations to the nonlinear filter. Both ensemble Kalman filter methods produce assimilations with small ensemble mean errors while providing reasonable measures of uncertainty in the assimilated variables. The ensemble methods can assimilate observations with a nonlinear relation to model state variables and can also use observations to estimate the value of imprecisely known model parameters. These ensemble filter methods are shown to have significant advantages over four-dimensional variational assimilation in low-order models and scale easily to much larger applications. Heuristic modifications to the filtering algorithms allow them to be applied efficiently to very large models by sequentially processing observations and computing the impact of each observation on each state variable in an independent calculation. The ensemble adjustment Kalman filter is applied to a nondivergent barotropic model on the sphere to demonstrate the capabilities of the filters in models with state spaces that are much larger than the ensemble size.

When observations are assimilated in the traditional ensemble Kalman filter, the resulting updated ensemble has a mean that is consistent with the value given by filtering theory, but only the expected value of the covariance of the updated ensemble is consistent with the theory. The ensemble adjustment Kalman filter computes a linear operator that is applied to the prior ensemble estimate of the state, resulting in an updated ensemble whose mean and covariance are both consistent with the theory. In the cases compared here, the ensemble adjustment Kalman filter performs significantly better than the traditional ensemble Kalman filter, apparently because the noise introduced into the assimilated ensemble through perturbed observations in the traditional filter limits its relative performance. This superior performance may not occur for all problems and is expected to be most notable for small ensembles. Still, the results suggest that careful study of the capabilities of different varieties of ensemble Kalman filters is appropriate when exploring new applications.
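
The contrast described above can be sketched for a single observed variable: the perturbed-observation update adds random noise to the observation for each member, so the posterior variance matches theory only on average, while the deterministic adjustment shifts and rescales the prior ensemble so the posterior mean and variance match the Kalman values exactly. The scalar forms below are illustrative simplifications, not the full multivariate algorithms.

    import numpy as np

    def perturbed_obs_enkf(prior, y_obs, obs_var, rng):
        prior_var = np.var(prior, ddof=1)
        gain = prior_var / (prior_var + obs_var)
        perturbed_y = y_obs + rng.normal(0.0, np.sqrt(obs_var), len(prior))
        return prior + gain * (perturbed_y - prior)

    def eakf(prior, y_obs, obs_var):
        prior_mean, prior_var = np.mean(prior), np.var(prior, ddof=1)
        post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
        post_mean = post_var * (prior_mean / prior_var + y_obs / obs_var)
        return post_mean + np.sqrt(post_var / prior_var) * (prior - prior_mean)

    rng = np.random.default_rng(2)
    prior = rng.normal(0.0, 1.0, 20)
    print(np.var(eakf(prior, 1.0, 0.5), ddof=1))                     # matches theory exactly
    print(np.var(perturbed_obs_enkf(prior, 1.0, 0.5, rng), ddof=1))  # matches only on average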

Full access
Jeffrey L. Anderson

Abstract

Ensemble Kalman filters are widely used for data assimilation in large geophysical models. Good results with affordable ensemble sizes require enhancements to the basic algorithms to deal with insufficient ensemble variance and spurious ensemble correlations between observations and state variables. These challenges are often addressed with inflation and localization algorithms. A new method for understanding and reducing some ensemble filter errors is introduced and tested. The method assumes that sampling error due to small ensemble size is the primary source of error. Sampling error in the ensemble correlations between observations and state variables is reduced by estimating the distribution of correlations as part of the ensemble filter algorithm. This correlation error reduction (CER) algorithm can produce high-quality ensemble assimilations in low-order models without using any a priori localization, such as a specified localization function. The method is also applied in an observing system simulation experiment with a very coarse resolution dry atmospheric general circulation model. This demonstrates that the algorithm provides insight into the need for localization in large geophysical applications, suggesting that sampling error may be a primary cause of that need in some cases.
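
The sketch below illustrates the general idea of accounting for sampling error in small-ensemble correlations: given a sample correlation and the ensemble size, a crude Monte Carlo estimate of the expected true correlation is formed under a uniform prior, and the ratio can be used to damp the regression. The uniform prior, the Gaussian kernel weighting, and the bandwidth are arbitrary choices for this sketch; it is a simplified illustration of the concept, not the CER algorithm itself.

    import numpy as np

    def expected_true_correlation(sample_corr, ens_size, n_mc=5000, bandwidth=0.05, seed=0):
        """Crude Monte Carlo estimate of the expected true correlation given a
        sample correlation from an ensemble of size ens_size (uniform prior)."""
        rng = np.random.default_rng(seed)
        true_corrs = rng.uniform(-1.0, 1.0, n_mc)       # prior over the true correlation
        weights = np.zeros(n_mc)
        for i, rho in enumerate(true_corrs):
            cov = np.array([[1.0, rho], [rho, 1.0]])
            sample = rng.multivariate_normal([0.0, 0.0], cov, ens_size)
            r = np.corrcoef(sample[:, 0], sample[:, 1])[0, 1]
            # Weight by how close the simulated sample correlation is to the observed one;
            # the kernel bandwidth is an arbitrary choice for this sketch.
            weights[i] = np.exp(-0.5 * ((r - sample_corr) / bandwidth) ** 2)
        return np.sum(weights * true_corrs) / np.sum(weights)

    # A noisy small-ensemble correlation is damped toward its expected true value;
    # the resulting factor would multiply the regression coefficient in the update.
    r_sample, n_ens = 0.4, 20
    damping = expected_true_correlation(r_sample, n_ens) / r_sample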

Full access