Search Results

1–10 of 49 items for Author or Editor: Craig H. Bishop
Craig H. Bishop

Abstract

Free-space Green's functions may be used to reconstruct the wind over a limited domain from vorticity, divergence, and the wind at the boundary of the domain. When standard finite-difference estimates of the vorticity/divergence field are used, the technique fails to accurately reconstruct the wind in regions where the vorticity or divergence changes markedly between grid boxes.

The standard finite-difference estimate of vorticity/divergence is accurate provided that the wind varies linearly with distance along the edges of grid boxes. An estimate of vorticity/divergence is derived that accounts for the fact that these requirements are not met when the vorticity/divergence field is locally heterogeneous. This improved estimate is named the G4 estimate. It is not derived from Taylor series.
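
For concreteness, the standard centred finite-difference estimate that the abstract contrasts with G4 can be sketched as follows (a generic illustration with assumed array conventions; the G4 estimate itself is not reproduced here):

```python
import numpy as np

def vorticity_divergence(u, v, dx, dy):
    """Standard centred finite-difference vorticity and divergence on the
    interior points of a regular grid (row index = y, column index = x)."""
    dvdx = (v[1:-1, 2:] - v[1:-1, :-2]) / (2.0 * dx)
    dudy = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2.0 * dy)
    dudx = (u[1:-1, 2:] - u[1:-1, :-2]) / (2.0 * dx)
    dvdy = (v[2:, 1:-1] - v[:-2, 1:-1]) / (2.0 * dy)
    return dvdx - dudy, dudx + dvdy  # (vorticity, divergence)
```

For solid-body rotation (u, v) = (-ωy, ωx) this recovers vorticity 2ω and zero divergence exactly; it is where the wind varies nonlinearly along grid-box edges that the estimate degrades, which is the failure mode the abstract describes.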

When the G4 estimate of vorticity/divergence is used to reconstruct the wind field, the magnitude of the reconstruction error is an order of magnitude smaller than the reconstruction error associated with the standard estimate of vorticity/divergence.

Craig H. Bishop

Abstract

Theories of frontogenesis and frontal waves describe development in terms of the interaction of a basic state or environmental flow with a frontal flow. The basic-state flow may comprise a large-scale confluent–diffluent deformation field and/or an alongfront temperature gradient. The frontal flow is seen as evolving as a result of its interaction with the environmental flow. Such theories make specific predictions about the effect of the basic-state flow on the frontal flow. To test these predictions, counterparts of the basic-state flows and frontal flows used in theoretical models must be extracted from atmospheric data. Here the concept of attribution is used to identify such counterparts.

In the present context, attribution refers to the process whereby a part of the wind field is attributed to a part of the vorticity or divergence field. It is mathematically equivalent to the process by which a part of a field of electric potential is associated with an element of total charge density in electrostatics.
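
As a concrete illustration of attribution, the nondivergent wind attributable to the vorticity inside a chosen region can be computed by direct summation over the 2D free-space Green's function, ψ = (1/2π) ∫ ζ ln r dA, so that u = -∂ψ/∂y and v = ∂ψ/∂x (a minimal sketch with assumed names; the paper treats the divergence part analogously):

```python
import numpy as np

def attributed_wind(zeta, mask, x, y, dA):
    """Nondivergent wind attributable to the vorticity inside `mask`, by
    direct summation of the 2D free-space Green's function on a grid."""
    zsrc = np.where(mask, zeta, 0.0)
    u = np.zeros_like(zeta)
    v = np.zeros_like(zeta)
    for j in range(zeta.shape[0]):
        for i in range(zeta.shape[1]):
            rx = x[j, i] - x
            ry = y[j, i] - y
            r2 = rx**2 + ry**2
            r2[j, i] = np.inf  # exclude the singular self-interaction
            u[j, i] = -np.sum(zsrc * ry / r2) * dA / (2.0 * np.pi)
            v[j, i] = np.sum(zsrc * rx / r2) * dA / (2.0 * np.pi)
    return u, v
```

The flow attributable to the "environment" is obtained by passing the complementary mask, so the two parts (together with the divergent and boundary contributions) sum to the full wind.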

The counterpart of the frontal flow used in idealized models is identified as that part of the flow attributable to the vorticity and divergence anomalies within the frontal region. The counterpart of the basic-state flow is identified as that part of the flow attributable to vorticity and divergence anomalies outside the frontal region.

Applications of the partitioning method are illustrated by diagnosing the flow associated with a North Atlantic front. The way in which the partitioning method may be used to test some theories concerning the effect of large-scale deformation on frontal wave formation is described. The partitioning method's ability to distinguish frontogenesis due to environmental flow from that due to frontal flow is also discussed. The analyzed front is found to lie at an angle to the dilatation axis of the environmental flow. It is argued that this feature must be common to all nonrotating finite-length fronts.

Sergey Frolov and Craig H. Bishop

Abstract

Hybrid error covariance models that blend climatological estimates of forecast error covariances with ensemble-based, flow-dependent forecast error covariances have led to significant reductions in forecast error when employed in 4DVAR data assimilation schemes. Tangent linear models (TLMs) designed to predict the differences between perturbed and unperturbed simulations of the weather forecast are a key component of such 4DVAR schemes. However, many forecasting centers have found that TLMs and their adjoints do not scale well computationally and are difficult to create and maintain—particularly for coupled ocean–wave–ice–atmosphere models. In this paper, the authors create ensemble-based TLMs (ETLMs) and test their ability to propagate both climatological and flow-dependent parts of hybrid error covariance models. These tests demonstrate that rank deficiency limits the utility of unlocalized ETLMs. High-rank, time-evolving, flow-adaptive localization functions are constructed and tested using recursive application of short-duration ETLMs, each of which is localized using a static localization. Since TLM operators do not need to be positive semidefinite, the authors experiment with a variety of localization approaches including step function localization. The step function localization leads to a local formulation that was found to be highly effective. In tests using simple one-dimensional models with both dispersive and nondispersive dynamics, it is shown that practical ETLM configurations were effective at propagating covariances as far as four error correlation scales.
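
The essence of an ETLM is to map a perturbation through the span of the ensemble: project it onto initial-time perturbations, then carry the projection coefficients forward with the evolved perturbations. The sketch below (assumed names; a simplification of the paper's recursive, localized construction) also shows a Schur-product localized variant:

```python
import numpy as np

def etlm_apply(Za, Zf, dx, L=None):
    """Ensemble-based TLM applied to a perturbation dx. Za (n x K):
    initial-time ensemble perturbations; Zf (n x K): their nonlinearly
    evolved counterparts; optional L (n x n): Schur-product localization
    applied to the implied covariances before forming the operator."""
    if L is None:
        # dx ~ Za c in the least-squares sense, then M dx ~ Zf c
        c, *_ = np.linalg.lstsq(Za, dx, rcond=None)
        return Zf @ c
    # Localized form: M = (L o Zf Za^T) (L o Za Za^T)^+
    return (L * (Zf @ Za.T)) @ np.linalg.pinv(L * (Za @ Za.T)) @ dx
```

With more members than independent directions and exactly linear dynamics, both forms reproduce the true TLM; the rank deficiency the abstract highlights appears when K is small relative to the state dimension, which is what the localization addresses.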

Craig H. Bishop and Daniel Hodyss

Abstract

An adaptive ensemble covariance localization technique, previously used in “local” forms of the ensemble Kalman filter, is extended to a global ensemble four-dimensional variational data assimilation (4D-VAR) scheme. The purely adaptive part of the localization matrix considered is given by the element-wise square of the correlation matrix of a smoothed ensemble of streamfunction perturbations. It is found that these purely adaptive localization functions have spurious far-field correlations as large as 0.1 with a 128-member ensemble. To attenuate the spurious features of the purely adaptive localization functions, the authors multiply the adaptive localization functions with very broadscale nonadaptive localization functions. Using the Navy’s operational ensemble forecasting system, it is shown that the covariance localization functions obtained by this approach adapt to spatially anisotropic aspects of the flow, move with the flow, and are free of far-field spurious correlations. The scheme is made computationally feasible by (i) a method for inexpensively generating the square root of an adaptively localized global four-dimensional error covariance model in terms of products or modulations of smoothed ensemble perturbations with themselves and with raw ensemble perturbations, and (ii) utilizing algorithms that speed ensemble covariance localization when localization functions are separable, variable-type independent, and/or large scale. In spite of the apparently useful characteristics of adaptive localization, single analysis/forecast experiments assimilating 583 200 observations over both 6- and 12-h data assimilation windows failed to identify any significant difference in the quality of the analyses and forecasts obtained using nonadaptive localization from that obtained using adaptive localization.
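
The purely adaptive part of the localization described above can be written in a few lines: the element-wise square of the correlation matrix of smoothed ensemble perturbations, attenuated in the far field by a broadscale nonadaptive localization via a Schur product (a sketch with assumed names):

```python
import numpy as np

def adaptive_localization(Z_smooth, L_broad):
    """Adaptive localization matrix: element-wise square of the correlation
    matrix of smoothed ensemble perturbations (rows = grid points, columns
    = members), Schur-multiplied by a broadscale nonadaptive localization."""
    C = np.corrcoef(Z_smooth)   # n x n correlations across grid points
    return (C ** 2) * L_broad   # squaring removes sign; L_broad kills far field
```

The element-wise square guarantees nonnegative localization weights that move and stretch with the flow, while the broad factor suppresses exactly the spurious far-field correlations (of order 0.1 for 128 members) noted in the abstract.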

Craig H. Bishop and Zoltan Toth

Abstract

Suppose that the geographical and temporal resolution of the observational network could be changed on a daily basis. Of all the possible deployments of observational resources, which particular deployment would minimize expected forecast error? The ensemble transform technique answers such questions by using nonlinear ensemble forecasts to rapidly construct ensemble-based approximations to the prediction error covariance matrices associated with a wide range of different possible deployments of observational resources. From these matrices, estimates of the expected forecast error associated with each distinct deployment of observational resources are obtained. The deployment that minimizes the chosen measure of forecast error is deemed optimal.

The technique may also be used to find the perturbation that evolves into the leading eigenvector or singular vector of an ensemble-based prediction error covariance matrix. This time-evolving perturbation “explains” more of the ensemble-based prediction error variance than any other perturbation. It may be interpreted as the fastest growing perturbation on the subspace of ensemble perturbations.

The ensemble-based approximations to the prediction error covariance matrices are constructed from transformation matrices derived from estimates of the analysis error covariance matrices associated with each possible deployment of observational resources. The authors prove that the ensemble transform technique would precisely recover the prediction error covariance matrices associated with each possible deployment of observational resources provided that (i) estimates of the analysis error covariance matrix were precise, (ii) the ensemble perturbations spanned the vector space of all possible perturbations, and (iii) the evolution of errors were linear and perfectly modeled. In the absence of such precise information, the ensemble transform technique links available information on analysis error covariances associated with different observational networks with error growth estimates contained in the ensemble forecast to estimate the optimal configuration of an adaptive observational network. Tests of the technique will be presented in subsequent publications. Here, the objective is to describe the theoretical basis of the technique and illustrate it with an example from the Fronts and Atlantic Storm Tracks Experiment (FASTEX).
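
A Kalman-filter-flavoured sketch of the ensemble transform idea (not the paper's exact algebra; the names and the transform construction are assumptions): for each candidate deployment, a transform built in ensemble space from that deployment's observation operator and observation-error precision yields the expected verification-time error variance.

```python
import numpy as np

def expected_forecast_error(Zv, Za, H, Rinv):
    """Zv (m x K): verification-time ensemble perturbations; Za (n x K):
    analysis-time perturbations; H: observation operator for a candidate
    deployment; Rinv: observation-error precision. Returns the expected
    forecast error variance after assimilating that deployment's obs."""
    K = Za.shape[1]
    S = (H @ Za) / np.sqrt(K - 1)            # observation-space signal
    w, C = np.linalg.eigh(S.T @ Rinv @ S)    # K x K ensemble-space info
    T = C @ np.diag(1.0 / np.sqrt(1.0 + w))  # transformation matrix
    Zt = (Zv / np.sqrt(K - 1)) @ T           # transformed verif. perturbations
    return np.trace(Zt @ Zt.T)               # total expected error variance
```

Evaluating this over all candidate (H, Rinv) pairs and picking the minimizer is the deployment-selection step described in the first paragraph of the abstract.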

Xuguang Wang and Craig H. Bishop

Abstract

The ensemble transform Kalman filter (ETKF) ensemble forecast scheme is introduced and compared with both a simple and a masked breeding scheme. Instead of directly multiplying each forecast perturbation by a constant or regional rescaling factor as in the simple form of breeding and the masked breeding schemes, the ETKF transforms forecast perturbations into analysis perturbations by multiplying by a transformation matrix. This matrix is chosen to ensure that the ensemble-based analysis error covariance matrix would be equal to the true analysis error covariance if the covariance matrix of the raw forecast perturbations were equal to the true forecast error covariance matrix and the data assimilation scheme were optimal. For small ensembles (∼100), the computational expense of the ETKF ensemble generation is only slightly greater than that of the masked breeding scheme.
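
The transformation matrix can be sketched as follows, assuming a linear observation operator H and observation-error precision Rinv (generic ETKF algebra; the names are assumptions). When the forecast ensemble covariance equals the true forecast error covariance and the assimilation is optimal, the transformed perturbations reproduce the Kalman analysis error covariance:

```python
import numpy as np

def etkf_transform(Zf, H, Rinv):
    """ETKF analysis perturbations from forecast perturbations Zf (n x K,
    ensemble mean removed): Za = Zf T with T = C (I + Gamma)^(-1/2) C^T,
    where C, Gamma eigen-decompose the ensemble-space information matrix."""
    K = Zf.shape[1]
    S = (H @ Zf) / np.sqrt(K - 1)
    w, C = np.linalg.eigh(S.T @ Rinv @ S)
    T = C @ np.diag(1.0 / np.sqrt(1.0 + w)) @ C.T
    return Zf @ T
```

By a push-through (Sherman–Morrison–Woodbury) identity, Za Za^T/(K-1) then equals (Pf^(-1) + H^T R^(-1) H)^(-1) whenever Pf = Zf Zf^T/(K-1) is full rank, which is the consistency property the abstract states.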

Version 3 of the Community Climate Model (CCM3) developed at the National Center for Atmospheric Research (NCAR) is used to test and compare these ensemble generation schemes. The NCEP–NCAR reanalysis data for the boreal summer in 2000 are used for the initialization of the control forecast and the verification of the ensemble forecasts. The ETKF and masked breeding ensemble variances at the analysis time show reasonable correspondences between variance and observational density. Examination of eigenvalue spectra of ensemble covariance matrices demonstrates that while the ETKF maintains comparable amounts of variance in all orthogonal and uncorrelated directions spanning its ensemble perturbation subspace, both breeding techniques maintain variance in few directions. The growth of the linear combination of ensemble perturbations that maximizes energy growth is computed for each of the ensemble subspaces. The ETKF maximal amplification is found to significantly exceed that of the breeding techniques. The ETKF ensemble mean has lower root-mean-square errors than the mean of the breeding ensemble. New methods to measure the precision of the ensemble-estimated forecast error variance are presented. All of the methods indicate that the ETKF estimates of forecast error variance are considerably more accurate than those of the breeding techniques.

Derek J. Posselt and Craig H. Bishop

Abstract

This paper explores the temporal evolution of cloud microphysical parameter uncertainty using an idealized 1D model of deep convection. Model parameter uncertainty is quantified using a Markov chain Monte Carlo (MCMC) algorithm. A new form of the ensemble transform Kalman smoother (ETKS) appropriate for the case where the number of ensemble members exceeds the number of observations is then used to obtain estimates of model uncertainty associated with variability in model physics parameters. Robustness of the parameter estimates and ensemble parameter distributions derived from ETKS is assessed via comparison with MCMC. Nonlinearity in the relationship between parameters and model output gives rise to a non-Gaussian posterior probability distribution for the parameters that exhibits skewness early and multimodality late in the simulation. The transition from unimodal to multimodal posterior probability density function (PDF) reflects the transition from convective to stratiform rainfall. ETKS-based estimates of the posterior mean are shown to be robust, as long as the posterior PDF has a single mode. Once multimodality manifests in the solution, the MCMC posterior parameter means and variances differ markedly from those from the ETKS. However, it is also shown that if the ETKS is given a multimodal prior ensemble, multimodality is preserved in the ETKS posterior analysis. These results suggest that the primary limitation of the ETKS is not the inability to deal with multimodal, non-Gaussian priors. Rather it is the inability of the ETKS to represent posterior perturbations as nonlinear functions of prior perturbations that causes the most profound difference between MCMC posterior PDFs and ETKS posterior PDFs.
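
For readers unfamiliar with MCMC, the benchmark used here reduces, in its simplest form, to a random-walk Metropolis sampler of the parameter posterior (a generic one-parameter sketch, not the paper's configuration):

```python
import numpy as np

def metropolis(log_post, theta0, n_iter, step, rng):
    """Random-walk Metropolis: propose theta' = theta + step * N(0, 1),
    accept with probability min(1, post(theta') / post(theta))."""
    chain = np.empty(n_iter)
    theta, lp = theta0, log_post(theta0)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept
            theta, lp = prop, lp_prop
        chain[i] = theta                          # else keep current state
    return chain
```

Unlike the ETKS, the sampler places no Gaussian or linearity assumptions on the posterior, which is why it serves as the reference in the multimodal regimes discussed above.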

Craig H. Bishop and Alan J. Thorpe

Abstract

In this paper, the role of horizontal deformation and the associated frontogenetic ageostrophic circulation in suppressing the development of nonlinear waves is assessed. Unless linear barotropic frontal waves can become nonlinear, the associated horizontal transports of momentum will not be sufficient to halt frontogenesis or to create nonlinear mixing processes such as vortex roll-up. The analysis of Dritschel et al. suggests that such nonlinear phenomena will not occur if the wave slope remains small. For the linear model described in Part I, a simple relationship between optimal wave slope amplification over a specified time period and the amplification of an initially isolated edge wave is found. Using this relationship, the mechanisms by which strain affects the dependence of optimal wave slope amplification on wavelength and the time of entry of disturbances to the front are investigated. It is found that waves entering the frontal zone when it is intense can experience greater steepening than those appearing earlier in the development of the front. The most rapidly growing waves enter the front with a wavelength about three times the width of the front. As the front collapses, the ratio of wavelength to frontal width rapidly increases. For strain rates greater than 0.6 × 10⁻⁵ s⁻¹, the model predicts that wave slope amplification greater than a factor of e is impossible.

The variation of optimal growth with wavenumber and the time of entry of disturbances to the front is explained using diagnostics based on a mathematical model of Bretherton's qualitative description of wave growth in terms of the interaction of counterpropagating edge waves. These diagnostics yield a simple formula for the frontogenesis rate required to completely eliminate wave steepening. For the front considered in Part I, the formula predicts that no amplification is possible for strain rates greater than one-quarter of the Coriolis parameter. Diagnostics of this sort may aid attempts to predict, from the large-scale forcing, the minimum attainable cross-frontal scale of a front.

Elizabeth A. Satterfield and Craig H. Bishop

Abstract

Ensemble variances provide a prediction of the flow-dependent error variance of the ensemble mean or, possibly, a high-resolution forecast. However, small ensemble size, unaccounted for model error, and imperfections in ensemble generation schemes cause the predictions of error variance to be imperfect. In previous work, the authors developed an analytic approximation to the posterior distribution of true error variances, given an imperfect ensemble prediction, based on parameters recovered from long archives of innovation and ensemble variance pairs. This paper shows how heteroscedastic postprocessing enables climatological information to be blended with ensemble forecast information when information about the distribution of true error variances given an ensemble sample variance is available. A hierarchy of postprocessing methods is described, each graded on the amount of information about the posterior distribution of error variances used in the postprocessing. These methods are used to assess the value of knowledge of the mean and variance of the posterior distribution of error variances to ensemble postprocessing and explore sensitivity to various parameter regimes. Testing was performed using both synthetic data and operational ensemble forecasts of a Gaussian-distributed variable, to provide a proof-of-concept demonstration in a semi-idealized framework. Rank frequency histograms, weather roulette, continuous ranked probability score, and spread-skill diagrams are used to quantify the value of information about the posterior distribution of error variances. It is found that ensemble postprocessing schemes that utilize the full distribution of error variances given the ensemble sample variance outperform those that do not.
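
The blending idea can be illustrated with a deliberately simplified stand-in for the paper's analytic posterior (assumed names; linear regression replaces the full distributional treatment): archived squared innovations have expectation equal to the true error variance, so regressing them on archived ensemble variances yields a climatological intercept plus a flow-dependent slope.

```python
import numpy as np

def blended_variance(s2, innov_archive, s2_archive):
    """Postprocessed error variance for a new ensemble sample variance s2:
    regress archived squared innovations on archived ensemble variances,
    then evaluate the fit at s2. The intercept plays the role of the
    climatological part, slope * s2 the flow-dependent part."""
    y = innov_archive ** 2
    A = np.column_stack([np.ones_like(s2_archive), s2_archive])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a + b * s2
```

This captures only the posterior mean of the error variance; the paper's point is that schemes using the full posterior distribution of error variances outperform mean-only blends such as this one.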
