Search Results

You are looking at 1 - 10 of 51 items for

  • Author or Editor: Craig H. Bishop
Craig H. Bishop

Abstract

Free-space Green's functions may be used to reconstruct the wind over a limited domain from vorticity, divergence, and the wind at the boundary of the domain. When standard finite-difference estimates of the vorticity/divergence field are used, the technique fails to accurately reconstruct the wind in regions where the vorticity or divergence changes markedly between grid boxes.

The standard finite-difference estimate of vorticity/divergence is accurate provided that the wind varies linearly with distance along the edges of grid boxes. An estimate of vorticity/divergence is derived that accounts for the fact that this condition is not met when the vorticity/divergence field is locally heterogeneous. This improved estimate is named the G4 estimate. It is not derived from a Taylor series.
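The standard centered-difference estimate referred to above can be sketched as follows. This is a generic illustration, not the paper's G4 scheme; the array layout and the use of `np.gradient` as the stencil are assumptions.

```python
import numpy as np

def vorticity_divergence(u, v, dx, dy):
    """Standard centered finite-difference vorticity and divergence.

    Exact only when the wind varies linearly along grid-box edges,
    which is the limitation the G4 estimate is designed to remove.
    Axis 0 is y, axis 1 is x (an assumed layout).
    """
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    dudx = np.gradient(u, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    return dvdx - dudy, dudx + dvdy
```

For a linearly varying wind (e.g., u = 2y, v = 3x) this estimate is exact (vorticity 1, divergence 0 everywhere); errors appear precisely where the vorticity or divergence changes markedly between grid boxes.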

When the G4 estimate of vorticity/divergence is used to reconstruct the wind field, the reconstruction error is an order of magnitude smaller than that associated with the standard estimate of vorticity/divergence.

Full access
Craig H. Bishop

Abstract

Theories of frontogenesis and frontal waves describe development in terms of the interaction of a basic state or environmental flow with a frontal flow. The basic-state flow may comprise a large-scale confluent–diffluent deformation field and/or an alongfront temperature gradient. The frontal flow is seen as evolving as a result of its interaction with the environmental flow. Such theories make specific predictions about the effect of the basic-state flow on the frontal flow. To test these predictions, counterparts of the basic-state flows and frontal flows used in theoretical models must be extracted from atmospheric data. Here the concept of attribution is used to identify such counterparts.

In the present context, attribution refers to the process whereby a part of the wind field is attributed to a part of the vorticity or divergence field. It is mathematically equivalent to the process by which a part of a field of electric potential is associated with an element of total charge density in electrostatics.
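Because the attribution integral has the same form as the electrostatic potential of a charge distribution, it can be illustrated with a direct free-space Green's-function sum, G(r) = ln(r)/(2π) in two dimensions. A rough sketch only: the self-point regularization and grid conventions are ad hoc assumptions, not the paper's scheme. Passing only the vorticity anomalies inside the frontal region (zeros elsewhere) returns the part of the streamfunction, and hence of the nondivergent wind, attributable to the front.

```python
import numpy as np

def attributed_streamfunction(vort, x, y, dx, dy):
    """Streamfunction attributable to the supplied vorticity elements,
    summed via the 2D free-space Green's function G(r) = ln(r)/(2*pi).
    The half-cell self-point regularization is an ad hoc assumption."""
    psi = np.zeros(vort.shape, dtype=float)
    area = dx * dy
    for j, i in zip(*np.nonzero(vort)):
        r = np.hypot(x - x[j, i], y - y[j, i])
        r[j, i] = 0.5 * np.sqrt(area)  # regularize the self-point
        psi += vort[j, i] * area * np.log(r) / (2.0 * np.pi)
    return psi
```

As a sanity check, a single vorticity element of circulation Γ induces a tangential wind close to Γ/(2πr) away from the element, the point-vortex result.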

The counterpart of the frontal flow used in idealized models is identified as that part of the flow attributable to the vorticity and divergence anomalies within the frontal region. The counterpart of the basic-state flow is identified as that part of the flow attributable to vorticity and divergence anomalies outside the frontal region.

Applications of the partitioning method are illustrated by diagnosing the flow associated with a North Atlantic front. The way in which the partitioning method may be used to test some theories concerning the effect of large-scale deformation on frontal wave formation is described. The partitioning method's ability to distinguish frontogenesis due to environmental flow from that due to frontal flow is also discussed. The analyzed front is found to lie at an angle to the dilatation axis of the environmental flow. It is argued that this feature must be common to all nonrotating finite-length fronts.

Full access
Brian J. Etherton and Craig H. Bishop

Abstract

Previous idealized numerical experiments have shown that a straightforward augmentation of an isotropic error correlation matrix with an ensemble-based error correlation matrix yields an improved data assimilation scheme under certain conditions. Those conditions are (a) the forecast model is perfect and (b) the ensemble accurately samples the probability distribution function of forecast errors. Such schemes blend characteristics of ensemble Kalman filter analysis schemes with three-dimensional variational data assimilation (3DVAR) analysis schemes and are called hybrid schemes. Here, we test the robustness of hybrid schemes to model error and ensemble inaccuracy in the context of a numerically simulated two-dimensional turbulent flow. The turbulence is produced by a doubly periodic barotropic vorticity equation model that is constantly relaxing to a barotropically unstable state. The types of forecast models considered include a perfect model, a model with a resolution error, and a model with a parameterization error. The ensemble generation schemes considered include the breeding scheme, the singular vector scheme, the perturbed observations system simulation scheme, a gridpoint noise scheme, and a scheme based on the ensemble transform Kalman filter (ETKF). For all combinations examined, it is found that the hybrid schemes outperform the 3DVAR scheme. In the presence of model error a perturbed observations hybrid and a singular vector hybrid perform best, though the ETKF ensemble is competitive.
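The covariance blend at the heart of such hybrid schemes can be sketched in a few lines. The weight `alpha` and the matrix shapes below are illustrative assumptions, not values from the experiments.

```python
import numpy as np

def hybrid_covariance(ens_perts, b_iso, alpha):
    """Hybrid background error covariance: a weighted sum of a static
    isotropic covariance and an ensemble-based sample covariance.

    ens_perts : (n, K) array of K mean-removed ensemble perturbations.
    b_iso     : (n, n) isotropic (static) error covariance matrix.
    alpha     : weight on the ensemble part (a free tuning parameter).
    """
    k = ens_perts.shape[1]
    p_ens = ens_perts @ ens_perts.T / (k - 1)  # ensemble sample covariance
    return (1.0 - alpha) * b_iso + alpha * p_ens
```

Setting alpha = 0 recovers a 3DVAR-like static covariance, alpha = 1 a pure ensemble covariance; the hybrid schemes tested above sit in between.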

Full access
Craig H. Bishop and Alan J. Thorpe

Abstract

It has been shown that lower tropospheric potential vorticity zones formed during moist deformation frontogenesis will support growing waves if at some time the frontogenesis ceases. In this paper, the ways in which these waves are affected by the frontogenetic process are identified.

Observations show that fronts in the eastern Atlantic commonly feature saturated ascent regions characterized by zero moist potential vorticity. Furthermore, in many cases the horizontal temperature gradient in the lowest one to two kilometers of the atmosphere is rather weak. These features are incorporated in an analytical archetype. The dynamical implications of saturated ascent in conditions of zero moist potential vorticity are represented in the model by assuming that adiabatic temperature changes are precisely balanced by diabatic tendencies. The observed small temperature gradient at low levels is represented in the model by taking it to be zero in the lowest two kilometers. Consequently, the forcing of the low-level moist ageostrophic vortex stretching that strengthens the low-level potential vorticity anomaly is confined to middle and upper levels.

A semianalytical initial value solution for the linear development of waves on the evolving low-level potential vorticity anomaly is obtained. The waves approximately satisfy the inviscid primitive equations whenever the divergent part of the perturbation is negligible relative to the rotational part. The range of nonmodal wave developments supported by the front is summarized using RT phase diagrams. This analysis shows that the most dramatic effects of frontogenesis on frontal wave growth are due to (a) the increase in time of the potential vorticity and hence potential instability of the flow and (b) the increase in time of the alongfront wavelength relative to the width of the strip. An optimally growing streamfunction wave is described. Finally, a diagnostic technique suitable for identifying small amplitude frontal waves in observational data is described.

Full access
Craig H. Bishop and Alan J. Thorpe

Abstract

In this paper, the role of horizontal deformation and the associated frontogenetic ageostrophic circulation in suppressing the development of nonlinear waves is assessed. Unless linear barotropic frontal waves can become nonlinear, the associated horizontal transports of momentum will not be sufficient to halt frontogenesis or to create nonlinear mixing processes such as vortex roll-up. The analysis of Dritschel et al. suggests that such nonlinear phenomena will not occur if the wave slope remains small. For the linear model described in Part I, a simple relationship between optimal wave slope amplification over a specified time period and the amplification of an initially isolated edge wave is found. Using this relationship, the mechanisms by which strain affects the dependence of optimal wave slope amplification on wavelength and the time of entry of disturbances to the front are investigated. It is found that waves entering the frontal zone when it is intense can experience greater steepening than those appearing earlier in the development of the front. The most rapidly growing waves enter the front with a wavelength about three times the width of the front. As the front collapses, the ratio of wavelength to frontal width rapidly increases. For strain rates greater than 0.6 × 10⁻⁵ s⁻¹, the model predicts that wave slope amplification greater than a factor of e is impossible.

The variation of optimal growth with wavenumber and the time of entry of disturbances to the front is explained using diagnostics based on a mathematical model of Bretherton's qualitative description of wave growth in terms of the interaction of counterpropagating edge waves. These diagnostics yield a simple formula for the frontogenesis rate required to completely eliminate wave steepening. For the front considered in Part I, the formula predicts that no amplification is possible for strain rates greater than one-quarter of the Coriolis parameter. Diagnostics of this sort may aid attempts to predict, from the large-scale forcing, the minimum attainable cross-frontal scale of a front.

Full access
Craig H. Bishop and Zoltan Toth

Abstract

Suppose that the geographical and temporal resolution of the observational network could be changed on a daily basis. Of all the possible deployments of observational resources, which particular deployment would minimize expected forecast error? The ensemble transform technique answers such questions by using nonlinear ensemble forecasts to rapidly construct ensemble-based approximations to the prediction error covariance matrices associated with a wide range of different possible deployments of observational resources. From these matrices, estimates of the expected forecast error associated with each distinct deployment of observational resources are obtained. The deployment that minimizes the chosen measure of forecast error is deemed optimal.

The technique may also be used to find the perturbation that evolves into the leading eigenvector or singular vector of an ensemble-based prediction error covariance matrix. This time-evolving perturbation “explains” more of the ensemble-based prediction error variance than any other perturbation. It may be interpreted as the fastest growing perturbation on the subspace of ensemble perturbations.
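The leading-eigenvector calculation described above is cheap because it can be done in the K-dimensional ensemble subspace rather than the full state space: Z Zᵀ/(K−1) and ZᵀZ/(K−1) share their nonzero eigenvalues. A minimal sketch under that standard linear-algebra fact (variable names are assumptions):

```python
import numpy as np

def leading_ensemble_direction(z):
    """Leading eigenvector of the ensemble-based error covariance
    P = Z Z^T / (K-1), computed in the K-dimensional ensemble subspace.

    z : (n, K) matrix of ensemble perturbations.
    Returns the unit-norm leading direction and its eigenvalue; this is
    the direction explaining the most ensemble-based error variance.
    """
    k = z.shape[1]
    s = z.T @ z / (k - 1)          # K x K; same nonzero spectrum as P
    vals, vecs = np.linalg.eigh(s)
    c = vecs[:, -1]                # coefficients of the leading combination
    v = z @ c                      # lift back to state space
    return v / np.linalg.norm(v), vals[-1]
```

The returned coefficient combination of ensemble perturbations is the subspace analogue of the fastest-growing perturbation discussed above.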

The ensemble-based approximations to the prediction error covariance matrices are constructed from transformation matrices derived from estimates of the analysis error covariance matrices associated with each possible deployment of observational resources. The authors prove that the ensemble transform technique would precisely recover the prediction error covariance matrices associated with each possible deployment of observational resources provided that (i) estimates of the analysis error covariance matrix were precise, (ii) the ensemble perturbations spanned the vector space of all possible perturbations, and (iii) the evolution of errors were linear and perfectly modeled. In the absence of such precise information, the ensemble transform technique links available information on analysis error covariances associated with different observational networks with error growth estimates contained in the ensemble forecast to estimate the optimal configuration of an adaptive observational network. Tests of the technique will be presented in subsequent publications. Here, the objective is to describe the theoretical basis of the technique and illustrate it with an example from the Fronts and Atlantic Storm Tracks Experiment (FASTEX).

Full access
Xuguang Wang and Craig H. Bishop

Abstract

The ensemble transform Kalman filter (ETKF) ensemble forecast scheme is introduced and compared with both a simple and a masked breeding scheme. Instead of directly multiplying each forecast perturbation by a constant or regional rescaling factor as in the simple form of breeding and the masked breeding schemes, the ETKF transforms forecast perturbations into analysis perturbations by multiplying by a transformation matrix. This matrix is chosen to ensure that the ensemble-based analysis error covariance matrix would be equal to the true analysis error covariance if the covariance matrix of the raw forecast perturbations were equal to the true forecast error covariance matrix and the data assimilation scheme were optimal. For small ensembles (∼100), the computational expense of the ETKF ensemble generation is only slightly greater than that of the masked breeding scheme.
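The ETKF transformation can be written T = C(Γ + I)^(−1/2), where the columns of C and entries of Γ are the eigenvectors and eigenvalues of SᵀS, with S = R^(−1/2) H X_f/√(K−1). A minimal sketch, assuming a diagonal observation error covariance R and a linear observation operator H; operational implementations differ in detail.

```python
import numpy as np

def etkf_transform(x_f, h, r_diag):
    """Map forecast perturbations to analysis perturbations, X_a = X_f T.

    x_f    : (n, K) forecast perturbations.
    h      : (p, n) linear observation operator.
    r_diag : (p,) diagonal of the observation error covariance R.
    """
    k = x_f.shape[1]
    s = (h @ x_f) / np.sqrt(r_diag)[:, None] / np.sqrt(k - 1)
    gamma, c = np.linalg.eigh(s.T @ s)
    t = c / np.sqrt(gamma + 1.0)   # columns scaled: C (Gamma + I)^(-1/2)
    return x_f @ t
```

With this choice, X_a X_aᵀ/(K−1) equals the Kalman filter analysis covariance computed from the ensemble forecast covariance, which is the optimality property described above.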

Version 3 of the Community Climate Model (CCM3) developed at the National Center for Atmospheric Research (NCAR) is used to test and compare these ensemble generation schemes. The NCEP–NCAR reanalysis data for the boreal summer in 2000 are used for the initialization of the control forecast and the verification of the ensemble forecasts. The ETKF and masked breeding ensemble variances at the analysis time show reasonable correspondences between variance and observational density. Examination of eigenvalue spectra of ensemble covariance matrices demonstrates that while the ETKF maintains comparable amounts of variance in all orthogonal and uncorrelated directions spanning its ensemble perturbation subspace, both breeding techniques maintain variance in only a few directions. The growth of the linear combination of ensemble perturbations that maximizes energy growth is computed for each of the ensemble subspaces. The ETKF maximal amplification is found to significantly exceed that of the breeding techniques. The ETKF ensemble mean has lower root-mean-square errors than the mean of the breeding ensemble. New methods to measure the precision of the ensemble-estimated forecast error variance are presented. All of the methods indicate that the ETKF estimates of forecast error variance are considerably more accurate than those of the breeding techniques.

Full access
Craig H. Bishop and Kevin T. Shanley

Abstract

Methods of ensemble postprocessing in which continuous probability density functions are constructed from ensemble forecasts by centering functions around each of the ensemble members have come to be called Bayesian model averaging (BMA) or “dressing” methods. Here idealized ensemble forecasting experiments are used to show that these methods are liable to produce systematically unreliable probability forecasts of climatologically extreme weather. It is argued that the failure of these methods is linked to an assumption that the distribution of truth given the forecast can be sampled by adding stochastic perturbations to state estimates, even when these state estimates have a realistic climate. It is shown that this assumption is incorrect, and it is argued that such dressing techniques better describe the likelihood distribution of historical ensemble-mean forecasts given the truth for certain values of the truth. This paradigm shift leads to an approach that incorporates prior climatological information into BMA ensemble postprocessing through Bayes’s theorem. This new approach is shown to cure BMA’s ill treatment of extreme weather by providing a posterior BMA distribution whose probabilistic forecasts are reliable for both extreme and nonextreme weather forecasts.
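The dressing construction itself is simple: the predictive density is a mixture of kernels centered on the ensemble members. A minimal sketch with equal weights and a single Gaussian kernel width (a simplification; BMA in practice fits member weights and widths from training data), using only the standard library:

```python
import math

def bma_exceedance_prob(members, sigma, threshold):
    """Probability that the verification exceeds `threshold` under a
    dressed/BMA-style forecast: an equal-weight mixture of Gaussians,
    one centered on each ensemble member, each with width `sigma`.
    Equal weights and a common sigma are simplifying assumptions."""
    def gauss_sf(x, mu, s):
        # Gaussian survival function P(X > x) via the complementary erf
        return 0.5 * math.erfc((x - mu) / (s * math.sqrt(2.0)))
    return sum(gauss_sf(threshold, m, sigma) for m in members) / len(members)
```

The failure mode described above concerns exactly such exceedance probabilities when `threshold` lies in the climatological extremes, where centering kernels on realistic-climate state estimates yields systematically unreliable probabilities.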

Full access
Craig H. Bishop and Elizabeth A. Satterfield

Abstract

A conundrum of predictability research is that while the prediction of flow-dependent error distributions is one of its main foci, chaos fundamentally hides flow-dependent forecast error distributions from empirical observation. Empirical estimation of such error distributions requires a large sample of error realizations given the same flow-dependent conditions. However, chaotic elements of the flow and the observing network make it impossible to collect a large enough conditioned error sample to empirically define such distributions and their variance. Such conditional variances are “hidden.” Here, an exposition of the problem is developed from an ensemble Kalman filter data assimilation system applied to a 10-variable nonlinear chaotic model and 25 000 replicate models. The 25 000 replicates reveal the error variances that would otherwise be hidden. It is found that the inverse-gamma distribution accurately approximates the posterior distribution of conditional error variances given an imperfect ensemble variance and provides a reasonable approximation to the prior climatological distribution of conditional error variances. A new analytical model shows how the properties of a likelihood distribution of ensemble variances given a true conditional error variance determine the posterior distribution of error variances given an ensemble variance. The analytically generated distributions are shown to satisfactorily fit empirically determined distributions. The theoretical analysis yields a rigorous interpretation and justification of hybrid error variance models that linearly combine static and flow-dependent estimates of forecast error variance; in doing so, it also helps justify and inform hybrid error covariance models.
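The link between the inverse-gamma posterior and hybrid variance models can be made concrete with a conjugate toy calculation. Assuming (as an illustration, not the paper's exact likelihood) an inverse-gamma prior InvGamma(a, b) on the hidden error variance and an ensemble variance distributed as s² | σ² ~ σ²·χ²ₘ/m, the posterior is InvGamma(a + m/2, b + m·s²/2), and its mean is a linear blend of the ensemble variance and the climatological (prior) mean:

```python
def posterior_error_variance(s2, a, b, m):
    """Posterior mean of the hidden error variance given ensemble
    variance s2, under an InvGamma(a, b) prior and a scaled-chi-square
    likelihood with m degrees of freedom (illustrative assumptions).

    Returns (posterior mean, weight on s2, climatological prior mean);
    posterior mean = w * s2 + (1 - w) * clim_mean, the 'hybrid' form.
    """
    post_mean = (b + 0.5 * m * s2) / (a + 0.5 * m - 1.0)
    w = 0.5 * m / (a + 0.5 * m - 1.0)   # weight on the flow-dependent part
    clim_mean = b / (a - 1.0)            # prior (static) variance estimate
    return post_mean, w, clim_mean
```

The weight on the flow-dependent estimate grows with the effective degrees of freedom m, mirroring the justification of hybrid error variance models given above.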

Full access
Elizabeth A. Satterfield and Craig H. Bishop

Abstract

Ensemble variances provide a prediction of the flow-dependent error variance of the ensemble mean or, possibly, a high-resolution forecast. However, small ensemble size, unaccounted for model error, and imperfections in ensemble generation schemes cause the predictions of error variance to be imperfect. In previous work, the authors developed an analytic approximation to the posterior distribution of true error variances, given an imperfect ensemble prediction, based on parameters recovered from long archives of innovation and ensemble variance pairs. This paper shows how heteroscedastic postprocessing enables climatological information to be blended with ensemble forecast information when information about the distribution of true error variances given an ensemble sample variance is available. A hierarchy of postprocessing methods is described, each graded on the amount of information about the posterior distribution of error variances used in the postprocessing. These methods are used to assess the value of knowledge of the mean and variance of the posterior distribution of error variances to ensemble postprocessing and explore sensitivity to various parameter regimes. Testing was performed using both synthetic data and operational ensemble forecasts of a Gaussian-distributed variable, to provide a proof-of-concept demonstration in a semi-idealized framework. Rank frequency histograms, weather roulette, continuous ranked probability score, and spread-skill diagrams are used to quantify the value of information about the posterior distribution of error variances. It is found that ensemble postprocessing schemes that utilize the full distribution of error variances given the ensemble sample variance outperform those that do not.
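Of the verification tools listed above, the rank frequency histogram is the simplest to state: count, over many cases, the rank of the verifying observation within the sorted ensemble; a flat histogram indicates reliable spread. A minimal sketch (ties between members and observation are ignored, an assumption):

```python
import numpy as np

def rank_histogram(ens, obs):
    """Rank frequency histogram.

    ens : (n_cases, K) array of ensemble forecasts.
    obs : (n_cases,) verifying observations.
    Returns counts over the K + 1 possible ranks (0..K), where the rank
    is the number of ensemble members below the observation.
    """
    k = ens.shape[1]
    ranks = (ens < obs[:, None]).sum(axis=1)
    return np.bincount(ranks, minlength=k + 1)
```

U-shaped histograms indicate underdispersion and dome-shaped histograms overdispersion, which is why the histogram serves as a spread-reliability diagnostic in studies like the one above.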

Full access