Search Results

You are looking at 1–10 of 25 items for Author or Editor: Herschel L. Mitchell.
Herschel L. Mitchell

Abstract

Full access
Herschel L. Mitchell and Jacques Derome

Abstract

The resonance of stationary waves forced by topography is examined using a quasi-geostrophic model on a beta-plane channel. It is shown analytically that among the factors favoring the resonance of large, rather than synoptic or small, scale waves is the fact that the sensitivity of the large resonant responses to a change of zonal wind decreases as the scale of the resonant wave increases. A numerical model is used to examine resonance in the presence of topography having zonal wavenumber 2 with zonal flows having horizontal and vertical shear and including the effects of damping and nonlinear interactions. Although the effects of resonance are found to be important even in the presence of damping mechanisms, linear experiments with topographical forcing of reasonable amplitude indicate that a period of several weeks is required for a resonant internal mode to achieve large amplitude in the troposphere. However, as the structure of the resonant mode is such that it has much larger amplitudes in the upper atmosphere than in the troposphere, the interaction between this growing resonant mode and the mean flow which occurs when nonlinear effects are permitted triggers a stratospheric warming and zonal wind reversal. These events, which drive the system off resonance, occur long before large wave amplitudes are achieved in the lower atmosphere. The barotropic mode of zonal wavenumber 2 is shown not to resonate for reasonable values of our mean zonal wind primarily because the latter has the same (sinusoidal) meridional structure as the topography.

Full access
Herschel L. Mitchell and Jacques Derome

Abstract

Three-dimensional flows for which q = −λ(p)ψ, where q is the potential vorticity, ψ the stream function and λ some arbitrary function of pressure, are examined. It is found that flows which satisfy this condition and are quite similar to atmospheric blocking patterns can be generated by the superposition of a zonal current independent of the meridional coordinate plus two eddy components. These flows, for which the Jacobian of ψ and q is zero, are of interest because 1) in the absence of forcing they constitute steady state solutions of the potential vorticity equation; and 2) the possibility exists that they can be forced resonantly to a finite amplitude by means of a potential vorticity source. The arbitrariness in the choice of λ is removed by specifying the vertical profile of the diabatic heating. It is shown that when the latter is a linear function of pressure the resultant forced flow is nearly equivalent barotropic, stable to small amplitude perturbations, with a tendency for the blocking patterns to become somewhat more prominent with increasing pressure, in rather good agreement with observations of blocking highs.

By integration of a three-level beta-plane model in time, it is shown that it is indeed possible, in the absence of dissipation, to thermally force the above types of flows at resonance and to generate flow patterns that are quite similar to atmospheric blocking patterns. It is also shown that even when a rather broad spectrum of modes is thermally forced, the above resonant modes tend to dominate the flow, in spite of the possible interaction among modes. This would imply that provided the mean zonal flow has the proper strength to produce a resonance condition, the thermal forcing field need not have a very special structure to produce a finite amplitude disturbance through resonance.
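
The steady-state property noted in the abstract follows in one line: because λ depends only on pressure, substituting q = −λ(p)ψ makes the advection term vanish identically. A sketch of the argument:

```latex
% Unforced quasi-geostrophic potential vorticity equation:
%   \partial q / \partial t + J(\psi, q) = 0
% With q = -\lambda(p)\,\psi and \lambda independent of the
% horizontal coordinates, the Jacobian vanishes identically:
J(\psi, q) = J\bigl(\psi,\, -\lambda(p)\,\psi\bigr)
           = -\lambda(p)\, J(\psi, \psi) = 0,
% so \partial q / \partial t = 0: the flow is a steady state.
```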

Full access
P. L. Houtekamer and Herschel L. Mitchell

Abstract

No abstract available.

Full access
Herschel L. Mitchell and P. L. Houtekamer

Abstract

To the extent that model error is nonnegligible in numerical models of the atmosphere, it must be accounted for in 4D atmospheric data assimilation systems. In this study, a method of estimating and accounting for model error in the context of an ensemble Kalman filter technique is developed. The method involves parameterizing the model error and using innovations to estimate the model-error parameters. The estimation algorithm is based on a maximum likelihood approach and the study is performed in an idealized environment using a three-level, quasigeostrophic, T21 model and simulated observations and model error.

The use of a limited number of ensemble members gives rise to a rank problem in the estimate of the covariance matrix of the innovations. The effect of this problem on the two terms of the log-likelihood function is that the variance term is underestimated, while the χ² term is overestimated. To permit the use of relatively small ensembles, a number of strategies are developed to deal with these systematic estimation problems. These include the imposition of a block structure on the covariance matrix of the innovations and a Richardson extrapolation of the log-likelihood value to infinite ensemble size. It is shown that with the use of these techniques, estimates of the model-error parameters are quite acceptable in a statistical sense, even though estimates based on any single innovation vector can be poor.
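
The Richardson extrapolation mentioned above can be illustrated with a toy calculation. This is a hedged sketch, not the paper's implementation: it assumes the finite-ensemble bias in the log-likelihood decays like 1/N, and all names and numbers below are illustrative. Estimates at ensemble sizes N and N/2 then combine to cancel the leading bias term:

```python
def richardson_extrapolate(ll_n, ll_half_n):
    """Cancel an O(1/N) bias: if L(N) = L_inf + c/N,
    then 2*L(N) - L(N/2) = L_inf exactly."""
    return 2.0 * ll_n - ll_half_n

# Toy example (hypothetical values): true log-likelihood 5.0,
# finite-ensemble bias c/N with c = 40
c, true_ll = 40.0, 5.0
ll_80 = true_ll + c / 80.0   # biased estimate with N = 80
ll_40 = true_ll + c / 40.0   # biased estimate with N = 40
ll_inf = richardson_extrapolate(ll_80, ll_40)
```

In practice the bias is not exactly O(1/N), so the extrapolation reduces, rather than eliminates, the systematic error.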

It is found that, with temporal smoothing of the model-error parameter estimates, the adaptive ensemble Kalman filter produces fairly good estimates of the parameters and accounts rather well for the model error. In fact, its performance in a data assimilation cycle is almost as good as that of a cycle in which the correct model-error parameters are used to increase the spread in the ensemble.

Full access
Herschel L. Mitchell and P. L. Houtekamer

Abstract

This paper examines ensemble Kalman filter (EnKF) performance for a number of different EnKF configurations. The study is performed in a perfect-model context using the logistic map as forecast model. The focus is on EnKF performance when the ensemble is small. In accordance with theory, it is found that those configurations that maintain an appropriate ensemble spread are indeed those with the smallest ensemble mean error in a data assimilation cycle. Thus, the deficient ensemble spread produced by the single-ensemble EnKF results in increased ensemble mean error for this configuration. This problem with the conceptually simplest EnKF motivates an examination of a variety of other configurations. These include the configuration with a pair of ensembles and several configurations with overlapping ensembles, such as the four-subensemble configuration (used operationally at the Canadian Meteorological Centre) and the configuration in which observations are assimilated into each member using a gain computed from all of the other members. Also examined is a configuration that uses the jackknife estimator to obtain an estimate of the gain and an estimate of its uncertainty. Using these estimates, a different perturbed gain is then produced for each ensemble member. In general, it is found that these latter configurations outperform both the single-ensemble EnKF and the configuration with a pair of ensembles. In addition to these “stochastic” filters, the performance of a “deterministic” filter (which does not use perturbed observations) is also examined.
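
The single-ensemble configuration discussed above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's experimental setup: a scalar logistic map as forecast model, a stochastic EnKF with perturbed observations, and arbitrary parameter values; the gain is computed from the same ensemble it updates, which is the configuration whose deficient spread motivates the alternatives.

```python
import numpy as np

rng = np.random.default_rng(1)

def logistic(x, a=3.6):
    # Logistic map used as the (perfect) forecast model
    return a * x * (1.0 - x)

m = 10                      # ensemble size (illustrative)
truth = 0.4
ens = np.clip(truth + 0.05 * rng.standard_normal(m), 0.01, 0.99)
r = 0.01                    # observation-error variance

spread = []
for _ in range(50):
    # Forecast step for truth and for every ensemble member
    truth = logistic(truth)
    ens = logistic(ens)
    y = truth + np.sqrt(r) * rng.standard_normal()
    # Single-ensemble stochastic EnKF analysis: the gain is
    # estimated from the same ensemble it is applied to
    pb = ens.var(ddof=1)
    k = pb / (pb + r)
    yperts = y + np.sqrt(r) * rng.standard_normal(m)  # perturbed obs
    ens = np.clip(ens + k * (yperts - ens), 0.01, 0.99)
    spread.append(ens.std(ddof=1))
```

Comparing the time-averaged spread against the ensemble-mean error in such a cycle is the kind of diagnostic the study uses to rank configurations.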

Full access
P. L. Houtekamer and Herschel L. Mitchell

Abstract

An ensemble Kalman filter may be considered for the 4D assimilation of atmospheric data. In this paper, an efficient implementation of the analysis step of the filter is proposed. It employs a Schur (elementwise) product of the covariances of the background error calculated from the ensemble and a correlation function having local support to filter the small (and noisy) background-error covariances associated with remote observations. To solve the Kalman filter equations, the observations are organized into batches that are assimilated sequentially. For each batch, a Cholesky decomposition method is used to solve the system of linear equations. The ensemble of background fields is updated at each step of the sequential algorithm and, as more and more batches of observations are assimilated, evolves to eventually become the ensemble of analysis fields.
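
The two ingredients just described, the Schur product localization and the sequential batch solve via Cholesky decomposition, can be sketched on a toy 1-D state. This is a hedged illustration, not the paper's code: a simple triangular taper stands in for the compactly supported correlation function, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, nobs = 40, 25, 12     # state dim, ensemble size, observations

# Ensemble of background states and its sample covariance
ens = rng.standard_normal((m, n))
Pb = np.cov(ens, rowvar=False)

# Compactly supported correlation (triangular taper for
# illustration): zero beyond distance L on the 1-D grid
L = 10.0
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
rho = np.clip(1.0 - dist / L, 0.0, None)

Pb_loc = rho * Pb           # Schur (elementwise) product

# Observe the first nobs grid points with error variance 0.25
H = np.eye(n)[:nobs]
R = 0.25 * np.eye(nobs)
y = rng.standard_normal(nobs)

# Assimilate the observations sequentially in two batches;
# each batch solves its linear system by Cholesky factorization
xb = ens.mean(axis=0)
for batch in np.array_split(np.arange(nobs), 2):
    Hb = H[batch]
    S = Hb @ Pb_loc @ Hb.T + R[np.ix_(batch, batch)]
    c = np.linalg.cholesky(S)
    innov = y[batch] - Hb @ xb
    w = np.linalg.solve(c.T, np.linalg.solve(c, innov))
    xb = xb + Pb_loc @ Hb.T @ w
    # Kalman covariance update before the next batch
    K = Pb_loc @ Hb.T
    Pb_loc = Pb_loc - K @ np.linalg.solve(S, Hb @ Pb_loc)
```

In the full filter the whole ensemble (not just the mean) is updated at each step, so the background covariance is re-estimated rather than propagated explicitly as done here.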

A prototype sequential filter has been developed. Experiments are performed with a simulated observational network consisting of 542 radiosonde and 615 satellite-thickness profiles. Experimental results indicate that the quality of the analysis is almost independent of the number of batches (except when the ensemble is very small). This supports the use of a sequential algorithm.

A parallel version of the algorithm is described and used to assimilate over 100 000 observations into a pair of 50-member ensembles. Its operation count is proportional to the number of observations, the number of analysis grid points, and the number of ensemble members. In view of the flexibility of the sequential filter and its encouraging performance on a NEC SX-4 computer, an application with a primitive equations model can now be envisioned.

Full access
P. L. Houtekamer and Herschel L. Mitchell

Abstract

The possibility of performing data assimilation using the flow-dependent statistics calculated from an ensemble of short-range forecasts (a technique referred to as ensemble Kalman filtering) is examined in an idealized environment. Using a three-level, quasigeostrophic, T21 model and simulated observations, experiments are performed in a perfect-model context. By using forward interpolation operators from the model state to the observations, the ensemble Kalman filter is able to utilize nonconventional observations.

In order to maintain a representative spread between the ensemble members and avoid a problem of inbreeding, a pair of ensemble Kalman filters is configured so that the assimilation of data using one ensemble of short-range forecasts as background fields employs the weights calculated from the other ensemble of short-range forecasts. This configuration is found to work well: the spread between the ensemble members resembles the difference between the ensemble mean and the true state, except in the case of the smallest ensembles.

A series of 30-day data assimilation cycles is performed using ensembles of different sizes. The results indicate that (i) as the size of the ensembles increases, correlations are estimated more accurately and the root-mean-square analysis error decreases, as expected, and (ii) ensembles having on the order of 100 members are sufficient to accurately describe local anisotropic, baroclinic correlation structures. Due to the difficulty of accurately estimating the small correlations associated with remote observations, a cutoff radius, beyond which observations are not used, is implemented. It is found that (a) for a given ensemble size there is an optimal value of this cutoff radius, and (b) the optimal cutoff radius increases as the ensemble size increases.
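
The cutoff radius can be illustrated with a small geometry-only sketch (locations and radii below are arbitrary, not from the experiments): each analysis point uses only the observations within the cutoff, and enlarging the radius can only add observations, which is why its optimal value can grow as larger ensembles estimate remote correlations more reliably.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.arange(50.0)                 # 1-D analysis grid
obs_loc = rng.uniform(0.0, 50.0, 30)   # observation locations

def within_cutoff(x, cutoff):
    """Indices of the observations used at analysis point x."""
    return np.flatnonzero(np.abs(obs_loc - x) <= cutoff)

# Observation counts per analysis point for two cutoff radii
counts_small = np.array([within_cutoff(x, 5.0).size for x in grid])
counts_large = np.array([within_cutoff(x, 15.0).size for x in grid])
```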

Full access
Andrew N. Staniforth and Herschel L. Mitchell

Abstract

A barotropic primitive equation model using the finite-element method of space discretization is presented. The semi-implicit method of time discretization is implemented by taking a suitable form of the governing equations. Finite-element forecasts are then compared with those from both finite-difference and spectral models. From the point of view of both efficiency and accuracy it is concluded that the finite-element method of space discretization appears to be viable for the practical problem of numerical weather prediction.
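
Unlike finite differences, the finite-element discretization couples neighboring nodes through banded matrices that a semi-implicit step must invert; that coupling is central to the efficiency comparison above. As a generic sketch under stated assumptions (1-D linear "hat" elements on a uniform mesh, unrelated to the paper's actual barotropic model), the mass matrix is assembled element by element:

```python
import numpy as np

n_el = 8                      # number of elements on [0, 1]
h = 1.0 / n_el                # uniform element length
n = n_el + 1                  # number of nodes

# Element mass matrix for linear "hat" basis functions:
# integral of phi_i * phi_j over one element
m_el = (h / 6.0) * np.array([[2.0, 1.0],
                             [1.0, 2.0]])

# Assemble the global mass matrix; each element couples
# its two endpoint nodes, giving a tridiagonal system
M = np.zeros((n, n))
for e in range(n_el):
    M[e:e + 2, e:e + 2] += m_el
```

The resulting matrix is symmetric and tridiagonal, and its entries sum to the domain length, since the hat functions form a partition of unity.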

Full access
Andrew N. Staniforth and Herschel L. Mitchell

Abstract

A barotropic primitive-equation model using the finite-element method of space discretization is generalized to allow variable resolution. The overhead incurred in going from a uniform mesh to a variable mesh having the same number of degrees of freedom is found to be approximately 20% overall.

The variable-mesh model is used with several grid configurations, each having uniform high resolution over a specified area of interest and lower resolution elsewhere to produce short-term forecasts over this area without the necessity of high resolution everywhere. It is found that the forecast produced on a uniform high-resolution mesh can be essentially reproduced for a limited time over the limited area by a variable-mesh model having only a fraction of the number of degrees of freedom and requiring significantly less computer time. As expected, the period of validity of forecasts on variable meshes can be lengthened by refining the mesh in the outer region.

It is concluded that from the point of view of efficiency, accuracy and stability the variable-mesh finite-element technique appears to be well-suited to the practical problem of limited area/time forecasting.

Full access