Search Results

You are looking at 1–8 of 8 items for:

  • Author or Editor: Gary L. Achtemeier
  • Monthly Weather Review
Gary L. Achtemeier

Abstract

Primitive equation initialization, applicable to numerical prediction models, has been studied by using principles of variational calculus and the appropriate physical equations: two horizontal momentum, adiabatic energy, continuity, and hydrostatic. Essentially independent observations of the vector wind, geopotential height, and specific volume (through pressure and temperature), weighted according to their relative accuracies of measurement, are introduced into an adjustment functional in a least squares sense. The first variation on the functional is required to vanish and the four-dimensional Euler-Lagrange equations are solved as an elliptic boundary value problem.

The coarse 12 h observation frequency creates special problems since the variational formulation is explicit in time. A method accommodating the synoptic time scale was found whereby tendencies could be adjusted by requiring exact satisfaction of the continuity equation. This required a subsidiary variational formulation, the solution of which led to more rapid convergence to overall variational balance.

A test of the variational method using real data required satisfaction of three criteria: agreement with dynamic constraints, minimum adjustment from observed fields, and realistic patterns as could be judged from subjective pattern recognition. It was found that the criteria were satisfied and variationally optimized estimates for all dependent variables including vertical velocity were obtained.
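The core of the variational idea — accuracy-weighted observations adjusted in a least squares sense until a dynamic constraint is satisfied exactly — can be illustrated with a single linear constraint. This is a toy sketch only (function name and numbers are invented here), far simpler than the paper's Euler-Lagrange boundary value problem:

```python
import numpy as np

def weighted_adjust(obs, weights, a, b):
    """Minimally adjust observations, each weighted by its measurement
    accuracy, so the linear constraint a.x = b holds exactly.
    Requiring the first variation of sum(w_i*(x_i-obs_i)^2) to vanish
    under the constraint gives a closed-form Lagrange solution."""
    obs = np.asarray(obs, float)
    weights = np.asarray(weights, float)
    a = np.asarray(a, float)
    lam = (a @ obs - b) / np.sum(a**2 / weights)  # Lagrange multiplier
    return obs - lam * a / weights

# Two observations; the second is four times more trustworthy.
x = weighted_adjust([1.0, 2.0], [1.0, 4.0], [1.0, 1.0], 4.0)
```

The constraint (here, the two values must sum to 4) is met exactly, and the more accurate observation is adjusted least — the same principle the abstract applies with the continuity equation.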

Full access
Gary L. Achtemeier

Abstract

Tiny pressure gradient forces caused by hydrostatic truncation error can overwhelm minuscule pressure gradients that drive shallow nocturnal drainage winds in a mesobeta numerical model. In seeking a method to reduce these errors, a mathematical formulation for pressure gradient force errors was derived for a single coordinate surface bounded by two pressure surfaces. A nonlinear relationship was found between the lapse rate of temperature, the thickness of the bounding pressure layer, the slope of the coordinate surface, and the location of the coordinate surface within the pressure layer. The theory shows that pressure gradient force error can be reduced in the numerical model if column pressures are sums of incremental pressures over shallow layers. A series of model simulations verifies the theory and shows that the theory explains the only source of pressure gradient force error in the model.
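The benefit of summing column pressures over shallow layers can be illustrated with a toy hydrostatic integration through a standard-atmosphere lapse rate (standard constants; this is not the paper's model):

```python
import math

g, R = 9.80665, 287.05                   # gravity, dry-air gas constant
p0, T0, gamma = 1000.0, 288.15, 0.0065   # surface pressure (hPa), temp (K), lapse rate
ztop = 1500.0                            # column depth (m)

def T(z):
    return T0 - gamma * z

def hydrostatic_p(nlayers):
    """Integrate d(ln p)/dz = -g/(R T) with the trapezoid rule,
    summing increments over `nlayers` shallow layers."""
    dz = ztop / nlayers
    lnp = math.log(p0)
    for i in range(nlayers):
        z0, z1 = i * dz, (i + 1) * dz
        lnp -= 0.5 * dz * (g / (R * T(z0)) + g / (R * T(z1)))
    return math.exp(lnp)

# Exact solution for a constant lapse rate.
p_exact = p0 * (T(ztop) / T0) ** (g / (R * gamma))
err1  = abs(hydrostatic_p(1)  - p_exact)   # one thick layer
err10 = abs(hydrostatic_p(10) - p_exact)   # sum over shallow layers
```

The truncation error of the single thick layer shrinks by roughly two orders of magnitude when the column pressure is accumulated over ten shallow layers, in the spirit of the reduction the theory predicts.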

Full access
Gary L. Achtemeier

Abstract

There has been a long-standing concept among those who use successive corrections objective analysis that the way to obtain the most accurate objective analysis is first to analyze for the long wavelengths and then to build in details of the shorter wavelengths by successively decreasing the influence of the more distant observations upon the interpolated values. Using the Barnes method, we compared the filter characteristics for families of response curves that pass through a common point at a reference wavelength. It was found that the filter cutoff is a maximum if the filter parameters that determine the influence of observations are unchanged for both the initial and correction passes. This information was used to define and test the following hypothesis: if accuracy is defined by how well the method retains desired wavelengths and removes undesired wavelengths, then the Barnes method gives the most accurate analyses if the filter parameters on the initial and correction passes are the same. This hypothesis does not follow the usual conceptual approach to successive corrections analysis.

Theoretical filter response characteristics of the Barnes method with filter parameters set to retrieve the long wavelengths first and then build in the short wavelengths were compared with those for filter parameters set to retrieve the short wavelengths first and then build in the long wavelengths. The theoretical results and results from analyses of regularly spaced data show that the customary method of first analyzing for the long wavelengths and then building in the shorter wavelengths is not necessary for the single correction pass version of the Barnes method. Use of the same filter parameters for initial and correction passes improved the analyses from a fraction of a percent for long wavelengths to about ten percent for short but resolvable wavelengths.
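The common-point comparison can be checked numerically using the widely cited two-pass Barnes response (a standard textbook form, assumed here rather than taken from the paper): first-pass response D0 = exp[-kappa*(pi/lambda)^2], and a correction pass with kappa' = gamma*kappa giving D1 = D0*(1 + D0^(gamma-1) - D0^gamma). With kappa tuned so every curve passes through the same reference point, the equal-parameter choice gamma = 1 passes the least short-wave amplitude:

```python
import math

def D1(lam, kappa, gamma):
    """Two-pass Barnes response (standard form, assumed here)."""
    D0 = math.exp(-kappa * (math.pi / lam) ** 2)
    return D0 * (1.0 + D0 ** (gamma - 1.0) - D0 ** gamma)

def kappa_for(lam_ref, D_ref, gamma, lo=1e-6, hi=100.0):
    """Bisect for the kappa whose two-pass response equals D_ref at lam_ref,
    so that all candidate filters pass through a common reference point."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if D1(lam_ref, mid, gamma) > D_ref:
            lo = mid   # response still too high: needs more smoothing
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Common point: 90% response at a wavelength of 6 grid lengths.
r_same = D1(3.0, kappa_for(6.0, 0.9, 1.0), 1.0)  # same parameters both passes
r_conv = D1(3.0, kappa_for(6.0, 0.9, 0.3), 0.3)  # conventional gamma < 1
```

Both filters agree at 6 grid lengths, but at 3 grid lengths the equal-parameter filter passes less amplitude (roughly 0.39 versus 0.45), i.e., a sharper cutoff, consistent with the hypothesis above.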

However, the more sparsely and irregularly distributed the data, the less the results are in accord with the predictions of theory. Use of the same filter parameters gave better overall fit to the wavelengths shorter than eight times the minimum resolvable wave and slightly degraded fit to the longer wavelengths. Therefore, in the application of the Barnes method to irregularly spaced data, successively decreasing the influence of the more distant observations is still advisable if longer wavelengths are present in the field of data.

It also was found that no single selection of filter parameters for the two-pass method gives the best analysis for all wavelengths. A three-pass hybrid method is shown to reduce this problem.

Full access
Gary L. Achtemeier

Abstract

The use of objectively analyzed fields of meteorological data for complex diagnostic studies and for the initialization of numerical prediction models places the requirement upon the objective method that derivatives of the gridded fields be accurate and free from interpolation error. A modification of an objective analysis developed by Barnes provides improvements in analyses of both the field and its derivatives. Theoretical comparisons between analyses of analytical monochromatic waves and comparisons between analyses of actual weather data are used to show the potential of the new method. The new method restores more of the amplitudes of desired wavelengths while simultaneously filtering more of the amplitudes of undesired wavelengths. These results also hold for the first and second derivatives calculated from the gridded fields. Greatest improvements were for the Laplacians of the height field; the new method reduced the variance of undesirable very short wavelengths by 72 percent. Other improvements were found in the divergence of the gridded wind field and near the boundaries of the field of data.
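For orientation, a minimal one-dimensional two-pass Barnes analysis (a generic sketch, not the modified scheme of the paper) shows how the correction pass restores amplitude that the initial smoothing pass removes from a desired wavelength:

```python
import numpy as np

rng = np.random.default_rng(7)

def barnes(xg, xo, fo, kappa, passes=2):
    """1-D successive-corrections (Barnes) analysis using the same
    Gaussian weight w = exp(-r**2/kappa) on every pass."""
    w  = np.exp(-(xg[:, None] - xo[None, :]) ** 2 / kappa)
    wo = np.exp(-(xo[:, None] - xo[None, :]) ** 2 / kappa)
    g = (w @ fo) / w.sum(axis=1)            # initial pass on the grid
    b = (wo @ fo) / wo.sum(axis=1)          # initial pass at the obs sites
    for _ in range(passes - 1):
        resid = fo - b                      # residuals at the obs sites
        g += (w @ resid) / w.sum(axis=1)    # correction pass on the grid
        b += (wo @ resid) / wo.sum(axis=1)
    return g

xo = np.sort(rng.uniform(0.0, 10.0, 80))    # irregular observation sites
fo = np.sin(2 * np.pi * xo / 5.0)           # wavelength-5 signal, noise-free
xg = np.linspace(1.0, 9.0, 81)              # interior grid (boundaries avoided)
truth = np.sin(2 * np.pi * xg / 5.0)

rms = lambda e: float(np.sqrt(np.mean(e ** 2)))
err1 = rms(barnes(xg, xo, fo, 0.5, passes=1) - truth)
err2 = rms(barnes(xg, xo, fo, 0.5, passes=2) - truth)
```

The single pass damps the wave's amplitude; the correction pass recovers most of it, cutting the rms error several-fold.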

Full access
Gary L. Achtemeier

Abstract

Successive corrections objective analysis techniques frequently are used to analyze data from a limited area without consideration of how the absence of data beyond the boundaries of the network impacts the analysis in the interior of the grid. The problem of data boundaries is studied theoretically by extending the response theory for the Barnes objective analysis method to include boundary effects. The results from the theoretical studies are verified with objective analyses of analytical data. Several important points regarding the objective analysis of limited-area datasets are revealed through this study.

  • Data boundaries impact the objective analysis by reducing the amplitudes of long waves and shifting the phases of short waves. Further, in comparison with the infinite-plane response, it is found that truncation of the influence area by limited-area datasets and/or the phase shift of the original wave during the first pass amplified some of the resolvable short waves upon successive corrections to that first-pass analysis.

  • The distance that boundary effects intrude into the interior of the grid is inversely related to the weight function shape parameter. Attempts to reduce boundary impacts by producing a smooth analysis actually draw boundary effects farther into the interior of the network.

  • When analytical tests were performed with realistic values for the weight function shape parameters, such as the GEMPAK default criteria, it was found that boundary effects intruded into the interior of the analysis domain a distance equal to the average separation between observations. This does not pose a problem for the analysis of large datasets because several rows and columns of the grid can be discarded after the analysis. However, this option may not be possible for the analysis of limited-area datasets because there may not be enough observations.

The results show that, in the analysis of limited-area datasets, the analyst should be prepared to accept that most (probably all) analyses will suffer from the impacts of the boundaries of the data field.
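The boundary effect itself is easy to reproduce in one dimension: a single-pass Barnes estimate near the edge of a limited-area network, where the influence area is truncated on one side, errs far more than the same estimate in the interior. (A toy setup with invented numbers, not the paper's experiments.)

```python
import numpy as np

def barnes1(xa, xo, fo, kappa):
    """Single-pass Barnes estimate at points xa from observations (xo, fo)."""
    w = np.exp(-(xa[:, None] - xo[None, :]) ** 2 / kappa)
    return (w @ fo) / w.sum(axis=1)

xo = np.arange(0.0, 10.01, 0.25)          # limited-area network: data end at 0 and 10
fo = np.sin(2 * np.pi * xo / 8.0)         # wavelength-8 signal
xa = np.array([0.1, 5.0])                 # near-boundary point vs interior point
est = barnes1(xa, xo, fo, kappa=0.5)
truth = np.sin(2 * np.pi * xa / 8.0)
err_edge, err_mid = np.abs(est - truth)
```

At the interior point the only error is mild amplitude damping; at the near-boundary point, the one-sided influence area both damps and phase-shifts the wave, and the error is several times larger.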

Full access
Gary L. Achtemeier

Abstract

Objective streamline analysis techniques are separated into predictor-only and predictor-corrector methods, and generalized error formulas are derived for each method. Theoretical analysis errors are obtained from the ellipse, hyperbola, and sine wave curve families, curves which taken alone or in combination often describe many meteorological flow patterns. The predictor-only method always underestimated streamline curvature. The predictor-corrector method overestimated curvature where the step increment was directed toward increasing curvature and underestimated curvature where the step increment was directed toward decreasing curvature. This led to at least a partial compensation which reduced the cumulative error.
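The predictor-only versus predictor-corrector distinction is the familiar Euler-versus-Heun one. A sketch on solid-body rotation (assumed here as a stand-in for the paper's curve families) shows the predictor-only trace underestimating curvature, so its circle spirals outward, while the corrector step largely compensates:

```python
import math

def f(x, y):
    """Unit solid-body rotation: the true streamlines are circles about the origin."""
    return -y, x

def euler(x, y, h):                    # predictor-only step
    u, v = f(x, y)
    return x + h * u, y + h * v

def heun(x, y, h):                     # predictor-corrector step
    u1, v1 = f(x, y)
    xp, yp = x + h * u1, y + h * v1    # predictor
    u2, v2 = f(xp, yp)                 # corrector samples the predicted point
    return x + 0.5 * h * (u1 + u2), y + 0.5 * h * (v1 + v2)

def trace(stepper, h=0.05, n=126):
    """Step once around the unit circle; return the final radius (truth: 1.0)."""
    x, y = 1.0, 0.0
    for _ in range(n):
        x, y = stepper(x, y, h)
    return math.hypot(x, y)

r_euler, r_heun = trace(euler), trace(heun)
```

After one circuit, the predictor-only radius has grown by roughly 17 percent (curvature underestimated at every step), while the predictor-corrector radius stays within a fraction of a percent of 1.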

Full access
Stanley Q. Kidder and Gary L. Achtemeier

Abstract

No abstract available.

Full access