Search Results

You are looking at 1 - 10 of 15 items for

  • Author or Editor: Gary L. Achtemeier
Gary L. Achtemeier

Abstract

The use of objectively analyzed fields of meteorological data for complex diagnostic studies and for the initialization of numerical prediction models places the requirement upon the objective method that derivatives of the gridded fields be accurate and free from interpolation error. A modification of an objective analysis developed by Barnes provides improvements in analyses of both the field and its derivatives. Theoretical comparisons between analyses of analytical monochromatic waves and comparisons between analyses of actual weather data are used to show the potential of the new method. The new method restores more of the amplitudes of desired wavelengths while simultaneously filtering more of the amplitudes of undesired wavelengths. These results also hold for the first and second derivatives calculated from the gridded fields. Greatest improvements were for the Laplacians of the height field; the new method reduced the variance of undesirable very short wavelengths by 72 percent. Other improvements were found in the divergence of the gridded wind field and near the boundaries of the field of data.
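
For context, the successive corrections approach that Barnes introduced interpolates scattered observations to a grid with Gaussian distance weights and then re-analyzes the residuals on one or more correction passes. The sketch below is a minimal generic two-pass Barnes analysis; the function name and the kappa/gamma parameterization are illustrative assumptions, not Achtemeier's modified scheme:

    import numpy as np

    def barnes_analysis(xo, yo, fo, xg, yg, kappa, passes=2, gamma=1.0):
        # xo, yo, fo: observation coordinates and values (1-D arrays)
        # xg, yg: grid coordinates; kappa: Gaussian falloff parameter
        grid = np.zeros((yg.size, xg.size))
        resid = fo.astype(float)
        k = kappa
        for _ in range(passes):
            # analyze the current residuals onto the grid
            for j, gy in enumerate(yg):
                for i, gx in enumerate(xg):
                    w = np.exp(-((xo - gx)**2 + (yo - gy)**2) / (4.0 * k))
                    grid[j, i] += np.sum(w * resid) / np.sum(w)
            # re-evaluate the weighted sum at the observation points so the
            # next pass corrects what this pass left behind
            fa = np.empty_like(resid)
            for m in range(resid.size):
                w = np.exp(-((xo - xo[m])**2 + (yo - yo[m])**2) / (4.0 * k))
                fa[m] = np.sum(w * resid) / np.sum(w)
            resid = resid - fa
            k *= gamma  # gamma < 1 shrinks the influence radius on later passes
        return grid

Derivatives such as the Laplacian of a gridded height field are then taken by finite differences on the returned grid, which is where interpolation error becomes most visible.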

Full access
Gary L. Achtemeier

Abstract

Primitive equation initialization, applicable to numerical prediction models, has been studied by using principles of variational calculus and the appropriate physical equations: two horizontal momentum, adiabatic energy, continuity, and hydrostatic. Essentially independent observations of the vector wind, geopotential height, and specific volume (through pressure and temperature), weighted according to their relative accuracies of measurement, are introduced into an adjustment functional in a least squares sense. The first variation on the functional is required to vanish, and the four-dimensional Euler-Lagrange equations are solved as an elliptic boundary value problem.
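
Schematically, the adjustment functional weights squared departures of the analyzed variables from their observations and appends the dynamic equations as constraints through Lagrange multipliers. The notation below is assumed for illustration and is not taken from the paper:

    J = \iiiint \Big[ w_1 (u - \tilde{u})^2 + w_1 (v - \tilde{v})^2
          + w_2 (\phi - \tilde{\phi})^2 + w_3 (\alpha - \tilde{\alpha})^2
          + \textstyle\sum_i \lambda_i \, G_i(u, v, \phi, \alpha, \omega) \Big] \, dx \, dy \, dp \, dt

where tildes denote observed fields, the weights w_k reflect the relative measurement accuracies, and the G_i are the two momentum, adiabatic energy, continuity, and hydrostatic equations. Requiring the first variation \delta J = 0 produces the Euler-Lagrange system that is solved as an elliptic boundary value problem.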

The coarse 12 h observation frequency creates special problems since the variational formulation is explicit in time. A method accommodating the synoptic time scale was found whereby tendencies could be adjusted by requiring exact satisfaction of the continuity equation. This required a subsidiary variational formulation, the solution of which led to more rapid convergence to overall variational balance.

A test of the variational method using real data required satisfaction of three criteria: agreement with dynamic constraints, minimum adjustment from observed fields, and realistic patterns as could be judged from subjective pattern recognition. It was found that the criteria were satisfied and variationally optimized estimates for all dependent variables including vertical velocity were obtained.

Full access
Gary L. Achtemeier

Abstract

No abstract available.

Full access
Gary L. Achtemeier

Abstract

Tiny pressure gradient forces caused by hydrostatic truncation error can overwhelm the minuscule pressure gradients that drive shallow nocturnal drainage winds in a mesobeta numerical model. In seeking a method to reduce these errors, a mathematical formulation for pressure gradient force errors was derived for a single coordinate surface bounded by two pressure surfaces. A nonlinear relationship was found between the lapse rate of temperature, the thickness of the bounding pressure layer, the slope of the coordinate surface, and the location of the coordinate surface within the pressure layer. The theory shows that pressure gradient force error can be reduced in the numerical model if column pressures are sums of incremental pressures over shallow layers. A series of model simulations verifies the theory and shows that the theory explains the only source of pressure gradient force error in the model.
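
The error mechanism is easiest to see in the standard two-term form of the horizontal pressure gradient force on a sloping coordinate surface s, written here in a generic textbook form rather than the paper's own notation:

    -\frac{1}{\rho}\left(\frac{\partial p}{\partial x}\right)_{z}
      = -\frac{1}{\rho}\left(\frac{\partial p}{\partial x}\right)_{s}
        - g\left(\frac{\partial z}{\partial x}\right)_{s}

Over sloping terrain the two right-hand terms are individually large and nearly cancel, so truncation error in either term can exceed the small residual that drives a shallow drainage flow; summing column pressures as increments over shallow layers keeps the error in each term correspondingly small.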

Full access
Gary L. Achtemeier

Abstract

Objective streamline analysis techniques are separated into predictor-only and predictor-corrector methods, and generalized error formulas are derived for each method. Theoretical analysis errors are obtained from the ellipse, hyperbola, and sine wave curve families, curves which, taken alone or in combination, often describe many meteorological flow patterns. The predictor-only method always underestimated streamline curvature. The predictor-corrector method overestimated curvature where the step increment was directed toward increasing curvature and underestimated curvature where the step increment was directed toward decreasing curvature. This led to at least a partial compensation which reduced the cumulative error.
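
As an illustration of the two classes of methods, consider tracing a streamline through a solid-body-rotation wind field, whose streamlines are circles (a member of the ellipse family above). The step functions below are generic Euler and Heun schemes written for this sketch, not the paper's formulas:

    import numpy as np

    def wind(x, y):
        # hypothetical analytic wind field: solid-body rotation
        return -y, x

    def euler_step(x, y, h):
        # predictor-only: step a distance h along the local wind direction
        u, v = wind(x, y)
        s = np.hypot(u, v)
        return x + h * u / s, y + h * v / s

    def heun_step(x, y, h):
        # predictor-corrector: average the unit wind directions at the
        # start point and at the predicted point, then take the step
        u1, v1 = wind(x, y)
        s1 = np.hypot(u1, v1)
        xp, yp = x + h * u1 / s1, y + h * v1 / s1
        u2, v2 = wind(xp, yp)
        s2 = np.hypot(u2, v2)
        return x + 0.5 * h * (u1 / s1 + u2 / s2), y + 0.5 * h * (v1 / s1 + v2 / s2)

Stepping around the unit circle, the Euler trajectory drifts outward, underestimating curvature, while the Heun trajectory partially compensates, consistent with the error behavior described above.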

Full access
Gary L. Achtemeier

Abstract

Rainfall, wind, and temperature data at the surface for a mesoscale area surrounding St. Louis, Missouri, for seven summer days in 1975 were used to determine qualitative and quantitative relationships between divergence and the location, timing, and intensity of rainfall. The study used 30 prominent raincells that formed over a 1600 km² wind network under different synoptic and subsynoptic weather conditions.

The results indicate that the physical relationships between convergence and convective rainfall over the Middle West are quite complex. Widespread convective rainfall seldom occurred in the absence of some larger scale forcing. Possible forcing mechanisms occurred at various levels within the troposphere and over several different space scales. On the cell scale, convergence preceded some raincells but not others. Potential mechanisms that could explain the dichotomy were investigated.
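
For reference, network-scale divergence from gridded surface winds is typically computed with centered finite differences; a minimal sketch, with array names and grid spacing assumed for illustration:

    import numpy as np

    def horizontal_divergence(u, v, dx, dy):
        # u, v: 2-D arrays of wind components (m/s) on a regular grid
        # dx, dy: grid spacing (m); negative values indicate convergence
        dudx = np.gradient(u, dx, axis=1)
        dvdy = np.gradient(v, dy, axis=0)
        return dudx + dvdy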

The results from several statistical studies include:

1) Pre-rain average network convergence was weakly related to precipitation amount.

2) Significant changes in the subnetwork-scale convergence began as early as 75–90 min before the rain. Correlations maximized at 0.55, 15 min prior to rainfall.

3) Convergence centers that were spatially linked with raincells became established 30 min prior to rainfall on the average, but relatively large correlations (0.55) for these centers were found only at 15 min before the rain began.

4) The Ulanski and Garstang spatial index and a second index of convergence strength developed in this research explained about 10% of the rainfall variance for a subset of raincells for which rainfall totaled more than 1 cm.

Full access
Gary L. Achtemeier

Abstract

Operational and experimental convective cloud-seeding projects are often planned without regard to the number of seeding-opportunity days that can be lost because of the need to suspend operations during the threat of severe weather. June daily rainfall, severe storm and tornado watches, and observed tornadoes within a hypothetical (proposed) operational area over southwest Kansas were compared within the context of five procedures for severe-weather-related operations suspensions. These procedures varied in the restrictions placed on operations. The results show that anywhere from 45% to 87% of the June rain can fall while operations are suspended. The length of a scientific seeding experiment could be increased by anywhere from 45% to 426%. Finally, 46% of the tornadoes occurred when there were no concurrent tornado watches. This failure rate is so large that severe weather watches may not be useful for operations suspension procedures.

Full access
Gary L. Achtemeier

Abstract

Smoke from wildland burning in association with fog has been implicated as a visibility hazard over roadways in the United States. Visibilities at accident sites have been estimated in the range from 1 to 3 m (extinction coefficients between 1000 and 4000). Temperature and relative humidity measurements were taken from 29 “smokes” during 2002 and 2003. These data were converted to the ratio of the mass of water vapor present to the mass of dry air containing it (the smoke mixing ratio). Smoke temperatures were processed through a simple radiation model before smokes were mixed with ambient air with temperature and moisture observed during the early morning on the days following the burns. Calculations show supersaturations implying liquid water contents (LWC) up to 17 times as large as LWC found in natural fog. Simple models combining fog droplet number density, droplet size, and LWC show that the supersaturation LWC of smokes is capable of reducing visibility to the ranges observed.
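
The abstract does not give its formulas, but the chain of reasoning can be sketched with standard relations: Tetens' formula for saturation vapor pressure, condensed liquid water taken as the supersaturation excess of the mixing ratio, and the Koschmieder relation between extinction and visibility. All three are textbook stand-ins assumed here, not the paper's models:

    import math

    def saturation_mixing_ratio(T_c, p_hpa):
        # Tetens' formula for saturation vapor pressure (hPa), then the
        # standard conversion to a mixing ratio (kg vapor / kg dry air)
        e_s = 6.112 * math.exp(17.67 * T_c / (T_c + 243.5))
        return 0.622 * e_s / (p_hpa - e_s)

    def excess_lwc(r_total, T_c, p_hpa):
        # liquid water content (kg m^-3) condensed when a smoke/air
        # mixture is supersaturated; air density from the ideal gas law
        rho_air = 100.0 * p_hpa / (287.0 * (T_c + 273.15))
        return rho_air * max(0.0, r_total - saturation_mixing_ratio(T_c, p_hpa))

    def koschmieder_visibility_km(beta_per_km):
        # Koschmieder relation with the usual 2% contrast threshold
        return 3.912 / beta_per_km

If the extinction coefficients quoted above are read in units of km⁻¹, the Koschmieder relation gives visibilities of roughly 1 to 4 m for the 1000 to 4000 range, consistent with the 1 to 3 m estimates at the accident sites.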

Full access
Gary L. Achtemeier

Abstract

There has been a long-standing concept among those who use successive corrections objective analysis that the way to obtain the most accurate objective analysis is first to analyze for the long wavelengths and then to build in details of the shorter wavelengths by successively decreasing the influence of the more distant observations upon the interpolated values. Using the Barnes method, we compared the filter characteristics for families of response curves that pass through a common point at a reference wavelength. It was found that the filter cutoff is a maximum if the filter parameters that determine the influence of observations are unchanged for both the initial and correction passes. This information was used to define and test the following hypothesis: if accuracy is defined by how well the method retains desired wavelengths and removes undesired wavelengths, then the Barnes method gives the most accurate analyses if the filter parameters on the initial and correction passes are the same. This hypothesis does not follow the usual conceptual approach to successive corrections analysis.

Theoretical filter response characteristics of the Barnes method were compared for filter parameters set to retrieve the long wavelengths and then build in the short wavelengths with the method for filter parameters set to retrieve the short wavelengths and then build in the long wavelengths. The theoretical results and results from analyses of regularly spaced data show that the customary method of first analyzing for the long wavelengths and then building in the shorter wavelengths is not necessary for the single correction pass version of the Barnes method. Use of the same filter parameters for initial and correction passes improved the analyses from a fraction of a percent for long wavelengths to about ten percent for short but resolvable wavelengths.
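
The two-pass response function underlying these comparisons can be stated compactly. With the usual Gaussian weight w = exp(-r²/4κ), a single pass transmits wavelength λ with amplitude D₀ = exp(-4π²κ/λ²), and a correction pass run with κ' = γκ gives the combined response D₁ = D₀ + D₀^γ - D₀^(γ+1). This is the conventional form of the two-pass Barnes response; the paper's exact constants may differ. A minimal numerical sketch:

    import numpy as np

    def barnes_two_pass_response(wavelength, kappa, gamma=1.0):
        # amplitude response of one pass of the Gaussian Barnes weight
        d0 = np.exp(-4.0 * np.pi**2 * kappa / wavelength**2)
        # combined response after a correction pass with kappa' = gamma*kappa;
        # gamma = 1.0 reuses the same filter parameters on both passes
        return d0 + d0**gamma - d0**(gamma + 1.0)

Among response curves constrained to pass through a common point at a reference wavelength, the γ = 1 case gives the steepest cutoff, which is the basis of the hypothesis tested here.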

However, the more sparsely and irregularly distributed the data, the less the results are in accord with the predictions of theory. Use of the same filter parameters gave better overall fit to the wavelengths shorter than eight times the minimum resolvable wave and slightly degraded fit to the longer wavelengths. Therefore, in the application of the Barnes method to irregularly spaced data, successively decreasing the influence of the more distant observations is still advisable if longer wavelengths are present in the field of data.

It also was found that no single selection of filter parameters for the two-pass method gives the best analysis for all wavelengths. A three-pass hybrid method is shown to reduce this problem.

Full access