Search Results

Showing 1-10 of 15 items for Author or Editor: Gary L. Achtemeier.
Gary L. Achtemeier

Abstract

Rainfall, wind, and temperature data at the surface for a mesoscale area surrounding St. Louis, Missouri, for seven summer days in 1975 were used to determine qualitative and quantitative relationships between divergence and the location, timing, and intensity of rainfall. The study used 30 prominent raincells that formed over a 1600 km² wind network under different synoptic and subsynoptic weather conditions.

The results indicate that the physical relationships between convergence and convective rainfall over the Middle West are quite complex. Widespread convective rainfall seldom occurred in the absence of some larger scale forcing. Possible forcing mechanisms occurred at various levels within the troposphere and over several different space scales. On the cell scale, convergence preceded some raincells but not others. Potential mechanisms that could explain the dichotomy were investigated.

The results from several statistical studies include:

1) Pre-rain average network convergence was weakly related to precipitation amount.

2) Significant changes in the subnetwork-scale convergence began as early as 75–90 min before the rain. Correlations maximized at 0.55, 15 min prior to rainfall.

3) Convergence centers that were spatially linked with raincells became established, on average, 30 min prior to rainfall, but relatively large correlations (0.55) for these centers were found only at 15 min before the rain began (a lagged-correlation sketch follows this list).

4) The Ulanski and Garstang spatial index and a second index of convergence strength developed in this research explained about 10% of the rainfall variance for a subset of raincells for which rainfall totalled more than 1 cm.
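
Result 3 above turns on lagged correlations between network convergence and later rainfall. The following is a minimal sketch of that style of computation, assuming station winds already interpolated to a regular grid; the grid dimensions, spacing, and synthetic series are hypothetical placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, ny, nx = 48, 10, 10          # 48 five-minute steps on a 10 x 10 grid
dx = dy = 4000.0                 # grid spacing (m) for a ~1600 km^2 network

u = rng.normal(0.0, 2.0, (nt, ny, nx))   # placeholder u wind (m/s)
v = rng.normal(0.0, 2.0, (nt, ny, nx))   # placeholder v wind (m/s)
rain = rng.gamma(2.0, 0.5, nt)           # placeholder cell rainfall (mm)

# Horizontal divergence du/dx + dv/dy via centered differences.
div = np.gradient(u, dx, axis=2) + np.gradient(v, dy, axis=1)
mean_div = div.mean(axis=(1, 2))         # network-average divergence

# Lag correlation between convergence (-divergence) and later rainfall,
# for leads of 0 to 90 minutes in 15-minute steps.
for lag_steps in range(0, 19, 3):
    d = mean_div[: nt - lag_steps]
    r = rain[lag_steps:]
    c = np.corrcoef(-d, r)[0, 1]
    print(f"convergence leading rain by {5 * lag_steps:3d} min: r = {c:+.2f}")
```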

Gary L. Achtemeier

Abstract

Single- and multiple-Doppler radar systems are increasingly being used to monitor circulations within the clear-air boundary layer where the scatterers may be gradients of the refractive index or biota or a combination of both. When insects are the primary source of returned radar power, it must be assumed that the insects are either small and are being carried passively in the air, or are flying randomly so that the bulk velocity of all the insects contained within a pulse volume is zero relative to the air.

This study presents dual-polarization radar observations of the interaction between a gust flow and a deep cloud of insects within a relatively unstable air mass over North Dakota on 4 July 1987. These data are unique in that they reveal several meteorological conditions for which the preceding assumption is not valid. The boundary layer was not capped, and circulations rose above an apparent threshold altitude above which these insects were not flying. Temperatures near the threshold altitude were in the range of 10°–15°C. The top of the insect layer remained near 1800 m AGL regardless of circulations that could have carried insects to higher altitudes.

A biological hypothesis of flight response to strong updrafts was developed and tested with dual-polarization data. Localized decreases in differential reflectivity Z_DR, interpreted as the result of reorientation of insects in evasive flight, were coincident with strong updrafts identified from the analyses of the Doppler velocities.

This study shows that conditions exist for which the insects are not valid tracers of air motion. Therefore, care must be taken that combined insect and wind velocities are not taken as wind velocity alone.
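
The closing caution suggests screening clear-air Doppler data where insect behavior is suspect. Below is a minimal sketch of one such screen, assuming gridded fields of differential reflectivity and retrieved vertical velocity; the field names and thresholds (a localized Z_DR drop coincident with a strong updraft, echoing the evasive-flight signature described above) are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 60, 60
zdr = rng.normal(4.0, 0.8, (ny, nx))     # differential reflectivity (dB)
w = rng.normal(0.0, 1.5, (ny, nx))       # retrieved vertical velocity (m/s)

# Flag gates where a localized Z_DR decrease coincides with a strong
# updraft -- the evasive-flight signature described in the abstract.
zdr_anomaly = zdr - np.median(zdr)
suspect = (zdr_anomaly < -1.0) & (w > 3.0)   # illustrative thresholds

wind = np.where(suspect, np.nan, w)      # withhold suspect gates from use
print(f"{suspect.sum()} of {suspect.size} gates flagged as evasive flight")
```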

Gary L. Achtemeier

Abstract

Tiny pressure gradient forces caused by hydrostatic truncation error can overwhelm the minuscule pressure gradients that drive shallow nocturnal drainage winds in a mesobeta numerical model. In seeking a method to reduce these errors, a mathematical formulation for pressure gradient force errors was derived for a single coordinate surface bounded by two pressure surfaces. A nonlinear relationship was found between the lapse rate of temperature, the thickness of the bounding pressure layer, the slope of the coordinate surface, and the location of the coordinate surface within the pressure layer. The theory shows that pressure gradient force error can be reduced in the numerical model if column pressures are sums of incremental pressures over shallow layers. A series of model simulations verifies the theory and shows that the theory explains the only source of pressure gradient force error in the model.
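
The remedy described above, accumulating column pressure as a sum of increments over shallow layers, can be illustrated outside any particular model. A minimal sketch, assuming a constant lapse rate and a generic first-order hydrostatic step (not the paper's model numerics): the truncation error in the pressure at the top of a 2000 m column shrinks as the column is split into thinner layers.

```python
import numpy as np

g, R = 9.80665, 287.05       # gravity (m/s^2), dry-air gas constant
p0, T0, gamma = 101325.0, 288.15, 0.0065   # surface state, lapse rate (K/m)

def p_exact(z):
    # Hypsometric result for a constant lapse rate.
    return p0 * ((T0 - gamma * z) / T0) ** (g / (R * gamma))

def p_integrated(z_top, n_layers):
    # Hydrostatic step p_{k+1} = p_k - rho_k * g * dz, summed upward.
    dz, p, z = z_top / n_layers, p0, 0.0
    for _ in range(n_layers):
        rho = p / (R * (T0 - gamma * z))
        p -= rho * g * dz
        z += dz
    return p

z_top = 2000.0
for n in (1, 4, 20):
    err = p_integrated(z_top, n) - p_exact(z_top)
    print(f"{n:3d} layer(s): truncation error = {err:+8.2f} Pa")
```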

Gary L. Achtemeier

Abstract

A long-standing concept among users of successive corrections objective analysis holds that the most accurate analysis is obtained by first analyzing for the long wavelengths and then building in details of the shorter wavelengths by successively decreasing the influence of the more distant observations upon the interpolated values. Using the Barnes method, we compared the filter characteristics for families of response curves that pass through a common point at a reference wavelength. It was found that the filter cutoff is a maximum if the filter parameters that determine the influence of observations are unchanged for both the initial and correction passes. This information was used to define and test the following hypothesis: if accuracy is defined by how well the method retains desired wavelengths and removes undesired wavelengths, then the Barnes method gives the most accurate analyses if the filter parameters on the initial and correction passes are the same. This hypothesis does not follow the usual conceptual approach to successive corrections analysis.

Theoretical filter response characteristics of the Barnes method were compared for filter parameters set to retrieve the long wavelengths and then build in the short wavelengths, and for filter parameters set to retrieve the short wavelengths and then build in the long wavelengths. The theoretical results and results from analyses of regularly spaced data show that the customary method of first analyzing for the long wavelengths and then building in the shorter wavelengths is not necessary for the single correction pass version of the Barnes method. Use of the same filter parameters for the initial and correction passes improved the analyses from a fraction of a percent for long wavelengths to about ten percent for short but resolvable wavelengths.

However, the more sparsely and irregularly distributed the data, the less the results are in accord with the predictions of theory. Use of the same filter parameters gave a better overall fit to the wavelengths shorter than eight times the minimum resolvable wave and a slightly degraded fit to the longer wavelengths. Therefore, in the application of the Barnes method to irregularly spaced data, successively decreasing the influence of the more distant observations is still advisable if longer wavelengths are present in the field of data.

It also was found that no single selection of filter parameters for the two-pass method gives the best analysis for all wavelengths. A three-pass hybrid method is shown to reduce this problem.
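
The central claim can be checked numerically. The sketch below assumes one standard form of the two-pass Barnes response, D = D0 + D0^gamma (1 - D0) with D0 = exp[-k (2 pi / lambda)^2]; the reference wavelength, common-point response, and gamma values are illustrative choices rather than the paper's. Matching each gamma family at a common point and then comparing responses at a barely resolved wave shows that the gamma = 1 (same parameters) curve suppresses the short wave hardest.

```python
import numpy as np

def response(lam, k, gamma):
    # Two-pass Barnes response: initial pass D0, correction pass with
    # weight parameter gamma * k (one standard formulation; the paper's
    # notation may differ): D = D0 + D0**gamma * (1 - D0).
    d0 = np.exp(-k * (2.0 * np.pi / lam) ** 2)
    return d0 + d0 ** gamma * (1.0 - d0)

def k_for(lam_ref, target, gamma, lo=1e-8, hi=100.0):
    # Bisection: pick k so each gamma family passes through the common
    # point (lam_ref, target), mimicking the paper's matched curves.
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if response(lam_ref, mid, gamma) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam_ref, target = 8.0, 0.9      # wavelengths in units of station spacing
for gamma in (1.0, 0.5, 0.2):
    k = k_for(lam_ref, target, gamma)
    short = response(2.0, k, gamma)   # response at a barely resolved wave
    print(f"gamma={gamma:.1f}: k={k:.3f}, response at 2dx = {short:.4f}")
```

With these particular choices, the same-parameter curve passes roughly an order of magnitude less amplitude at the two-gridlength wave than the gamma = 0.2 curve.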

Gary L. Achtemeier

Abstract

Successive corrections objective analysis techniques frequently are used to array data from limited areas without consideration of how the absence of data beyond the boundaries of the network impacts the analysis in the interior of the grid. The problem of data boundaries is studied theoretically by extending the response theory for the Barnes objective analysis method to include boundary effects. The results from the theoretical studies are verified with objective analyses of analytical data. Several important points regarding the objective analysis of limited-area datasets are revealed through this study.

  • Data boundaries impact the objective analysis by reducing the amplitudes of long waves and shifting the phases of short waves. Further, in comparison with the infinite-plane response, it is found that truncation of the influence area by limited-area datasets and/or the phase shift of the original wave during the first pass amplified some of the resolvable short waves upon successive corrections to that first-pass analysis.

  • The distance that boundary effects intrude into the interior of the grid is inversely related to the weight function shape parameter. Attempts to reduce boundary impacts by producing a smooth analysis actually draw boundary effects farther into the interior of the network.

  • When analytical tests were performed with realistic values for the weight function shape parameters, such as the GEMPAK default criteria, it was found that boundary effects intruded into the interior of the analysis domain a distance equal to the average separation between observations. This does not pose a problem for the analysis of large datasets because several rows and columns of the grid can be discarded after the analysis. However, this option may not be possible for the analysis of limited-area datasets because there may not be enough observations.

The results show that, in the analysis of limited-area datasets, the analyst should be prepared to accept that most (probably all) analyses will suffer from the impacts of the boundaries of the data field.
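
The boundary intrusion described in the bullets above can be reproduced in one dimension. A minimal sketch, assuming a single bounded line of stations, the standard Gaussian Barnes weight, and a same-parameter correction pass; the wavelength, spacing, and weight parameter are illustrative. Deviations from the infinite-plane two-pass response grow sharply toward the data boundary.

```python
import numpy as np

xs = np.arange(0.0, 40.0, 1.0)          # stations on a bounded segment
lam, k = 10.0, 2.0                      # illustrative wave and parameter
obs = np.sin(2.0 * np.pi * xs / lam)    # analytic "observations"

def barnes_pass(values, k):
    # One Barnes pass evaluated at the station locations themselves.
    w = np.exp(-((xs[:, None] - xs[None, :]) ** 2) / (4.0 * k))
    return (w * values).sum(axis=1) / w.sum(axis=1)

first = barnes_pass(obs, k)
final = first + barnes_pass(obs - first, k)   # same-parameter correction

# Infinite-plane two-pass response for this wavelength and parameter.
d0 = np.exp(-k * (2.0 * np.pi / lam) ** 2)
pred = (2.0 * d0 - d0 ** 2) * obs

dev = np.abs(final - pred)
print("deviation from infinite-plane theory")
print("  near boundary:", dev[:5].round(3))
print("  interior:     ", dev[17:22].round(3))
```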

Gary L. Achtemeier

Abstract

Objective streamline analysis techniques are separated into predictor-only and predictor-corrector methods, and generalized error formulas are derived for each method. Theoretical analysis errors are obtained from the ellipse, hyperbola, and sine wave curve families, curves that, taken alone or in combination, often describe many meteorological flow patterns. The predictor-only method always underestimated streamline curvature. The predictor-corrector method overestimated curvature where the step increment was directed toward increasing curvature and underestimated curvature where the step increment was directed toward decreasing curvature. This led to at least a partial compensation which reduced the cumulative error.
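
The contrast described here mirrors the textbook behavior of one-step (Euler-type) versus averaged (Heun-type) stepping. A minimal sketch on solid-body rotation, whose streamlines are exact circles; the flow field and step size are illustrative, not taken from the paper.

```python
import numpy as np

def wind(p):
    # Solid-body rotation: streamlines are circles about the origin.
    x, y = p
    return np.array([-y, x])

def step_euler(p, h):
    # Predictor-only: one step along the local wind.
    return p + h * wind(p)

def step_heun(p, h):
    # Predictor-corrector: average the winds at both ends of the step.
    q = p + h * wind(p)
    return p + 0.5 * h * (wind(p) + wind(q))

h, n = 0.1, 63                    # roughly one full revolution
p_e = p_h = np.array([1.0, 0.0])
for _ in range(n):
    p_e = step_euler(p_e, h)
    p_h = step_heun(p_h, h)

# Radius drift measures the curvature error: Euler spirals outward
# (curvature underestimated); Heun stays near the unit circle.
print("predictor-only radius:     ", np.hypot(*p_e).round(4))
print("predictor-corrector radius:", np.hypot(*p_h).round(4))
```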

Gary L. Achtemeier

Abstract

The use of objectively analyzed fields of meteorological data for complex diagnostic studies and for the initialization of numerical prediction models places the requirement upon the objective method that derivatives of the gridded fields be accurate and free from interpolation error. A modification of an objective analysis developed by Barnes provides improvements in analyses of both the field and its derivatives. Theoretical comparisons between analyses of analytical monochromatic waves, and comparisons between analyses of actual weather data, are used to show the potential of the new method. The new method restores more of the amplitudes of desired wavelengths while simultaneously filtering more of the amplitudes of undesired wavelengths. These results also hold for the first and second derivatives calculated from the gridded fields. Greatest improvements were for the Laplacians of the height field; the new method reduced the variance of undesirable very short wavelengths by 72 percent. Other improvements were found in the divergence of the gridded wind field and near the boundaries of the field of data.
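
The opening requirement is demanding because differentiation preferentially amplifies the short wavelengths that interpolation handles worst. A minimal one-dimensional sketch (the grid, the wavelengths, and the 1 percent noise level are illustrative):

```python
import numpy as np

d = 1.0                               # grid spacing
x = np.arange(0.0, 64.0, d)
long_wave = np.sin(2.0 * np.pi * x / (16 * d))   # signal, amplitude 1
noise = 0.01 * np.cos(np.pi * x / d)             # 1% noise at 2dx
                                                 # (cosine: a 2dx sine
                                                 # samples to zero)

def laplacian(f, d):
    # Second centered difference, a 1D stand-in for the Laplacian,
    # with periodic wraparound via np.roll.
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / d**2

print("signal Laplacian amplitude:", np.abs(laplacian(long_wave, d)).max().round(3))
print("noise  Laplacian amplitude:", np.abs(laplacian(noise, d)).max().round(3))
```

Here 1 percent noise at the two-gridlength wavelength contributes roughly a quarter of the computed second-derivative amplitude, which is why filtering very short wavelengths matters so much for Laplacians.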

Gary L. Achtemeier

Abstract

Operational and experimental convective cloud-seeding projects are often planned without regard to the number of seeding-opportunity days that can be lost because of the need to suspend operations during the threat of severe weather. June daily rainfall, severe storm and tornado watches, and observed tornadoes within a hypothetical (proposed) operational area over southwest Kansas were compared within the context of five procedures for severe-weather-related suspension of operations. These procedures varied in the restrictions placed on operations. The results show that anywhere from 45% to 87% of the June rain can fall while operations are suspended. The length of a scientific seeding experiment could be increased anywhere from 45% to 426%. Finally, 46% of the tornadoes occurred when there were no concurrent tornado watches. This failure rate is so large that severe weather watches may not be useful for operations suspension procedures.

Gary L. Achtemeier

Abstract

Primitive equation initialization, applicable to numerical prediction models, has been studied by using principles of variational calculus and the appropriate physical equations: two horizontal momentum, adiabatic energy, continuity, and hydrostatic. Essentially independent observations of the vector wind, geopotential height, and specific volume (through pressure and temperature), weighted according to their relative accuracies of measurement, are introduced into an adjustment functional in a least squares sense. The first variation on the functional is required to vanish, and the four-dimensional Euler-Lagrange equations are solved as an elliptic boundary value problem.

The coarse 12 h observation frequency creates special problems since the variational formulation is explicit in time. A method accommodating the synoptic time scale was found whereby tendencies could be adjusted by requiring exact satisfaction of the continuity equation. This required a subsidiary variational formulation, the solution of which led to more rapid convergence to overall variational balance.

A test of the variational method using real data required satisfaction of three criteria: agreement with dynamic constraints, minimum adjustment from observed fields, and realistic patterns as could be judged from subjective pattern recognition. It was found that the criteria were satisfied and variationally optimized estimates for all dependent variables including vertical velocity were obtained.
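
A minimal sketch of the idea behind the subsidiary formulation, reduced to two dimensions: adjust the wind components as little as possible, in the least squares sense, so that the continuity equation is satisfied exactly. This is a generic Sasaki-style adjustment, not the paper's full four-dimensional primitive-equation functional; the grid, the placeholder winds, and the iteration counts are illustrative.

```python
import numpy as np

# Minimize integral of (u'-u)^2 + (v'-v)^2 subject to u'_x + v'_y = 0.
# The Euler-Lagrange equations give u' = u + lx/2, v' = v + ly/2 with
# lap(l) = -2 div(u, v), solved here as an elliptic problem.
rng = np.random.default_rng(2)
n, h, omega = 32, 1.0, 0.8
u = rng.normal(size=(n, n))          # placeholder wind components
v = rng.normal(size=(n, n))

def fwd(f, ax):                      # forward difference (periodic)
    return (np.roll(f, -1, ax) - f) / h

def bwd(f, ax):                      # backward difference (periodic)
    return (f - np.roll(f, 1, ax)) / h

div = bwd(u, 1) + bwd(v, 0)          # axis 1 is x, axis 0 is y
rhs = -2.0 * div

# Damped Jacobi sweeps for lap(lam) = rhs (omega < 1 so the
# checkerboard mode also converges on the periodic grid).
lam = np.zeros_like(rhs)
for _ in range(5000):
    sweep = 0.25 * (np.roll(lam, 1, 0) + np.roll(lam, -1, 0)
                    + np.roll(lam, 1, 1) + np.roll(lam, -1, 1)
                    - h * h * rhs)
    lam += omega * (sweep - lam)

u2 = u + 0.5 * fwd(lam, 1)           # Euler-Lagrange adjustments
v2 = v + 0.5 * fwd(lam, 0)
print("max |div| before:", np.abs(div).max().round(6))
print("max |div| after: ", np.abs(bwd(u2, 1) + bwd(v2, 0)).max().round(6))
```

Pairing the backward-difference divergence with forward-difference adjustments makes the composite operator the compact five-point Laplacian, so the adjusted divergence vanishes to the accuracy of the elliptic solve.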
