Search Results

You are looking at 1–10 of 11 items for:

  • Author or Editor: FEDOR MESINGER
Fedor Mesinger

Abstract

The problem of the forced adjustment of the wind field to the height field is experimentally studied with the Mintz-Arakawa two-level atmospheric general circulation model.

In all but one of the experiments, the height field was assumed to be perfectly observed at 6-hr intervals, over a time period of one day or less, and from this height data the vector wind field was computed by forced dynamical adjustment. In one experiment, the temperature alone was prescribed. The winds computed in these experiments were compared with the “control” winds of the general circulation simulation.

The best agreement between the computed and the control winds was obtained when the time-differencing scheme in the governing finite-difference equations of motion had a large rate of damping of high-frequency motions. This damping rate also determined the optimum fraction and frequency of restoration of the height (or temperature) fields. With strong damping, total restoration every time step gave the most rapid rate of wind error reduction and the smallest asymptotic limit of the wind error.
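The restoration idea can be illustrated with a toy sketch. In the code below, a linear oscillator stands in for the coupled height/wind dynamics: the "observed" variable is totally restored every time step, and the error in the unobserved variable decays through the coupling. This is only a schematic analogue of the procedure, not the Mintz-Arakawa model.

```python
import math

# Toy analogue of forced adjustment (not the Mintz-Arakawa model): a
# linear oscillator (h, u) with dh/dt = u, du/dt = -h stands in for the
# coupled height/wind fields. The truth is h = cos(t), u = -sin(t).
# The model wind starts in error, and the "observed" height is totally
# restored every time step, as in the strongest-restoration experiments.

def heun_step(h, u, dt):
    """One Heun (RK2) step of dh/dt = u, du/dt = -h."""
    k1h, k1u = u, -h
    hs, us = h + dt * k1h, u + dt * k1u
    k2h, k2u = us, -hs
    return h + 0.5 * dt * (k1h + k2h), u + 0.5 * dt * (k1u + k2u)

def forced_adjustment(u_err0=1.0, dt=0.1, nsteps=1000):
    t = 0.0
    h, u = math.cos(t), -math.sin(t) + u_err0  # wind starts in error
    for _ in range(nsteps):
        h, u = heun_step(h, u, dt)
        t += dt
        h = math.cos(t)  # total restoration of the observed height
    return abs(u - (-math.sin(t)))  # remaining wind error

print(forced_adjustment())
```

With these settings the wind-like error decays by well over an order of magnitude; as in the experiments above, the rate of decay is controlled by the time step and the differencing scheme.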

The information content of the height field and its time derivatives was analyzed. The first time derivative of the height field was of much greater importance than the higher time derivatives. In middle latitudes, where the time variation of the height field was large, the first time derivative reduced the computed wind error to about half of the error obtained with no time derivative. When the information is limited to 24 hr or less, the total height field information (surface pressure as well as temperature) produced a much smaller wind error than temperature information alone.

With the first time derivative of the height field, the asymptotic limit of the computed wind error was about 1–1.5 m sec⁻¹ in middle latitudes and about 2.5 m sec⁻¹ in the tropics.

Full access
Fedor Mesinger

Abstract

A numerical experiment is performed with the purpose of investigating the behavior of the trajectories of a very large number of constant-volume particles. Practical significance is given to this problem by the possibility of using superpressured, constant-volume balloons for routine upper air observations.

The computational scheme of the experiment is described with emphasis on some aspects of trajectory computations. Thirty-day diagnostic trajectories are computed for two levels using the total velocity components, and for one level using only the nondivergent ones, and the resulting spacing of the trajectory points is discussed. A theory of the distances to the nearest among a large number of points is developed and applied for the statistical description of the results. Histograms of distances from the constant-volume particles, as well as from random points in space, to the nearest neighboring constant-volume particle are computed and compared with the frequency function of those distances for the case of a random distribution of particles. It is shown that the nondivergent part of the atmospheric motions gives rise to a random distribution of initially regularly spaced particles. Departures from the random distribution are therefore produced by the divergent part of the atmospheric motions. In the experiment these resulted in increases in the distances from random points in space to the nearest constant-volume particle of about 12 and 4 per cent at the levels corresponding to 800 and 300 mb, respectively. The computational region was approximately equal to the area north of 15°N; a somewhat larger effect of the divergent part of the wind should be expected in the case of global constant-volume trajectories.
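The benchmark used above, a fully random distribution of particles, has a classical form: for a homogeneous Poisson pattern of intensity λ per unit area, the nearest-neighbor distance has density f(r) = 2πλr exp(−πλr²) and mean 1/(2√λ). The sketch below checks this empirically; the unit torus used to avoid edge effects is a convenience assumed here, not taken from the paper.

```python
import math
import random

# Minimal sketch of the nearest-neighbor statistics: for a homogeneous
# random (Poisson) pattern of intensity lam per unit area, the distance
# to the nearest particle has density
#   f(r) = 2*pi*lam*r*exp(-pi*lam*r**2),  mean = 1/(2*sqrt(lam)).
# Points live on a unit torus to avoid edge effects (a convenience
# assumed here, not part of the original experiment).

def torus_dist(a, b):
    dx = min(abs(a[0] - b[0]), 1.0 - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), 1.0 - abs(a[1] - b[1]))
    return math.hypot(dx, dy)

def mean_nn_distance(n=400, seed=1):
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    total = 0.0
    for i, p in enumerate(pts):
        total += min(torus_dist(p, q) for j, q in enumerate(pts) if j != i)
    return total / n

lam = 400.0                    # n points in a unit square
theory = 0.5 / math.sqrt(lam)  # = 0.025
print(mean_nn_distance(), theory)
```

Divergent motions cluster the particles, which lengthens the distance from a random point in space to its nearest particle relative to this Poisson baseline, the 12 and 4 per cent effects quoted above.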

Full access
Fedor Mesinger

Abstract

For use in a model on the semi-staggered E grid (in the Arakawa notation), a number of conserving schemes for the horizontal advection are developed and analyzed. For the rotation terms of the momentum advection, the second-order enstrophy- and energy-conserving scheme of Janjić (1977) is generalized to conserve energy in the case of divergent flow. A family of analogs of the Arakawa (1966) fourth-order scheme is obtained following a transformation of its component Jacobians. For the kinetic energy advection terms, a fourth-order (or approximately fourth-order) scheme is developed which maintains the total kinetic energy and, in addition, makes no contribution to the change in the finite-difference vorticity. For the resulting momentum advection scheme, of both second and fourth order, a modification is pointed out which avoids the non-cancellation of terms considered recently by Hollingsworth and Källberg (1979) and shown to lead to a linear instability of a zonally uniform inertia-gravity wave. Finally, a second-order as well as a fourth-order (or approximately so) scheme for temperature (and moisture) advection is given, preserving the total energy (and moisture) inside the integration region.

Full access
FEDOR MESINGER

Abstract

A Lagrangean-type numerical forecasting method is developed in which the computational (grid) points are advected by the wind and the necessary space derivatives (in the pressure gradient terms, for example) are computed using the values of the variables at all the computation points that at the particular moment are within a prescribed distance of the point for which the computation is done. In this way, the forecasting problem reduces to solving the ordinary differential equations of motion and thermodynamics for each computation point, instead of solving the partial differential equations in the Eulerian or classical Lagrangean way. The method has some advantages over the conventional Eulerian scheme: simplicity (there are no advection terms), lack of computational dispersion in the advection terms and therefore better simulation of atmospheric advection and deformation effects, very little inconvenience due to the spherical shape of the earth, and the possibility for a variable space resolution if desired. On the other hand, some artificial smoothing may be necessary, and it may be difficult (or impossible) to conserve the global integrals of certain quantities.
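The essential numerical ingredient of such a method is the estimate of a spatial derivative from scattered neighboring points. The abstract does not give the exact formula used; a least-squares fit of a local plane is one standard choice, sketched below on a field with a known gradient.

```python
import random

# Hedged sketch of the derivative estimate such a method needs (the
# exact formula of the paper is not given here; least-squares fitting
# of a local plane is one standard choice): a spatial derivative at a
# computation point is recovered from the values at all neighboring
# points within a prescribed radius.

def plane_fit_gradient(x0, y0, f0, neighbors):
    """Least-squares fit f ~ f0 + a*(x-x0) + b*(y-y0); return (a, b)."""
    sxx = sxy = syy = sxf = syf = 0.0
    for x, y, fv in neighbors:
        dx, dy, df = x - x0, y - y0, fv - f0
        sxx += dx * dx
        sxy += dx * dy
        syy += dy * dy
        sxf += dx * df
        syf += dy * df
    det = sxx * syy - sxy * sxy
    return ((syy * sxf - sxy * syf) / det,
            (sxx * syf - sxy * sxf) / det)

def f(x, y):
    return 3.0 * x + 2.0 * y  # test field with gradient (3, 2)

rng = random.Random(0)
pts = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(20)]
nbrs = [(x, y, f(x, y)) for x, y in pts]
print(plane_fit_gradient(0.0, 0.0, f(0.0, 0.0), nbrs))
```

For a linear field the fit recovers the gradient exactly, whatever the (irregular) neighbor positions, which is exactly the property a meshfree, advected-point method relies on.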

A more detailed discussion of the differencing scheme used for the time integration is included in a separate section. This is the scheme obtained by linear extrapolation of computed time derivatives to a time value of t₀ + aΔt, where t₀ is the value of time at the beginning of the considered time step Δt and a is a parameter that can be used to control the properties of the scheme. When a value of a between ½ and 1 is chosen, a scheme is obtained that damps the high-frequency motions in a way similar to the Matsuno scheme, but needs somewhat less computer time and, with the same damping intensity, has a higher accuracy for the low-frequency, meteorologically significant motions.
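One natural reading of this scheme (a hedged sketch, not necessarily the paper's exact formulation) is a forward predictor followed by a step using the derivative extrapolated to t₀ + aΔt, i.e. y* = y + Δt f(y), then y⁺ = y + Δt[(1−a)f(y) + a f(y*)]. Its damping can be seen directly from the amplification factor for the oscillation equation dy/dt = iωy:

```python
# For the oscillation equation dy/dt = i*omega*y, the two-stage scheme
#   y*  = y + dt*f(y)
#   y+  = y + dt*((1 - a)*f(y) + a*f(y*))
# (one natural reading of "extrapolation of the derivative to
# t0 + a*dt"; an assumption, not a quote from the paper) has the
# amplification factor A = 1 + i*p - a*p**2, with p = omega*dt.
# a = 1 is a Matsuno-like fully damping choice; 1/2 < a <= 1 damps
# the high frequencies.

def amplification(a, p):
    return abs(1 + 1j * p - a * p * p)

for a in (0.5, 0.75, 1.0):
    print(a, amplification(a, 0.5))
```

For a = ½ the scheme is (weakly) amplifying, while for a between ½ and 1 high-frequency modes are damped, increasingly so as a approaches 1, consistent with the behavior described above.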

Using the described method, a 4-day experimental forecast has been made with a global, homogeneous, primitive-equation model, starting with a stationary Haurwitz-Neamtan solution. The final geopotential height map showed no visible phase errors and only a modest accumulation of truncation errors and effects of numerical smoothing mechanisms. Two shorter experiments have also been made to analyze the effects of space resolution and of damping in the time differencing. It is felt that the experimental results strongly encourage further testing and investigation of the proposed method.

Full access
Fedor Mesinger
and
Nedeljka Mesinger

Abstract

An earlier attempt to estimate the effect of hail suppression by silver iodide seeding in eastern parts of Yugoslavia, based on hail-frequency data at stations having professional observers, is extended here. Hail-frequency data only are considered, rather than the hail- and the ice pellet-frequency data taken together. The period of the data is extended from 37 to 40 years. A statistical analysis of the probability of the observed result being obtained by chance is made, based on the permutation test; a sensitivity test of the possible observer-subjectivity effect is done; and several tests of and corrections for any climate and observing practices change are also made.

The ratio of the average hail frequency during the seeding activities in the area of the station and the average frequency before these activities shows a reduction in the hail frequency by about 25%. A synthetic histogram of the frequency ratios resulting from 10 000 random permutations (station by station) of the observed frequency data gave the probability of this observed frequency reduction being obtained by chance, if in fact no positive effect of seeding or climate change existed, of about 2 in 10 000.
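The station-by-station permutation test described above can be sketched in a few lines. The yearly counts below are invented for illustration only (not the Yugoslav records): within each station the "before" and "seeded" years are pooled and reshuffled, and the seeded/before ratio of average frequencies is recomputed to build the null histogram.

```python
import random

# Minimal sketch of the station-by-station permutation test (the yearly
# hail counts below are invented for illustration, not observed data).
# Null hypothesis: seeding has no effect, so the labels "before" and
# "seeded" are exchangeable within each station.

def freq_ratio(stations):
    seeded = sum(sum(s["seeded"]) / len(s["seeded"]) for s in stations)
    before = sum(sum(s["before"]) / len(s["before"]) for s in stations)
    return seeded / before

def permutation_p(stations, nperm=10_000, seed=7):
    rng = random.Random(seed)
    observed = freq_ratio(stations)
    hits = 0
    for _ in range(nperm):
        shuffled = []
        for s in stations:
            pooled = s["before"] + s["seeded"]
            rng.shuffle(pooled)
            shuffled.append({"before": pooled[:len(s["before"])],
                             "seeded": pooled[len(s["before"]):]})
        if freq_ratio(shuffled) <= observed:  # as extreme a reduction
            hits += 1
    return observed, hits / nperm

# Invented two-station example with lower frequencies in seeded years.
stations = [
    {"before": [3, 4, 2, 5, 3, 4], "seeded": [2, 1, 3, 2]},
    {"before": [5, 6, 4, 5, 6, 4], "seeded": [4, 3, 4, 3]},
]
print(permutation_p(stations))
```

The fraction of permuted ratios at least as small as the observed one plays the role of the "2 in 10 000" probability quoted above.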

A sensitivity test of the observer-subjectivity effect was made by removing from the available sample of 23 stations the station showing the greatest reduction in hail frequency. This decreased the apparent effectiveness from about 25% to about 23%, and the probability of the positive result became 4 in 10 000.

Tests as well as corrections for the effects of possible climate fluctuations and/or a change in hail-observing practices were performed by using the two neighboring regions of Vojvodina and Bosnia and Herzegovina, which had no hail suppression programs, as the control area. The effectiveness calculations as well as the permutation tests were then repeated using “corrected” data. These various corrections reduced the effectiveness of the seeding activities from about 25% to between 22% and 15% and increased the probability of the positive result being obtained by chance to between about 6 and 141 in 10 000. Thus, it appears unlikely that the seeding activities have no positive effect whatsoever; and the reduction in hail frequency seems to be of the order of 15%–20%.

Full access
Dušanka Županski
and
Fedor Mesinger

Abstract

The benefits of assimilation of precipitation data had been demonstrated in diabatic initialization and nudging-type experiments some years ago. In four-dimensional variational (4DVAR) data assimilation, however, precipitation data have not yet been used. To correctly assimilate precipitation data by the 4DVAR technique, the problems related to the first-order discontinuities in the “full-physics” forecast model should be solved first. To address this problem in the full-physics regional NMC eta forecast model, a modified, more continuous version of the Betts-Miller cumulus convection scheme is defined and examined as a possible solution.

The 4DVAR data assimilation experiments are performed using the conventional data (in this case, analyses of T, ps, u, v, and q) and the precipitation data (the analysis of 24-h accumulated precipitation). The full-physics NMC eta model and the adjoint model with convective processes are used in the experiments. The control variable of the minimization problem is defined to include the initial conditions and the model's systematic error parameter. An extreme synoptic situation from June 1993, with strong effects of precipitation over the United States, is chosen for the experiments. The results of the 4DVAR experiments show convergence of the minimization process within 10 iterations and an improvement of the precipitation forecast, during and after the data assimilation period, when using the modified cumulus convection scheme and the precipitation data. In particular, the 4DVAR method outperforms the optimal interpolation method by improving the precipitation forecast.

Full access
Fedor Mesinger
and
Russell E. Treadon

Abstract

It is suggested that there are two major problems with the “standard” methods of reducing pressure to sea level based on the surface temperature or the lowest-layer(s) temperature of a numerical model. The first is that using air temperatures above elevated terrain for reducing pressure to sea level is in conflict with the presumed objective of the reduction. The authors take this to be the derivation of a pressure field appropriate to sea level that, to the extent possible, maintains the shape of the constant-elevation isobars and reflects the horizontal changes in the magnitudes of the horizontal pressure gradients, as these exist at the ground surface. The other problem is that emerging evidence shows that, with the increasing realism in the representation of mountains in numerical models, the performance of the standard reduction methods is about to deteriorate to the point of becoming unacceptable.

Fortunately, as proposed earlier by the first author, an alternative exists that is both simple and consistent with the objective of the reduction as presumed above. It is to replace the downward extrapolation of temperature by the horizontal interpolation of (virtual) temperature where the temperatures are given at the sides of mountains.
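Both routes end in the same hydrostatic step: once a mean (virtual) temperature for the fictitious below-ground column is chosen, the sea level pressure follows. The sketch below uses textbook constants and the conventional lapse-rate extrapolation for that mean temperature; it is a schematic of the shared reduction formula, not NMC's Shuell code, and the horizontal method would differ only in supplying a horizontally interpolated temperature instead.

```python
import math

# Sketch of the hydrostatic reduction step shared by the methods: given
# a column-mean (virtual) temperature Tbar for the fictitious
# below-ground column, p_sl = p_sfc * exp(g * z_sfc / (Rd * Tbar)).
# The conventional route builds Tbar by downward extrapolation of the
# surface temperature at a standard lapse rate; the "horizontal" method
# would instead supply temperatures interpolated from the mountain
# sides. (Textbook constants; not NMC's Shuell code.)

G = 9.80665      # gravity, m s-2
RD = 287.05      # dry-air gas constant, J kg-1 K-1
GAMMA = 0.0065   # standard lapse rate, K m-1

def slp_from_column(p_sfc_hpa, z_sfc_m, t_bar_k):
    """Hydrostatic reduction with a given column-mean temperature."""
    return p_sfc_hpa * math.exp(G * z_sfc_m / (RD * t_bar_k))

def slp_lapse_rate(p_sfc_hpa, z_sfc_m, t_sfc_k):
    """Conventional route: Tbar from downward lapse-rate extrapolation."""
    t_bar = t_sfc_k + 0.5 * GAMMA * z_sfc_m
    return slp_from_column(p_sfc_hpa, z_sfc_m, t_bar)

print(slp_lapse_rate(700.0, 3000.0, 270.0))
```

The formula makes the sensitivity plain: a warmer assumed column lowers the reduced pressure, which is why the choice of below-ground temperature, extrapolated versus horizontally interpolated, dominates the behavior of the reduction over high terrain.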

Performance of the “horizontal” reduction method is here compared against the so-called Shuell method, which is a conventional part of the U.S. National Meteorological Center's postprocessing packages. This is done by examining the sea level pressure centers of initial conditions and forecasts, at 12-h intervals, of the National Meteorological Center's eta model, as obtained via the Shuell and horizontal reduction methods. The comparison is done for a sample of late summer initial conditions and forecasts verifying at 16 consecutive 0000 and 1200 UTC initial times. Note that the Shuell reduction method was specifically designed to improve upon a standard lapse rate reduction to sea level during the warm season.

In terms of the agreement with the analyst-assessed values, the two methods showed an overall comparable performance. The horizontal reduction method performed much better for Mexican heat lows, while the Shuell method was clearly superior in reproducing the analyzed values at high centers over the United States and Canadian highlands. The horizontal reduction method performed somewhat better in depicting the values at the centers of lows over the United States and Canadian mountainous region of the study. As its main benefit, the horizontal reduction method eliminated formidable noise and artifact problems of the Shuell reduction method without resorting to smoothing devices.

Full access
Yongkang Xue
,
Ratko Vasic
,
Zavisa Janjic
,
Fedor Mesinger
, and
Kenneth E. Mitchell

Abstract

This study investigates the capability of the dynamic downscaling method (DDM) in a North American regional climate study using the Eta/Simplified Simple Biosphere (SSiB) Regional Climate Model (RCM). The main objective is to understand whether the Eta/SSiB RCM is capable of simulating North American regional climate features, mainly precipitation, at different scales under imposed boundary conditions. The summer of 1998 was selected for this study and the summers of 1993 and 1995 were used to confirm the 1998 results. The observed precipitation, NCEP–NCAR Global Reanalysis (NNGR), and North American Regional Reanalysis (NARR) were used for evaluation of the model’s simulations and/or as lateral boundary conditions (LBCs). A spectral analysis was applied to quantitatively examine the RCM’s downscaling ability at different scales.

The simulations indicated that the choices of domain size, LBCs, and grid spacing were crucial for the DDM. Several tests with different domain sizes indicated that the model in the North American climate simulation was particularly sensitive to its southern boundary position because of the importance of moisture transport by the southerly low-level jet (LLJ) in summer precipitation. Among these tests, only the RCM with 32-km resolution and NNGR LBCs, or with 80-km resolution and NARR LBCs, in conjunction with appropriate domain sizes, was able to properly simulate precipitation and other atmospheric variables (especially humidity over the southeastern United States) during all three summer months, to produce a better spectral power distribution than that associated with the imposed LBC (in the 32-km case), and to retain spectral power at large wavelengths (in the 80-km case). The analysis suggests that there might be strong atmospheric components of high-frequency variability over the Gulf of Mexico and the southeastern United States.
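The kind of spectral check described above asks how much variance a field carries at each scale, so a downscaled run can be compared against its driving large-scale field. A minimal one-dimensional sketch, on a synthetic signal rather than model fields:

```python
import cmath
import math

# Minimal sketch of a spectral-power check: the DFT power spectrum of a
# 1-D field shows how variance is distributed across scales. (Synthetic
# signal for illustration; the paper analyzed model fields.)

def power_spectrum(x):
    """DFT power |X_k|^2 / N by direct summation (fine for small N)."""
    n = len(x)
    spec = []
    for k in range(n // 2 + 1):
        s = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n))
        spec.append(abs(s) ** 2 / n)
    return spec

n = 64
# Large-scale wave (k = 2) plus a weaker small-scale wave (k = 10).
signal = [math.sin(2 * math.pi * 2 * j / n)
          + 0.3 * math.sin(2 * math.pi * 10 * j / n) for j in range(n)]
spec = power_spectrum(signal)
print(spec.index(max(spec)))  # dominant wavenumber
```

A successful downscaling run would retain the power of the driving field at large wavelengths while adding realistic power at the smaller scales the coarse LBCs cannot resolve.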

Full access
Fedor Mesinger
,
Zaviša I. Janjić
,
Slobodan Ničković
,
Dušanka Gavrilov
, and
Dennis G. Deaven

Abstract

The problem of the pressure gradient force error in the case of the terrain-following (sigma) coordinate does not appear to have a solution. The problem is not one of truncation error in the calculation of space derivatives involved. Thus, with temperature profiles resulting in large errors, an increase in vertical resolution may not reduce and is even likely to increase the error. Therefore, an approach abandoning the sigma system has been proposed. It involves the use of “step” mountains with coordinate surfaces prescribed to remain at fixed elevations at places where they touch (and define) or intersect the ground surface. Thus, the coordinate surfaces are quasi-horizontal, and the sigma system problem is not present. At the same time, the simplicity of the sigma system is maintained.

In this paper, design of the model (“silhouette” averaged) mountains, properties of the wall boundary condition, and the scheme for calculation of the potential to kinetic energy conversion are presented. For an advection scheme achieving a strict control of the nonlinear energy cascade on the semistaggered grid, it is demonstrated that a straightforward no-slip wall boundary condition maintains conservation properties of the scheme with no vertical walls, which are important from the point of view of the control of this energy cascade from large to small scales. However, with that simple boundary condition considered, momentum is not conserved. The scheme conserving energy in conversion between the potential and kinetic energy, given earlier for the one-dimensional case, is extended to two dimensions.

Results of real data experiments are described, testing the performance of the resulting “step-mountain” model. An attractive feature of a step-mountain (“eta”) model is that it can easily be run as a sigma system model, the only difference being the definition of ground surface grid point values of the vertical coordinate. This permits a comparison of the sigma and the eta formulations. Two experiments of this kind have been made, with a model version including realistic steep mountains (steps at 290, 1112 and 2433 m). They have both revealed a substantial amount of noise resulting from the sigma, as compared to the eta, formulation. One of these experiments, especially with the step mountains, gave a rather successful simulation of the perhaps difficult “historic” Buzzi–Tibaldi case of Genoa lee cyclogenesis. A parallel experiment showed that, starting with the same initial data, one obtains no cyclogenesis without mountains. Still, the mountains experiment did simulate the accompanying midtropospheric cutoff, a phenomenon that apparently has not been reproduced in previous simulations of mountain-induced Genoa lee cyclogeneses.

For a North American limited area region, experimental step-mountain simulations were performed for a case of March 1984, involving development of a secondary storm southeast of the Appalachians. Neither the then operational U.S. National Meteorological Center's Limited Area Forecast Model (LFM) nor the recently introduced Nested Grid Model (NGM) were successful in simulating the redevelopment. On the other hand, the step-mountain model, with a space resolution set up to mimic that of NGM, successfully simulated the ridging that indicates the redevelopment.

Full access
Fedor Mesinger
,
Thomas L. Black
,
David W. Plummer
, and
John H. Ward

Abstract

A step-mountain (eta) coordinate limited-area model is being developed at the National Meteorological Center (NMC) to improve forecasts of severe weather and other mesoscale phenomena. Precipitation forecasts are reviewed for the 20-day period 16 June–5 July 1989. This period was chosen not only because of intense warm-season precipitation, including that of Tropical Storm Allison, but also because two sets of forecasts from NMC's nested grid model (NGM) were available for comparison, one using the operational Kuo convection and the other using the eta model's Betts-Miller convection scheme. Thus, a three-way model comparison was possible.

Particular attention is paid to the forecasts of precipitation maxima. With verification performed on accumulated 24-h amounts averaged over the limited fine mesh (LFM) model grid boxes, the eta model shows skill at the highest observed precipitation category in 14 out of 58 verification periods, about one fourth of all cases. The forecasts also show a high degree of consistency in that successful forecasts starting from different initial times are produced for the same verification period.

Although the eta model was less successful than the NGM in predicting the lightest precipitation category, it demonstrated noted improvement in the 0.50-inch and greater categories, regardless of the convection scheme used in the NGM. Evidence is presented which indicates that the greater accuracy of the eta model is primarily a result of its space differencing schemes.

Full access