Search Results

You are looking at 1–10 of 73 items for:

  • Author or Editor: Eugenia Kalnay
Eugenia Kálnay-Rivas

Abstract

Although there is some ambiguity in the description of the U.S. Navy Fleet fourth-order primitive-equation model developed by Mihok and Kaitala (1976), the finite differences used for the continuity equation and pressure gradient term appear to contain second-order errors comparable to those of the original second-order model, as well as larger fourth-order errors. In the thermodynamic, moisture, and momentum equations, there is partial cancellation of second-order errors, leading to a better approximation of the phase speed. However, in regions with strong horizontal variations of wind, the second-order errors in these equations are serious. These errors are due to the neglect of the truncation errors introduced by horizontal averaging on the staggered grid.
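
As a generic illustration of the kind of error involved (a standard Taylor-expansion argument, not the specific analysis of this comment), the two-point average used on a staggered grid carries a second-order truncation term:

    \overline{u}^{x}(x) = \tfrac{1}{2}\Big[u\big(x+\tfrac{\Delta x}{2}\big) + u\big(x-\tfrac{\Delta x}{2}\big)\Big]
                        = u(x) + \frac{(\Delta x)^{2}}{8}\,\frac{\partial^{2} u}{\partial x^{2}} + O\big((\Delta x)^{4}\big),

so a scheme that treats the averaged quantity as if it were the unaveraged one retains an O(Δx²) error, no matter how accurately the remaining differences are computed.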

Full access
Eugenia Kálnay-Rivas

Abstract

The “box-type” finite-difference method includes a weighted average of the pressure gradient with weights proportional to the areas of the grid-box walls. It is shown that this averaging introduces first-order truncation errors near the poles. An example is presented in which the relative error is of zero order and the scheme produces large distortions in the solution at high latitudes.
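
A generic Taylor-expansion sketch of how such weighting can degrade accuracy (an illustration under simplified assumptions, not the derivation of the note): a weighted two-point average with unequal weights A_+ and A_− retains a first-order term,

    \frac{A_{+}\,p\big(\varphi+\tfrac{\Delta\varphi}{2}\big) + A_{-}\,p\big(\varphi-\tfrac{\Delta\varphi}{2}\big)}{A_{+}+A_{-}}
        = p(\varphi) + \frac{A_{+}-A_{-}}{A_{+}+A_{-}}\,\frac{\Delta\varphi}{2}\,\frac{\partial p}{\partial\varphi} + O\big((\Delta\varphi)^{2}\big).

With weights proportional to wall areas, A_± ∝ cos(φ ± Δφ/2), the ratio (A_+ − A_−)/(A_+ + A_−) ≈ −(Δφ/2) tan φ, which is of order one at the grid row adjacent to the pole, so the truncation error there is only first order.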

Full access
Zoltan Toth and Eugenia Kalnay

Abstract

The breeding method has been used to generate perturbations for ensemble forecasting at the National Centers for Environmental Prediction (formerly known as the National Meteorological Center) since December 1992. At that time a single breeding cycle with a pair of bred forecasts was implemented. In March 1994, the ensemble was expanded to seven independent breeding cycles on the Cray C90 supercomputer, and the forecasts were extended to 16 days. This provides 17 independent global forecasts valid for two weeks every day.

For efficient ensemble forecasting, the initial perturbations to the control analysis should adequately sample the space of possible analysis errors. It is shown that the analysis cycle is like a breeding cycle: it acts as a nonlinear perturbation model upon the evolution of the real atmosphere. The perturbation (i.e., the analysis error), carried forward in the first-guess forecasts, is “scaled down” at regular intervals by the use of observations. Because of this, growing errors associated with the evolving state of the atmosphere develop within the analysis cycle and dominate subsequent forecast error growth.

The breeding method simulates the development of growing errors in the analysis cycle. A difference field between two nonlinear forecasts is carried forward (and scaled down at regular intervals) upon the evolving atmospheric analysis fields. By construction, the bred vectors are superpositions of the leading local (time-dependent) Lyapunov vectors (LLVs) of the atmosphere. An important property is that all random perturbations assume the structure of the leading LLVs after a transient period, which for large-scale atmospheric processes is about 3 days. When several independent breeding cycles are performed, the phases and amplitudes of individual (and regional) leading LLVs are random, which ensures quasi-orthogonality among the global bred vectors from independent breeding cycles.

Experimental runs with a 10-member ensemble (five independent breeding cycles) show that the ensemble mean is superior to an optimally smoothed control and to randomly generated ensemble forecasts, and compares favorably with the medium-range double-horizontal-resolution control. Moreover, a potentially useful relationship between ensemble spread and forecast error is found in both the spatial and time domains. The improvement in skill of 0.04–0.11 in pattern anomaly correlation for forecasts at and beyond 7 days, together with the potential for estimating forecast skill, indicates that this system is a useful operational forecast tool.

The two methods used so far to produce operational ensemble forecasts—that is, breeding and the adjoint (or “optimal perturbations”) technique applied at the European Centre for Medium-Range Weather Forecasts—have several significant differences, but they both attempt to estimate the subspace of fast growing perturbations. The bred vectors provide estimates of fastest sustainable growth and thus represent probable growing analysis errors. The optimal perturbations, on the other hand, estimate vectors with fastest transient growth in the future. A practical difference between the two methods for ensemble forecasting is that breeding is simpler and less expensive than the adjoint technique.
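
As an informal illustration of the breeding idea (a toy sketch that uses the Lorenz (1963) system as a stand-in for the forecast model; the rescaling amplitude and interval are arbitrary and not those of the NCEP implementation):

    import numpy as np

    def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Lorenz (1963) tendencies, standing in for a nonlinear forecast model.
        return np.array([sigma * (x[1] - x[0]),
                         x[0] * (rho - x[2]) - x[1],
                         x[0] * x[1] - beta * x[2]])

    def step(x, dt=0.01):
        # One fourth-order Runge-Kutta step.
        k1 = lorenz63(x)
        k2 = lorenz63(x + 0.5 * dt * k1)
        k3 = lorenz63(x + 0.5 * dt * k2)
        k4 = lorenz63(x + dt * k3)
        return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    rng = np.random.default_rng(0)
    control = np.array([1.0, 1.0, 1.0])
    for _ in range(1000):                         # spin up onto the attractor
        control = step(control)

    amplitude = 1e-3                              # fixed rescaling amplitude
    perturbed = control + amplitude * rng.standard_normal(3)

    for cycle in range(200):                      # breeding cycles
        for _ in range(25):                       # advance both nonlinear runs
            control = step(control)
            perturbed = step(perturbed)
        bred = perturbed - control                # difference of two nonlinear forecasts
        bred *= amplitude / np.linalg.norm(bred)  # "scale down" at regular intervals
        perturbed = control + bred                # reintroduce the bred perturbation

    # After a transient, the bred vector points along the leading local
    # Lyapunov direction of the trajectory.
    print(bred / np.linalg.norm(bred))

Once the initial random perturbation is forgotten, repeating the rescale-and-reinsert cycle is what makes the difference field converge toward the fastest-growing, flow-dependent directions.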

Full access
Eugenia Kalnay and Masao Kanamitsu

Abstract

In atmospheric models that include vertical diffusion and surface fluxes of heat and moisture it is common to observe large-amplitude “fibrillations” associated with these nonlinear damping terms. In this paper this phenomenon is studied through the analysis of a simple nonlinear damping equation, ∂X/∂t = −(KX^p)X + S. It is concluded that the behavior of several currently used time schemes for strongly nonlinear damping equations can be quite pathological, producing either large-amplitude oscillations or even nonoscillatory but incorrect solutions. Also presented are new simple schemes that are easy to implement and have a much wider range of stability. These schemes are applied in the new National Meteorological Center (NMC) spectral model.
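
A minimal numerical sketch of the kind of pathology described (a toy comparison of my own, not the schemes analyzed in the paper): with a fully explicit treatment of the damping term, the solution of ∂X/∂t = −(KX^p)X + S oscillates with rapidly growing amplitude once KX^pΔt is large, whereas a simple scheme that freezes the coefficient KX^p at the old time level but treats the linear factor implicitly remains well behaved.

    # Toy nonlinear damping equation dX/dt = -(K * X**p) * X + S
    K, p, S = 10.0, 2.0, 1.0
    dt, nsteps, X0 = 0.2, 5, 2.0

    def explicit_euler(x):
        # Fully explicit damping term: unstable when K * x**p * dt is large.
        return x + dt * (-(K * x**p) * x + S)

    def coefficient_frozen_implicit(x):
        # Illustrative semi-implicit variant: the coefficient K * x**p is
        # evaluated at the old time level, the linear factor at the new one,
        # i.e. x_new = x + dt * (-(K * x**p) * x_new + S).
        return (x + dt * S) / (1.0 + dt * K * x**p)

    x_exp, x_imp = X0, X0
    for n in range(nsteps):
        x_exp = explicit_euler(x_exp)
        x_imp = coefficient_frozen_implicit(x_imp)
        print(f"step {n + 1}: explicit = {x_exp:12.4g}   implicit = {x_imp:7.4f}")

    # The explicit values flip sign and grow without bound, while the
    # semi-implicit values settle toward the equilibrium (K * X**p) * X = S.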

Full access
Eugenia Kalnay and Amnon Dalcher

Abstract

We have shown that it is possible to predict the skill of numerical weather forecasts—a quantity which is variable from day to day and region to region. This has been accomplished using as predictor the dispersion (measured by the average correlation) between members of an ensemble of forecasts started from five different analyses. The analyses had been previously derived for satellite data impact studies and included, in the Northern Hemisphere, moderate perturbations associated with the use of different observing systems.

When the Northern Hemisphere was used as a verification region, the prediction of skill was rather poor. This is because such a large area usually contains regions with excellent forecasts as well as regions with poor forecasts and does not allow for discrimination between them. However, when we used regional verifications, the ensemble forecast dispersion provided a very good prediction of the quality of the individual forecasts.

Although the period covered in this study is only one month long, it includes cases with a wide variation of skill in each of the four regions considered. The method could be tested in an operational context using ensembles of lagged forecasts and longer time periods in order to assess its applicability to different areas and weather regimes.
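
A schematic of the spread-skill idea (a synthetic toy with made-up data, intended only to show how a dispersion statistic and a verification statistic are related; it is not the verification procedure of the paper):

    import numpy as np

    rng = np.random.default_rng(1)

    def anom_corr(a, b):
        # Pattern anomaly correlation between two anomaly fields.
        a, b = a - a.mean(), b - b.mean()
        return float(a @ b / np.sqrt((a @ a) * (b @ b)))

    npoints, nmembers, ncases = 500, 5, 40
    spread, skill = [], []

    for _ in range(ncases):
        truth = rng.standard_normal(npoints)      # "verifying analysis" anomaly
        err_level = rng.uniform(0.3, 2.0)         # case-dependent predictability
        members = [truth + err_level * rng.standard_normal(npoints)
                   for _ in range(nmembers)]

        # Dispersion: average anomaly correlation between ensemble members.
        pair_corrs = [anom_corr(members[i], members[j])
                      for i in range(nmembers) for j in range(i + 1, nmembers)]
        spread.append(np.mean(pair_corrs))

        # Skill: anomaly correlation of one member (the "control") with truth.
        skill.append(anom_corr(members[0], truth))

    # Cases in which the members agree closely tend to be the well-forecast ones.
    print("spread-skill correlation:", round(float(np.corrcoef(spread, skill)[0, 1]), 2))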

Full access
Eugenia Kálnay de Rivas

Abstract

The results of two-dimensional simulations of the deep circulation of Venus are presented. They prove that the high surface temperature can only be explained by the greenhouse effect, and that Goody and Robinson's dynamical model is not valid. Very long time integrations, up to a time comparable with the radiative relaxation time, confirm these results. Analytical radiative equilibrium solutions for a semi-grey atmosphere, both with and without an internal heat source, are presented. It is shown that the greenhouse effect is sufficient to produce the high surface temperature if τ_T* ≫ 100 and s = τ_S*/τ_T* ≲ 0.005. This result is still valid in the presence of an internal heat source of intensity compatible with observations.
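
As a rough plausibility check of the optical-depth requirement quoted above (a standard grey radiative-equilibrium estimate with round numbers for Venus, not the semi-grey solutions of the paper):

    T^{4}(\tau) \approx T_{e}^{4}\left(\tfrac{1}{2} + \tfrac{3\tau}{4}\right)
    \quad\Longrightarrow\quad
    T_{\mathrm{sfc}} \approx T_{e}\left(\tfrac{1}{2} + \tfrac{3\tau^{*}}{4}\right)^{1/4}
    \approx 230\,\mathrm{K}\times(75.5)^{1/4} \approx 680\,\mathrm{K}
    \quad\text{for } \tau^{*} = 100,

which is of the order of the observed ~730 K surface temperature, whereas an optical depth of order unity would fall far short of it.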

A two-dimensional version of a three-dimensional model is used to test the validity of the new mechanism proposed by Gierasch to explain the 4-day circulation. Numerical experiments with horizontal viscosities ν_H = 10¹¹–10¹² cm² s⁻¹ failed to show strong zonal velocities, even for the case of large Prandtl numbers. It is observed that the dissipation of angular momentum introduced by the strong horizontal diffusion more than compensates for the upward transport of angular momentum due to the Hadley cell.

Preliminary three-dimensional calculations show a tendency to develop strong small-scale circulations.

Full access
Eugenia Kálnay de Rivas

Abstract

No abstract available.

Full access
Eugenia Kálnay de Rivas

Abstract

The deep circulation of the atmosphere of Venus is simulated by means of two-dimensional numerical models. Two extreme cases are considered: first, rotation is neglected and the subsolar point is assumed to be fixed; second (and more realistically), the solar heating is averaged over a Venus solar day and rotation is included. For each case a Boussinesq model, in which density variations are neglected except when coupled with gravity, and a quasi-Boussinesq model, which includes a basic stratification of density and a semi-gray treatment of radiation, are developed. The results obtained with the Boussinesq models are similar to those obtained by Goody and Robinson and by Stone. However, when the stratification of density is included and most of the solar radiation is absorbed near the top, the large-scale circulation is confined to the upper layers of the atmosphere during the 4×10⁷ sec of simulated time. We cannot be sure that on a much longer time scale (10⁹ sec) the circulation will not penetrate the interior, but our results suggest that radiation will tend to make the lower atmosphere highly stable. When solar radiation is allowed to penetrate the atmosphere, so that at the equator 6% of the incoming solar radiation reaches the surface, then the combination of a more deeply driven circulation and a partial greenhouse effect is able to maintain an adiabatic stratification.

The effect of symmetrical solar heating is to produce direct Hadley cells in each hemisphere with small reverse cells near the poles. Poleward angular momentum transport in the upper atmosphere produces a shear in the zonal motion with a maximum retrograde velocity of the order of 10 m sec⁻¹ at the top of the atmosphere.

The numerical integrations were performed using non-uniform grids to allow adequate resolution of the boundary layers.
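
For concreteness, one common way to build such a grid (an illustrative stretching, not necessarily the transformation used in the paper) is to difference on a uniform computational coordinate and map it to a physical coordinate that clusters points where the boundary layer sits:

    import numpy as np

    n = 11
    xi = np.linspace(0.0, 1.0, n)   # uniform computational coordinate
    z = xi**2                       # physical coordinate, refined near z = 0

    # Grid spacing shrinks smoothly toward the boundary at z = 0.
    print(np.round(np.diff(z), 3))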

Full access
Ming Cai and Eugenia Kalnay

Abstract

This paper shows analytically that a reanalysis made with a frozen model can detect the warming trend due to an increase of greenhouse gases within the atmosphere at nearly its full strength (at least 95% of it) after a short transient (less than 100 analysis cycles). The analytical proof is obtained by taking into consideration the following three possible deficiencies in the model used to create the first-guess fields: (i) the physical processes responsible for the observed trend (e.g., an increase of greenhouse gases) are completely absent from the model, (ii) the first-guess fields are affected by an initial drift caused by the imbalance between the model equilibrium and the analysis that contains trends due to the observations, and (iii) the model used in the reanalysis has a constant model bias. The imbalance contributes to a systematic reduction in the reanalysis trend compared to the observations. The analytic derivation herein shows that this systematic reduction can be very small (less than 5%) when the observations are available for twice-daily assimilation. Moreover, the frequent analysis cycle is essential to compensate for the impact of the relatively poor spatial coverage of the observational network, which effectively yields smaller weights assigned to observations in a global data assimilation system.
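
A toy scalar illustration of the argument (a sketch of my own with made-up numbers, not the paper's analytic derivation): a trend-free "frozen model" supplies the first guess, the observations carry a linear warming trend, and the analysis update x_a = x_b + w(y − x_b) is repeated every cycle.

    import numpy as np

    ncycles = 200
    obs_trend = 0.01     # warming per analysis cycle contained in the observations
    w = 0.5              # weight given to the observation in the analysis
    drift = 0.05         # drift of the first guess toward the model's trend-free climatology

    rng = np.random.default_rng(0)
    analysis = np.zeros(ncycles)
    x_a = 0.0
    for k in range(1, ncycles):
        x_b = (1.0 - drift) * x_a                           # frozen-model first guess
        y = obs_trend * k + 0.1 * rng.standard_normal()     # trending observation
        x_a = x_b + w * (y - x_b)                           # analysis update
        analysis[k] = x_a

    # Trend recovered by the analyses after the initial transient, relative to the
    # trend present in the observations (about 0.95 with these illustrative settings).
    fitted = np.polyfit(np.arange(50, ncycles), analysis[50:], 1)[0]
    print(f"recovered / observed trend = {fitted / obs_trend:.2f}")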

Other major issues about using reanalysis for a long-term trend analysis, particularly the impact of the major changes in the global observing system that took place in the 1950s and in 1979, are not addressed. Here it is merely proven mathematically that using a frozen model in a reanalysis does not cause significant harm to the fidelity of the long-term trend in the reanalysis.

Full access
Eugenia Kalnay and Roy Jenne
Full access