Search Results

Showing 1–10 of 73 items for Author or Editor: Eugenia Kalnay
Eugenia Kálnay-Rivas

Abstract

Although there is some ambiguity in the description of the U.S. Navy Fleet fourth-order primitive-equation model developed by Mihok and Kaitala (1976), the finite differences used for the continuity equation and pressure gradient term appear to contain second-order errors comparable to those of the original second-order model, and larger fourth-order errors. In the thermodynamic, moisture, and momentum equations, there is partial cancellation of second-order errors, leading to a better approximation of the phase speed. However, in regions with strong horizontal variations of wind, the second-order errors in these equations are serious. These errors are due to the neglect of the truncation errors introduced by horizontal averaging in the staggered grid.
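
A toy one-dimensional calculation (our illustration, not part of the original paper) shows how averaging a field from a staggered grid before differencing caps the accuracy at second order, even when the difference stencil itself is fourth-order:

```python
import numpy as np

def max_deriv_error(n, average=True):
    """Max error of a 4th-order centered derivative of sin(x) on n intervals.
    If average=True, the field is first averaged from staggered (half-integer)
    points onto the regular grid, mimicking the horizontal averaging above."""
    h = 2 * np.pi / n
    x = h * np.arange(n + 1)
    if average:
        u = 0.5 * (np.sin(x + h / 2) + np.sin(x - h / 2))  # introduces an O(h^2) error
    else:
        u = np.sin(x)
    # 4th-order centered first derivative, evaluated at interior points x[2:-2]
    d = (-u[4:] + 8 * u[3:-1] - 8 * u[1:-3] + u[:-4]) / (12 * h)
    return np.max(np.abs(d - np.cos(x[2:-2])))

# Observed convergence order when halving h: ~4 without averaging, ~2 with it.
order_plain = np.log(max_deriv_error(64, False) / max_deriv_error(128, False)) / np.log(2)
order_avg = np.log(max_deriv_error(64, True) / max_deriv_error(128, True)) / np.log(2)
```

The averaged field equals cos(h/2)·sin(x), so the O(h²) averaging error survives no matter how accurate the difference operator is.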

Full access
Eugenia Kálnay de Rivas

Abstract

No abstract available.

Full access
Eugenia Kálnay-Rivas

Abstract

The “box-type” finite-difference method includes a weighted average of the pressure gradient with weights proportional to the surface of the grid walls. It is shown that this averaging introduces first-order truncation errors near the poles. An example is shown in which the relative error is of zero order and the scheme produces large distortions in the solution at high latitudes.

Full access
Zoltan Toth and Eugenia Kalnay

On 7 December 1992, the National Meteorological Center (NMC) started operational ensemble forecasting. The ensemble forecast configuration implemented provides 14 independent forecasts every day verifying on days 1–10. In this paper we briefly review existing methods for creating perturbations for ensemble forecasting. We point out that a regular analysis cycle is a “breeding ground” for fast-growing modes. Based on this observation, we devise a simple and inexpensive method to generate growing modes of the atmosphere.

The new method, “breeding of growing modes,” or BGM, consists of one additional, perturbed short-range forecast, introduced on top of the regular analysis in an analysis cycle. The difference between the control and perturbed six-hour (first guess) forecast is scaled back to the size of the initial perturbation and then reintroduced onto the new atmospheric analysis. Thus, the perturbation evolves along with the time-dependent analysis fields, ensuring that after a few days of cycling the perturbation field consists of a superposition of fast-growing modes corresponding to the contemporaneous atmosphere, akin to local Lyapunov vectors.
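
A minimal sketch of one such cycle, with the Lorenz (1963) equations standing in for the forecast model (the function names, step counts, and perturbation size are illustrative, not those of the NMC system):

```python
import numpy as np

def lorenz63(x, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system, our toy 'model'."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def breed_step(analysis, perturbed, n_steps=20, pert_size=0.01):
    """One breeding cycle: run control and perturbed forecasts, rescale their
    difference to the initial size, and add it to the new 'analysis'."""
    control = analysis.copy()
    for _ in range(n_steps):
        control = lorenz63(control)
        perturbed = lorenz63(perturbed)
    diff = perturbed - control
    diff *= pert_size / np.linalg.norm(diff)   # scale back to the initial amplitude
    return control, control + diff

# Cycling a random perturbation aligns it with the fast-growing directions.
rng = np.random.default_rng(0)
x = np.array([1.0, 1.0, 1.0])
xp = x + 0.01 * rng.standard_normal(3)
for _ in range(500):
    x, xp = breed_step(x, xp)
bred_vector = (xp - x) / 0.01
```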

The breeding cycle has been designed to model how the growing errors are “bred” and maintained in a conventional analysis cycle through the successive use of short-range forecasts. The bred modes should thus offer a good estimate of possible growing error fields in the analysis. Results from extensive experiments indicate that ensembles of just two BGM forecasts achieve better results than much larger random Monte Carlo or lagged average forecast (LAF) ensembles. Therefore, the operational ensemble configuration at NMC is based on the BGM method to generate efficient initial perturbations.

The only two methods explicitly designed to generate perturbations that contain fast-growing modes corresponding to the evolving atmosphere are the BGM and the method of Lorenz, which is based on the singular modes of the linear tangent model. This method has been adopted operationally at the European Centre for Medium-Range Weather Forecasts (ECMWF) for ensemble forecasting. Both the BGM and the ECMWF methods seem promising, but since it has not yet been possible to compare their operational performance in detail, we limit ourselves to pointing out some of their similarities and differences.

Full access
Zoltan Toth and Eugenia Kalnay

Abstract

The breeding method has been used to generate perturbations for ensemble forecasting at the National Centers for Environmental Prediction (formerly known as the National Meteorological Center) since December 1992. At that time a single breeding cycle with a pair of bred forecasts was implemented. In March 1994, the ensemble was expanded to seven independent breeding cycles on the Cray C90 supercomputer, and the forecasts were extended to 16 days. This provides 17 independent global forecasts valid for two weeks every day.

For efficient ensemble forecasting, the initial perturbations to the control analysis should adequately sample the space of possible analysis errors. It is shown that the analysis cycle is like a breeding cycle: it acts as a nonlinear perturbation model upon the evolution of the real atmosphere. The perturbation (i.e., the analysis error), carried forward in the first-guess forecasts, is “scaled down” at regular intervals by the use of observations. Because of this, growing errors associated with the evolving state of the atmosphere develop within the analysis cycle and dominate subsequent forecast error growth.

The breeding method simulates the development of growing errors in the analysis cycle. A difference field between two nonlinear forecasts is carried forward (and scaled down at regular intervals) upon the evolving atmospheric analysis fields. By construction, the bred vectors are superpositions of the leading local (time-dependent) Lyapunov vectors (LLVs) of the atmosphere. An important property is that all random perturbations assume the structure of the leading LLVs after a transient period, which for large-scale atmospheric processes is about 3 days. When several independent breeding cycles are performed, the phases and amplitudes of individual (and regional) leading LLVs are random, which ensures quasi-orthogonality among the global bred vectors from independent breeding cycles.
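
In a low-dimensional system with a single growing mode, this convergence can be seen directly: two independently bred perturbations collapse onto (nearly) the same local vector. A sketch with the Lorenz (1963) equations (illustrative only — in the global atmosphere, the regionally random phases produce the quasi-orthogonality noted above):

```python
import numpy as np

def step(x, dt=0.005):
    """Forward-Euler step of the Lorenz (1963) system, a stand-in model."""
    s, r, b = 10.0, 28.0, 8.0 / 3.0
    return x + dt * np.array([s * (x[1] - x[0]),
                              x[0] * (r - x[2]) - x[1],
                              x[0] * x[1] - b * x[2]])

rng = np.random.default_rng(1)
x = np.array([1.0, 1.0, 1.0])
p1 = 1e-3 * rng.standard_normal(3)   # two independent random perturbations
p2 = 1e-3 * rng.standard_normal(3)

for _ in range(600):                         # 600 'analysis' cycles
    xc, x1, x2 = x.copy(), x + p1, x + p2
    for _ in range(20):                      # 20 model steps per cycle
        xc, x1, x2 = step(xc), step(x1), step(x2)
    p1 = 1e-3 * (x1 - xc) / np.linalg.norm(x1 - xc)   # rescale both differences
    p2 = 1e-3 * (x2 - xc) / np.linalg.norm(x2 - xc)
    x = xc

# After the transient, both bred vectors point along the leading local
# Lyapunov vector (up to sign), so their directions essentially coincide.
alignment = abs(np.dot(p1, p2)) / (np.linalg.norm(p1) * np.linalg.norm(p2))
```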

Experimental runs with a 10-member ensemble (five independent breeding cycles) show that the ensemble mean is superior to an optimally smoothed control and to randomly generated ensemble forecasts, and compares favorably with the medium-range double horizontal resolution control. Moreover, a potentially useful relationship between ensemble spread and forecast error is also found in both the spatial and time domains. The improvement in skill of 0.04–0.11 in pattern anomaly correlation for forecasts at and beyond 7 days, together with the potential for estimation of the skill, indicates that this system is a useful operational forecast tool.

The two methods used so far to produce operational ensemble forecasts—that is, breeding and the adjoint (or “optimal perturbations”) technique applied at the European Centre for Medium-Range Weather Forecasts—have several significant differences, but they both attempt to estimate the subspace of fast growing perturbations. The bred vectors provide estimates of fastest sustainable growth and thus represent probable growing analysis errors. The optimal perturbations, on the other hand, estimate vectors with fastest transient growth in the future. A practical difference between the two methods for ensemble forecasting is that breeding is simpler and less expensive than the adjoint technique.

Full access
Eugenia Kalnay and Roy Jenne
Full access
Eugenia Kálnay de Rivas

Abstract

The results of two-dimensional simulations of the deep circulation of Venus are presented. They prove that the high surface temperature can only be explained by the greenhouse effect, and that Goody and Robinson's dynamical model is not valid. Very long time integrations, up to a time comparable with the radiative relaxation time, confirm these results. Analytical radiative equilibrium solutions for a semi-grey atmosphere, both with and without an internal heat source, are presented. It is shown that the greenhouse effect is sufficient to produce the high surface temperature if τT* ≫ 100 and S = τS*/τT* ≲ 0.005. This result is still valid in the presence of an internal heat source of intensity compatible with observations.

A two-dimensional version of a three-dimensional model is used to test the validity of the new mechanism proposed by Gierasch to explain the 4-day circulation. Numerical experiments with horizontal viscosities νH = 10¹¹–10¹² cm² s⁻¹ failed to show strong zonal velocities even for the case of large Prandtl numbers. It is observed that the dissipation of angular momentum introduced by the strong horizontal diffusion more than compensates for the upward transport of angular momentum due to the Hadley cell.

Preliminary three-dimensional calculations show a tendency to develop strong small-scale circulations.

Full access
Takuma Yoshida and Eugenia Kalnay

Abstract

Strongly coupled data assimilation (SCDA), where observations of one component of a coupled model are allowed to directly impact the analysis of other components, sometimes fails to improve the analysis accuracy with an ensemble Kalman filter (EnKF) as compared with weakly coupled data assimilation (WCDA). It is well known that an observation’s area of influence should be localized in EnKFs since the assimilation of distant observations often degrades the analysis because of spurious correlations. This study derives a method to estimate the reduction of the analysis error variance by using estimates of the cross covariances between the background errors of the state variables in an idealized situation. It is shown that the reduction of analysis error variance is proportional to the squared background error correlation between the analyzed and observed variables. From this, the authors propose an offline method to systematically select which observations should be assimilated into which model state variable by cutting off the assimilation of observations when the squared background error correlation between the observed and analyzed variables is small. The proposed method is tested with the local ensemble transform Kalman filter (LETKF) and a nine-variable coupled model, in which three Lorenz models with different time scales are coupled with each other. The covariance localization with the correlation-cutoff method achieves an analysis more accurate than either the full SCDA or the WCDA methods, especially with smaller ensemble sizes.
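
The squared-correlation scaling that motivates the cutoff can be verified with a two-variable Kalman update (a schematic of the idealized situation, not the LETKF implementation; the variances and names here are illustrative):

```python
import numpy as np

def analysis_variance_reduction(rho, sig_x=1.0, sig_y=1.0, r=0.5):
    """Kalman update of an unobserved variable x from one observation of y
    (observation error variance r), given background error correlation rho.
    Returns the fractional reduction of x's analysis error variance."""
    B = np.array([[sig_x**2, rho * sig_x * sig_y],
                  [rho * sig_x * sig_y, sig_y**2]])   # background covariance
    H = np.array([[0.0, 1.0]])                        # observe y only
    K = B @ H.T / (H @ B @ H.T + r)                   # Kalman gain
    A = (np.eye(2) - K @ H) @ B                       # analysis covariance
    return 1.0 - A[0, 0] / B[0, 0]

# The reduction equals rho**2 * sig_y**2 / (sig_y**2 + r): halving the
# background correlation quarters the benefit of assimilating y into x,
# which is why weakly correlated observations are worth cutting off.
reductions = {rho: analysis_variance_reduction(rho) for rho in (0.8, 0.4, 0.1)}
```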

Full access
Ming Cai and Eugenia Kalnay

Abstract

This paper shows analytically that a reanalysis made with a frozen model can detect the warming trend due to an increase of greenhouse gases within the atmosphere at its full strength (at least 95% level) after a short transient (less than 100 analysis cycles). The analytical proof is obtained by taking into consideration the following three possible deficiencies in the model used to create first-guess fields: (i) the physical processes responsible for the observed trend (e.g., an increase of greenhouse gases) are completely absent from the model, (ii) the first-guess fields are affected by an initial drift caused by the imbalance between the model equilibrium and the analysis that contains trends due to the observations, and (iii) the model used in the reanalysis has a constant model bias. The imbalance contributes to a systematic reduction in the reanalysis trend compared to the observations. The analytic derivation herein shows that this systematic reduction can be very small (less than 5%) when the observations are available for twice-daily assimilation. Moreover, the frequent analysis cycle is essential to compensate for the impact due to relatively poor space coverage of the observational network, which effectively yields smaller weights assigned to observations in a global data assimilation system.
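
A scalar toy analysis cycle (our illustration with made-up parameter values, not the paper's derivation) shows the mechanism: the frozen "model" damps toward its own trendless climatology, yet frequent assimilation of trending observations keeps nearly the full trend in the analyses:

```python
import numpy as np

alpha = 0.9   # frozen model: first guess x_b = alpha * x_a (no trend of its own)
k = 0.8       # analysis weight given to the observation
c = 0.01      # trend in the observations, per analysis cycle
n = 400

rng = np.random.default_rng(3)
xa, analyses = 0.0, []
for t in range(n):
    xb = alpha * xa                              # first guess from the frozen model
    y = c * t + 0.05 * rng.standard_normal()     # observation: trend + noise
    xa = xb + k * (y - xb)                       # analysis update
    analyses.append(xa)

# Fit a linear trend after the initial transient; the expected captured
# fraction is k / (1 - (1 - k) * alpha), about 0.98 for these values.
t_axis = np.arange(n)
trend = np.polyfit(t_axis[100:], np.array(analyses)[100:], 1)[0]
captured = trend / c    # fraction of the observed trend in the 'reanalysis'
```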

Other major issues about using reanalysis for a long-term trend analysis, particularly the impact of the major changes in the global observing system that took place in the 1950s and in 1979, are not addressed. Here it is merely proven mathematically that using a frozen model in a reanalysis does not cause significant harm to the fidelity of the long-term trend in the reanalysis.

Full access
Eugenia Kalnay and Amnon Dalcher

Abstract

We have shown that it is possible to predict the skill of numerical weather forecasts—a quantity which is variable from day to day and region to region. This has been accomplished using as predictor the dispersion (measured by the average correlation) between members of an ensemble of forecasts started from five different analyses. The analyses had been previously derived for satellite data impact studies and included, in the Northern Hemisphere, moderate perturbations associated with the use of different observing systems.

When the Northern Hemisphere was used as a verification region, the prediction of skill was rather poor. This is because such a large area usually contains regions with excellent forecasts as well as regions with poor forecasts, and does not allow for discrimination between them. However, when we used regional verifications, the ensemble forecast dispersion provided a very good prediction of the quality of the individual forecasts.
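
A sketch of the dispersion predictor on synthetic fields (the function names and toy fields are ours; the study itself used anomaly correlations between pairs of NWP forecasts):

```python
import numpy as np

def anomaly_correlation(a, b):
    """Pattern (anomaly) correlation between two 2D fields."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

def ensemble_dispersion(members):
    """Average pairwise anomaly correlation among ensemble members.
    High agreement (low dispersion) should flag a trustworthy forecast."""
    n = len(members)
    pairs = [anomaly_correlation(members[i], members[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pairs))

rng = np.random.default_rng(2)
signal = rng.standard_normal((10, 10))           # the 'true' regional pattern
confident = [signal + 0.1 * rng.standard_normal((10, 10)) for _ in range(5)]
uncertain = [signal + 1.0 * rng.standard_normal((10, 10)) for _ in range(5)]

high_agreement = ensemble_dispersion(confident)
low_agreement = ensemble_dispersion(uncertain)   # substantially lower
```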

Although the period covered in this study is only one month long, it includes cases with wide variation of skill in each of the four regions considered. The method could be tested in an operational context using ensembles of lagged forecasts and longer time periods, in order to establish its applicability to different areas and weather regimes.

Full access