Search Results

Showing 1–10 of 52 items for Author or Editor: Ross N. Hoffman

Ross N. Hoffman

Abstract

The simulated climates of highly truncated nonlinear models based on the primitive equations (PE), balance equations (BE) and quasi-geostrophic (QG) equations are compared, in order to determine the effects of the filtering approximations. The models and numerical procedures are identical in all possible respects. At low forcing the QG and PE climates agree in most respects. At high forcing the QG model gives only a qualitatively correct simulation of the PE mean state and energy cycle. In contrast, the BE model is relatively successful at simulating the PE climate.

Two attempts are made to get better simulations of the PE climate within the QG framework—the tuned and perturbed QG models. The tuned QG model is better than the untuned version at simulating the PE time-mean model state, but the simulated energy cycle is not improved at all. In the perturbed QG model, randomly generated perturbations, designed so that their statistics are similar to the statistics of the observed prediction errors, are added to the model state at regular intervals. The perturbed QG model is nearly as successful as the BE model at simulating the PE climate.
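
The perturbed-model recipe is easy to sketch in code. The fragment below is a minimal illustration only: the Lorenz-96 system stands in for the truncated QG model, and the time step, perturbation interval, and error standard deviation are assumed values, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_step(x, dt=0.01, forcing=8.0):
    """One Euler step of a stand-in chaotic model (Lorenz-96); purely
    illustrative, in place of the truncated QG dynamics."""
    dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing
    return x + dt * dxdt

def run_perturbed(x0, n_steps, perturb_every, error_std):
    """Integrate the model, adding random perturbations at regular
    intervals; error_std parameterizes the assumed error statistics."""
    x = x0.copy()
    for k in range(1, n_steps + 1):
        x = model_step(x)
        if k % perturb_every == 0:
            x = x + rng.normal(0.0, error_std, size=x.shape)
    return x

x = run_perturbed(rng.normal(size=40), n_steps=5000,
                  perturb_every=50, error_std=0.1)
```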

Ross N. Hoffman

The earth's atmosphere may be chaotic and very likely is sensitive to small perturbations. Certainly, very simple nonlinear dynamical models of the atmosphere are chaotic, and the most realistic numerical weather prediction models are very sensitive to initial conditions. Chaos implies that there is a finite predictability time limit no matter how well the atmosphere is observed and modeled. Extreme sensitivity to initial conditions suggests that small perturbations to the atmosphere may effectively control the evolution of the atmosphere, if the atmosphere is observed and modeled sufficiently well.

The architecture of a system to control the global atmosphere and the components of such a system are described. A feedback control system similar to many used in industrial settings is envisioned. Although the weather controller is extremely complex, the existence of the required technology is plausible in the time range of several decades.
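
As a heavily hedged sketch (every quantity below is a hypothetical placeholder, not part of the architecture described here), the essential feedback cycle is: observe the state imperfectly, compute a small corrective perturbation, and apply it.

```python
import numpy as np

rng = np.random.default_rng(1)

def observe(state, noise_std):
    """An imperfect observation of the state (assumed additive noise)."""
    return state + rng.normal(0.0, noise_std, size=state.shape)

def control(estimate, target, gain):
    """Proportional feedback: a small perturbation nudging the state
    toward a target. A real weather controller would instead exploit
    sensitivity to initial conditions, so tiny perturbations suffice."""
    return -gain * (estimate - target)

state = rng.normal(size=8)        # stand-in "atmospheric" state
target = np.zeros(8)              # desired evolution
for _ in range(100):
    state = state + 0.02 * rng.normal(size=8)   # unmodeled dynamics
    state = state + control(observe(state, 0.05), target, gain=0.1)
print(np.abs(state - target).max())
```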

While the concept of controlling the weather has often appeared in science fiction literature, this statement of the problem provides a scientific basis and a system architecture to actually implement global weather control. Large-scale weather control raises important legal and ethical questions. The nation that controls its own weather will perforce control the weather of other nations. Weather “wars” are conceivable. An international treaty may be required, limiting the use of weather control technology.

Ross N. Hoffman

Abstract

For a discretized deterministic model of the atmosphere, a single point in the model's phase space defines a complete trajectory. It is possible to choose a point which minimizes the differences between the model trajectory starting at the chosen point and all data observed during an analysis period (−T ≤ t ≤ 0). In this way data and model dynamics are combined to yield a four-dimensional analysis exactly satisfying the model equations. This analysis is the solution of the model's equations of motion defined by the optimal initial conditions chosen at t = −T. Therefore, provided T is larger than the adjustment time of the model, there should be no need for any initialization at the start of the forecast at t = 0.
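
A minimal sketch of this fitting problem, with a two-variable toy model standing in for the atmosphere (the pendulum dynamics, window length, and noise level are illustrative assumptions): the control variable is the state at t = −T, and the analysis is the trajectory it generates.

```python
import numpy as np
from scipy.optimize import minimize

def step(x, dt=0.05):
    """One Euler step of a toy model (a pendulum), in place of the
    paper's simplified filtered or primitive equation models."""
    return x + dt * np.array([x[1], -np.sin(x[0])])

def trajectory(x0, n_steps):
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        xs.append(step(xs[-1]))
    return np.array(xs)

def misfit(x0, obs, obs_steps):
    """Sum of squared differences between the trajectory from x0 and
    all observations in the window -T <= t <= 0."""
    traj = trajectory(x0, max(obs_steps))
    return sum(np.sum((traj[k] - y) ** 2) for k, y in zip(obs_steps, obs))

rng = np.random.default_rng(2)
truth = trajectory([1.0, 0.0], 40)
obs_steps = [0, 10, 20, 30, 40]
obs = [truth[k] + rng.normal(0.0, 0.05, 2) for k in obs_steps]

# The four-dimensional analysis is the model trajectory started from
# the optimal initial condition; no separate initialization is needed.
best = minimize(misfit, x0=np.array([0.5, 0.5]), args=(obs, obs_steps))
analysis = trajectory(best.x, 40)
```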

This report describes some preliminary experiments which use highly simplified filtered and primitive equation models of an atmosphere with f-plane geometry. These simple models are used because of the substantial computational resources required by the minimization method. It is demonstrated that the method is stable in an assimilation cycle, is able to maintain an accurate estimate of the motion field from temperature observations alone and yields a small analysis error. Unfortunately, forecasts made from the four-dimensional analyses exhibit rapid error growth initially; as a result these forecasts are better than ordinary forecasts only for the first 24 h. Beyond 24 h both types of forecasts have the same skill.

Ross N. Hoffman

Abstract

A gridded surface wind analysis is obtained by minimizing an objective function, the magnitude of which measures the error made by the gridded analysis in fitting the SASS wind data, the conventional surface wind observations and the forecast surface wind field. The ambiguity of the SASS winds is then removed by choosing the alias closest to the analyzed wind. Because minimizing the objective function is a nonlinear least squares problem, the minimizing gridded analysis is not unique, and a good first guess is necessary to ensure convergence to a reasonable solution. A good first guess may be generated by performing the analysis in stages.
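
The dealiasing step itself is simple; the sketch below (with made-up numbers) picks, among the candidate SASS wind vectors, the one closest to the analyzed wind.

```python
import numpy as np

def closest_alias(aliases, analyzed):
    """Return the alias (candidate wind vector) nearest the analyzed wind."""
    aliases = np.asarray(aliases, dtype=float)
    return aliases[np.argmin(np.sum((aliases - analyzed) ** 2, axis=1))]

# Hypothetical example: four aliases with equal speed, different directions.
speed = 12.0
dirs = np.deg2rad([40.0, 130.0, 220.0, 310.0])
aliases = np.stack([speed * np.cos(dirs), speed * np.sin(dirs)], axis=1)
analyzed = np.array([8.0, 7.5])   # (u, v) from the gridded analysis
print(closest_alias(aliases, analyzed))
```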

Illustrative results are shown for a limited region in the North Atlantic containing the QE II storm and for a limited amount (∼12 min) of data observed near the synoptic time 1200 GMT 10 September 1978. Within the SASS data swath the resulting gridded analysis is a reasonable representation of the surface wind and is not sensitive to the forecast surface wind field. The analysis is sensitive to wind directions reported by ships near the center of the storm. The wind circulation center present in the forecast is moved by the analysis. The resulting dealiased SASS wind directions are noisy.

Ross N. Hoffman

Abstract

No abstract available.

Ross N. Hoffman

Abstract

Variational analysis methods allow information from a variety of sources, including current observations and a priori statistics and constraints, to be combined by minimizing the lack of fit to the various sources of information. In this study, the ambiguity of the SASS winds is removed by a variational analysis method which combines the following information: a variety of current surface wind observations (radiosonde, ship, satellite scatterometer), earlier observations in the form of a forecast, smoothness constraints on the horizontal wind, its divergence and its vorticity, and a dynamical constraint on the time rate of change of vorticity of the surface wind. The constraints used are “weak” constraints in the sense of Sasaki. In an earlier work, constraints were not used. The scatterometer wind magnitudes are nearly unambiguous and are treated specially.

The lack of fit to data and constraints is measured by the so-called objective function. Here a discrete form of the solution is assumed; the objective function is written in terms of discrete variables, and a minimum is found by a conjugate gradient method. Global analyses are possible.
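
A toy one-dimensional version of such a minimization, assuming quadratic penalties and illustrative weights (none of the weights or fields below come from the paper), can be written directly with a nonlinear conjugate gradient minimizer:

```python
import numpy as np
from scipy.optimize import minimize

n = 101
rng = np.random.default_rng(3)
truth = np.sin(np.linspace(0.0, 2.0 * np.pi, n))
obs_idx = rng.choice(n, size=20, replace=False)          # data-rich subset
obs = truth[obs_idx] + rng.normal(0.0, 0.1, obs_idx.size)
background = np.zeros(n)                                  # forecast / prior

def objective(u, w_obs=1.0, w_bg=0.1, w_smooth=1.0):
    """Lack of fit to observations, background, and a weak smoothness
    constraint (a penalty on the discrete second derivative), in the
    weak-constraint sense of Sasaki."""
    j_obs = w_obs * np.sum((u[obs_idx] - obs) ** 2)
    j_bg = w_bg * np.sum((u - background) ** 2)
    j_smooth = w_smooth * np.sum(np.diff(u, 2) ** 2)
    return j_obs + j_bg + j_smooth

analysis = minimize(objective, x0=background, method="CG").x
```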

Compared to previous results, the use of constraints results in a more robust analysis procedure and produces better transitions between data-rich and data-poor regions, but the analyses, like all objective analyses, still lack common sense in some important respects.

The scatterometer data have been processed by two methods, one which bins and one which pairs the individual scatterometer values. Both data sets are analyzed for the case of an intense cyclone centered south of Japan at 0000 GMT 6 September 1978. Only slightly better results are obtained with the finer-resolution winds produced by the pairing algorithm, although it is clear they contain far more detailed information.

Ross N. Hoffman

Abstract

A one-dimensional (1D) analysis problem is defined and analyzed to explore the interaction of observation thinning or superobservation with observation errors that are correlated or systematic. The general formulation might be applied to a 1D analysis of radiance or radio occultation observations in order to develop a strategy for the use of such data in a full data assimilation system, but is applied here to a simple analysis problem with parameterized error covariances. Findings for the simple problem include the following. For a variational analysis method that includes an estimate of the full observation error covariances, the analysis is more sensitive to variations in the estimated background and observation error standard deviations than to variations in the corresponding correlation length scales. Furthermore, if everything else is fixed, the analysis error increases with decreasing true background error correlation length scale and with increasing true observation error correlation length scale. For a weighted least squares analysis method that assumes the observation errors are uncorrelated, best results are obtained for some degree of thinning and/or tuning of the weights. Without tuning, the best strategy is superobservation with a spacing approximately equal to the observation error correlation length scale.
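
For the linear-Gaussian case the key quantities can be computed exactly. The sketch below (all covariance parameters are illustrative, not the paper's) evaluates the standard analysis error covariance A = B − BHᵀ(HBHᵀ + R)⁻¹HB for a 1D grid with correlated background and observation errors; varying the observation spacing mimics thinning.

```python
import numpy as np

def gaussian_cov(points, sigma, length):
    """Covariance with Gaussian correlation: std dev sigma, length scale."""
    d = points[:, None] - points[None, :]
    return sigma**2 * np.exp(-0.5 * (d / length) ** 2)

n, skip = 101, 5
grid = np.linspace(0.0, 10.0, n)
obs_pts = grid[::skip]                          # thinned observation locations
H = np.zeros((obs_pts.size, n))                 # observation operator
H[np.arange(obs_pts.size), np.arange(0, n, skip)] = 1.0

B = gaussian_cov(grid, sigma=1.0, length=1.5)      # background error cov
R = gaussian_cov(obs_pts, sigma=0.5, length=0.8)   # correlated obs error cov

# A = B - B H^T (H B H^T + R)^{-1} H B
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
A = B - K @ H @ B
print("mean analysis error variance:", np.trace(A) / n)
```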

Ross N. Hoffman and Christopher Grassotti

Abstract

A variational analysis method to detect and correct displacement and amplification errors in short-range forecasts of a data assimilation system is developed and tested. Collectively these errors are termed distortion errors. The method uses a variational approach to solve a nonlinear least squares estimation problem with side constraints to determine the distortion that alters an a priori background field to best fit the available observations. In this study, the data are Special Sensor Microwave/Imager (SSM/I) retrievals of integrated water vapor and the a priori background fields are analyses of the European Centre for Medium-Range Weather Forecasts (ECMWF). In practice the background fields would be operational 6-h forecasts.
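
In one dimension, with the distortion reduced to a single displacement and a single amplification factor, the estimation problem looks like the sketch below (a two-parameter caricature with synthetic fields; the actual method solves for spatially varying distortion under smoothness side constraints).

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 10.0, 200)
background = np.exp(-0.5 * (x - 4.0) ** 2)        # a priori field
observed = 1.3 * np.exp(-0.5 * (x - 5.0) ** 2)    # displaced, amplified

def distort(field, shift, amp):
    """Apply a uniform displacement and amplification to the field."""
    return amp * np.interp(x - shift, x, field)

def cost(params):
    shift, amp = params
    return np.sum((distort(background, shift, amp) - observed) ** 2)

best = minimize(cost, x0=np.array([0.0, 1.0]))
print("estimated displacement and amplification:", best.x)  # ~ (1.0, 1.3)
```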

The necessary algorithms and methodologies were developed, implemented, and tested on a sufficient number of cases to demonstrate the utility of the method. Cases were selected that have noticeable features in the SSM/I vertically integrated water vapor fields. In all cases studied, the SSM/I data, together with the distortion representation of error, produce significant changes to the ECMWF analyses, reducing the variance of the difference between the analysis and SSM/I data by 45%–86%. Further work is suggested to examine impacts on objective analyses and subsequent numerical forecasts.

Thomas Nehrkorn and Ross N. Hoffman

Abstract

The inference of profiles of relative humidity from cloud data was investigated in a collocation study of 3DNEPH and radiosonde data over North America. Regression equations were developed for the first two EOFs of relative humidity, using vertically compacted and horizontally averaged 3DNEPH cloud cover values as predictors. The regression equations were found to have smaller errors than existing level-to-level cloud-to-humidity conversion techniques. However, no attempt was made to tune the existing methods for optimal performance.
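
The EOF-plus-regression pipeline can be sketched with synthetic stand-ins for the collocated data (all sizes and the fabricated relationship between cloud cover and humidity below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_levels, n_layers = 300, 12, 4

# Stand-ins for collocated data: layer cloud cover and rh profiles.
cloud = rng.uniform(0.0, 1.0, (n_samples, n_layers))
rh = 40.0 + 40.0 * cloud.mean(axis=1, keepdims=True) \
    + rng.normal(0.0, 5.0, (n_samples, n_levels))

# EOFs of relative humidity from an SVD of the anomalies.
anom = rh - rh.mean(axis=0)
_, _, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt[:2]                                   # first two EOFs
scores = anom @ eofs.T                          # EOF coefficients

# Linear regression of each EOF coefficient on the cloud predictors.
X = np.column_stack([np.ones(n_samples), cloud])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
rh_hat = rh.mean(axis=0) + (X @ coef) @ eofs    # regressed rh profiles
```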

Thomas Nehrkorn and Ross N. Hoffman

Abstract

A feature-based statistical method is investigated as a means of generating pseudoensembles of numerical weather prediction forecasts. The goal is to enhance or dress a single dynamical forecast or an ensemble of dynamical forecasts with many realistic perturbations so as to better represent the forecast uncertainty. The feature calibration and alignment method (FCA) is used to characterize forecast differences and to generate the additional ensemble members. FCA is unique in decomposing forecast errors or differences into phase, bias, and residual error or difference components. In a pilot study using 500-hPa geopotential height data, pseudoensembles of weather forecasts are generated from one deterministic forecast and perturbations obtained by randomly sampling FCA displacements based on a priori statistics and applying these displacements to the original deterministic forecast. Comparisons with actual dynamical ensembles of 500-hPa geopotential height generated by ECMWF show that important features of the dynamical ensemble, such as the spatial patterns of the ensemble mean and variance, can be approximated by the FCA pseudoensemble. Ensemble verification statistics are presented for the dynamical and FCA ensembles and compared with those of simpler statistically based pseudoensembles. Some limitations of the FCA ensembles are noted, and mitigation approaches are discussed, with a view toward applying the method to mesoscale forecasts for dispersion modeling.
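
The displacement-sampling step can be sketched as follows (the field, displacement amplitude, and smoothing scale are invented for illustration; only the alignment component of FCA is mimicked, with the bias and residual terms omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

rng = np.random.default_rng(5)
ny, nx = 60, 90
yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
forecast = np.sin(xx / 12.0) + 0.5 * np.cos(yy / 9.0)  # stand-in height field

def random_displacement(amp, smooth):
    """A smooth random displacement component: white noise smoothed to a
    chosen length scale, scaled to an assumed displacement amplitude."""
    d = gaussian_filter(rng.normal(size=(ny, nx)), smooth)
    return amp * d / (np.abs(d).max() + 1e-12)

def warp(field, dx, dy):
    """Evaluate the field at displaced coordinates (the alignment step)."""
    return map_coordinates(field, [yy + dy, xx + dx], order=1, mode="nearest")

members = np.array([
    warp(forecast, random_displacement(4.0, 10.0), random_displacement(4.0, 10.0))
    for _ in range(20)
])
spread = members.std(axis=0)    # spatial pattern of pseudoensemble variance
```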
