Search Results

Showing 1–10 of 52 items for Author or Editor: Ross N. Hoffman
Ross N. Hoffman

Abstract

No abstract available.

Ross N. Hoffman

The earth's atmosphere may be chaotic and very likely is sensitive to small perturbations. Certainly, very simple nonlinear dynamical models of the atmosphere are chaotic, and the most realistic numerical weather prediction models are very sensitive to initial conditions. Chaos implies that there is a finite predictability time limit no matter how well the atmosphere is observed and modeled. Extreme sensitivity to initial conditions suggests that small perturbations to the atmosphere may effectively control the evolution of the atmosphere, if the atmosphere is observed and modeled sufficiently well.

The architecture of a system to control the global atmosphere and the components of such a system are described. A feedback control system similar to many used in industrial settings is envisioned. Although the weather controller is extremely complex, the existence of the required technology is plausible in the time range of several decades.
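
A feedback loop of this kind can be sketched in miniature with the Lorenz-63 equations standing in for the atmosphere; the model, the proportional gain, and the integration length below are illustrative assumptions, not components of the proposed system:

```python
import numpy as np

def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system, a standard
    stand-in for a chaotic 'atmosphere'."""
    dxdt = np.array([s * (x[1] - x[0]),
                     x[0] * (r - x[2]) - x[1],
                     x[0] * x[1] - b * x[2]])
    return x + dt * dxdt

rng = np.random.default_rng(0)
x_ref = np.array([1.0, 1.0, 1.0])            # desired ("reference") trajectory
x = x_ref + 1e-3 * rng.standard_normal(3)    # controlled run, small initial error
xu = x.copy()                                # uncontrolled twin with the same error

gain = 0.5                                   # proportional feedback gain (illustrative)
for _ in range(2000):
    x_ref = lorenz_step(x_ref)
    x = lorenz_step(x)
    x -= gain * (x - x_ref)                  # observe the error, apply a small nudge
    xu = lorenz_step(xu)                     # no control: the error grows chaotically

controlled_err = np.linalg.norm(x - x_ref)
uncontrolled_err = np.linalg.norm(xu - x_ref)
```

The same sensitivity that destroys predictability makes the corrective nudges effective: the controlled error decays toward roundoff while the identical uncontrolled error grows to the size of the attractor.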

While the concept of controlling the weather has often appeared in science fiction literature, this statement of the problem provides a scientific basis and a system architecture to actually implement global weather control. Large-scale weather control raises important legal and ethical questions. The nation that controls its own weather will perforce control the weather of other nations. Weather “wars” are conceivable. An international treaty may be required, limiting the use of weather control technology.

Ross N. Hoffman

Abstract

For a discretized deterministic model of the atmosphere, a single point in the model's phase space defines a complete trajectory. It is possible to choose a point which minimizes the differences between the model trajectory starting at the chosen point and all data observed during an analysis period (−T ≤ t ≤ 0). In this way data and model dynamics are combined to yield a four-dimensional analysis exactly satisfying the model equations. This analysis is the solution of the model's equations of motion defined by the optimal initial conditions chosen at t = −T. Therefore, provided T is larger than the adjustment time of the model, there should be no need for any initialization at the start of the forecast at t = 0.
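
The idea of choosing the initial condition that minimizes the misfit to all observations in the window can be sketched with a toy linear "model" (a slow two-variable rotation), observing only the first component, much as the experiments recover winds from temperatures alone; the dynamics, noise level, and window length are illustrative:

```python
import numpy as np

# Toy "model" M: a slowly rotating two-variable linear system.
theta = 0.1
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
H = np.array([[1.0, 0.0]])           # observe component 0 only

nsteps = 40
x0_true = np.array([1.0, -0.5])

# Noisy observations over the analysis window (-T <= t <= 0).
rng = np.random.default_rng(1)
xk = x0_true.copy()
rows, obs = [], []
Mk = np.eye(2)
for k in range(nsteps):
    rows.append(H @ Mk)              # observation k as a function of x0: H M^k
    obs.append(H @ xk + 0.01 * rng.standard_normal(1))
    xk = M @ xk
    Mk = M @ Mk
G = np.vstack(rows)                  # stacked observation operator
y = np.concatenate(obs)

# The cost J(x0) = |G x0 - y|^2 is quadratic for a linear model, so the
# minimizing initial condition solves the normal equations exactly.
x0_opt = np.linalg.lstsq(G, y, rcond=None)[0]
```

Even though only one component is ever observed, the fit over the whole window pins down both components of the optimal initial condition.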

This report describes some preliminary experiments which use highly simplified filtered and primitive equation models of an atmosphere with f-plane geometry. These simple models are used because of the substantial computational resources required by the minimization method. It is demonstrated that the method is stable in an assimilation cycle, is able to maintain an accurate estimate of the motion field from temperature observations alone and yields a small analysis error. Unfortunately, forecasts made from the four-dimensional analyses exhibit rapid error growth initially; as a result these forecasts are better than ordinary forecasts only for the first 24 h. Beyond 24 h both types of forecasts have the same skill.

Ross N. Hoffman

Abstract

The simulated climates of highly truncated nonlinear models based on the primitive equations (PE), balance equations (BE) and quasi-geostrophic (QG) equations are compared, in order to determine the effects of the filtering approximations. The models and numerical procedures are identical in all possible respects. At low forcing the QG and PE climates agree in most respects. At high forcing the QG model gives only a qualitatively correct simulation of the PE mean state and energy cycle. In contrast, the BE model is relatively successful at simulating the PE climate.

Two attempts are made to get better simulations of the PE climate within the QG framework—the tuned and perturbed QG models. The tuned QG model is better than the untuned version at simulating the PE time mean model state, but the simulated energy cycle is not improved at all. In the perturbed QG model randomly generated perturbations, designed so that their statistics are similar to the statistics of the observed prediction errors, are added to the model state at regular intervals. The perturbed QG model is nearly as successful as the BE model at simulating the PE climate.
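
The perturbation recipe, random draws whose statistics match a prescribed error covariance, can be sketched with a Cholesky factor; the 3 x 3 covariance below is illustrative, not the observed prediction-error statistics:

```python
import numpy as np

# Target covariance for the perturbations, standing in for the observed
# prediction-error statistics (values here are illustrative).
C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.6],
              [0.2, 0.6, 1.0]])
L = np.linalg.cholesky(C)

rng = np.random.default_rng(2)
n = 200_000
# Draw uncorrelated unit-variance noise, then color it: cov(L z) = L L^T = C.
pert = L @ rng.standard_normal((3, n))

C_sample = np.cov(pert)              # sample covariance of the perturbations
```

The coloring step guarantees the generated perturbations have (up to sampling error) exactly the prescribed covariance.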

Ross N. Hoffman

Abstract

A gridded surface wind analysis is obtained by minimizing an objective function which measures the error made by the gridded analysis in fitting the SASS wind data, the conventional surface wind observations and the forecast surface wind field. The ambiguity of the SASS winds is then removed by choosing the alias closest to the analyzed wind. Because minimizing the objective function is a nonlinear least squares problem, the minimizing gridded analysis is not unique and a good first guess is necessary to assure convergence to a reasonable solution. A good first guess may be generated by performing the analysis in stages.
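
Choosing the alias closest to the analyzed wind is a nearest-neighbor selection at each grid point; a minimal sketch, with made-up aliases and analyzed winds rather than real SASS data:

```python
import numpy as np

def dealias(aliases, analyzed):
    """Pick, at each point, the candidate wind alias closest to the
    analyzed wind.
    aliases:  (npoints, nalias, 2) candidate wind vectors (u, v)
    analyzed: (npoints, 2) analyzed wind vectors
    """
    d = np.linalg.norm(aliases - analyzed[:, None, :], axis=-1)
    best = np.argmin(d, axis=1)
    return aliases[np.arange(len(aliases)), best]

# Two points, each with two directional aliases of roughly equal speed.
aliases = np.array([[[5.0, 0.0], [-5.0, 0.0]],
                    [[0.0, 3.0], [0.0, -3.0]]])
analyzed = np.array([[4.0, 1.0],
                     [0.5, -2.5]])
winds = dealias(aliases, analyzed)
```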

Illustrative results are shown for a limited region in the North Atlantic containing the QE II storm and for a limited amount (∼12 min) of data observed near the synoptic time 1200 GMT 10 September 1978. Within the SASS data swath the resulting gridded analysis is a reasonable representation of the surface wind and is not sensitive to the forecast surface wind field. The analysis is sensitive to wind directions reported by ships near the center of the storm. The wind circulation center present in the forecast is moved by the analysis. The resulting dealiased SASS wind directions are noisy.

Ross N. Hoffman

Abstract

Variational analysis methods allow information from a variety of sources, including current observations and a priori statistics and constraints, to be combined by minimizing the lack of fit to the various sources of information. In this study, the ambiguity of the SASS winds is removed by a variational analysis method which combines the following information: a variety of current surface wind observations (radiosonde, ship, satellite scatterometer), earlier observations in the form of a forecast, smoothness constraints on the horizontal wind, its divergence, and its vorticity, and a dynamical constraint on the time rate of change of vorticity of the surface wind. The constraints used are “weak” constraints in the sense of Sasaki. In an earlier work, constraints were not used. The scatterometer wind magnitudes are nearly unambiguous and are considered specially.

The lack of fit to data and constraints is measured by the so-called objective function. Here, a discrete form of the solution is assumed, the objective function is described in terms of discrete variables, and a minimum is found by a conjugate gradient method. Global analyses are possible.
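
A minimal sketch of the discrete approach: a quadratic objective combining a fit to noisy observations with a weak smoothness constraint, minimized by a plain conjugate gradient solver. The 1D field, weight, and noise level are illustrative, not the analysis system's settings:

```python
import numpy as np

# Noisy observations of a smooth 1D field.
n = 50
t = np.linspace(0, 2 * np.pi, n)
rng = np.random.default_rng(3)
y = np.sin(t) + 0.2 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)       # first-difference (roughness) operator
w = 25.0                             # weak-constraint weight (illustrative)

# J(x) = |x - y|^2 + w |D x|^2 is quadratic, so grad J = 0 gives A x = b.
A = np.eye(n) + w * D.T @ D
b = y.copy()

def conjugate_gradient(A, b, tol=1e-10):
    """Plain linear conjugate gradient solver for A x = b (A SPD)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(10 * len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x_analysis = conjugate_gradient(A, b)
```

The weak constraint penalizes but does not forbid roughness, so the analysis follows the data while being markedly smoother than the raw observations.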

Compared to previous results, the use of constraints results in a more robust analysis procedure and produces better transitions between data-rich and data-poor regions, but the analyses, like all objective analyses, are still lacking common sense in some important respects.

The scatterometer data have been processed by two methods, one which bins and one which pairs the individual scatterometer values. Both data sets are analyzed for the case of an intense cyclone centered south of Japan at 0000 GMT 6 September 1978. Only slightly better results are obtained with the finer resolution winds produced by the pairing algorithm, although it is clear they contain far more detailed information.

Ross N. Hoffman

Abstract

A one-dimensional (1D) analysis problem is defined and analyzed to explore the interaction of observation thinning or superobservation with observation errors that are correlated or systematic. The general formulation might be applied to a 1D analysis of radiance or radio occultation observations in order to develop a strategy for the use of such data in a full data assimilation system, but is applied here to a simple analysis problem with parameterized error covariances. Findings for the simple problem include the following. For a variational analysis method that includes an estimate of the full observation error covariances, the analysis is more sensitive to variations in the estimated background and observation error standard deviations than to variations in the corresponding correlation length scales. Furthermore, if everything else is fixed, the analysis error increases with decreasing true background error correlation length scale and with increasing true observation error correlation length scale. For a weighted least squares analysis method that assumes the observation errors are uncorrelated, best results are obtained for some degree of thinning and/or tuning of the weights. Without tuning, the best strategy is superobservation with a spacing approximately equal to the observation error correlation length scale.
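
The contrast drawn above, between a variational analysis using the full observation error covariance and a weighted least squares analysis that assumes uncorrelated errors, can be sketched with parameterized exponential covariances; all values below are illustrative, not the paper's settings:

```python
import numpy as np

def exp_corr(pts, L):
    """Exponential correlation matrix with length scale L over points pts."""
    return np.exp(-np.abs(pts[:, None] - pts[None, :]) / L)

xg = np.linspace(0.0, 10.0, 41)      # 1D analysis grid
H = np.eye(41)[::4]                  # observe every 4th grid point
xo = xg[::4]

# Parameterized error covariances (numbers are illustrative).
B = 1.0**2 * exp_corr(xg, 2.0)       # background errors, length scale 2.0
R = 0.5**2 * exp_corr(xo, 1.0)       # correlated observation errors

# Variational analysis with the full observation error covariance:
# the optimal linear gain and its expected analysis error covariance.
K_full = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
Pa_full = (np.eye(41) - K_full @ H) @ B

# Weighted least squares analysis that wrongly assumes uncorrelated
# observation errors; its TRUE error covariance uses the full R.
K_wls = B @ H.T @ np.linalg.inv(H @ B @ H.T + np.diag(np.diag(R)))
I_KH = np.eye(41) - K_wls @ H
Pa_wls = I_KH @ B @ I_KH.T + K_wls @ R @ K_wls.T

err_full = np.trace(Pa_full)         # expected analysis error (optimal gain)
err_wls = np.trace(Pa_wls)           # expected analysis error (suboptimal gain)
```

Because the gain built from the full covariance is the best linear unbiased estimator, its expected analysis error can never exceed that of the misspecified weights, which is why thinning or weight tuning is needed to help the uncorrelated-error method.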

Ross N. Hoffman and Thomas Nehrkorn

Abstract

Retrieving information from remotely sensed data and analyzing the resulting geophysical parameters on a regular grid may be combined using a variational analysis method. This approach is applicable to the problem of retrieving temperature and cloud parameters within a three-dimensional volume from observations of infrared radiances at several frequencies and locations. The feasibility of such a three-dimensional retrieval method is demonstrated using simulated HIRS2 data. The method is successful in fitting the radiance data with a three-dimensional temperature representation. When clouds are included in the problem the results are sensitive to the initial estimates of the cloud parameters.

Daniel Gombos and Ross N. Hoffman

Abstract

In Part I of this series on ensemble-based exigent analysis, a Lagrange multiplier minimization technique is used to estimate the exigent damage state (ExDS), the “worst case” with respect to a user-specified damage function and confidence level. Part II estimates the conditions antecedent to the ExDS using ensemble regression (ER), a linear inverse technique that employs an ensemble-estimated mapping matrix to propagate a predictor perturbation state into a predictand perturbation state. By propagating the exigent damage perturbations (ExDPs) from the heating degree days (HDD) and citrus tree case studies of Part I into their respective antecedent forecast state vectors, ER estimates the most probable antecedent perturbations expected to evolve into these ExDPs. Consistent with the physical expectation of a trough that precedes and coincides with the anomalously cold temperatures during the HDD case study, the ER-estimated antecedent 300-hPa geopotential height trough is approximately 59 and 17 m deeper than the ensemble mean at about the time of the ExDP and 24 h earlier, respectively. Statistics of the explained variance and from leave-one-out cross-validation runs indicate that the expected errors of these ER-estimated perturbations are smaller for the HDD case study than for the citrus tree case study.
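
A minimal sketch of ensemble regression, assuming synthetic Gaussian ensemble anomalies and a hidden linear mapping; nothing below comes from the HDD or citrus tree case studies:

```python
import numpy as np

rng = np.random.default_rng(5)
nens, nx, ny = 30, 8, 4

# Synthetic ensemble: predictand anomalies are a (hidden) linear function
# of predictor anomalies plus noise; ER should recover the mapping.
X = rng.standard_normal((nx, nens))       # predictor ensemble anomalies
L_true = rng.standard_normal((ny, nx))
Y = L_true @ X + 0.01 * rng.standard_normal((ny, nens))

# ER operates on perturbations about the ensemble mean.
X = X - X.mean(axis=1, keepdims=True)
Y = Y - Y.mean(axis=1, keepdims=True)

# Ensemble-estimated mapping matrix (least squares): L = Y X^T (X X^T)^-1.
L_er = Y @ X.T @ np.linalg.inv(X @ X.T)

# Propagate a chosen predictor perturbation into predictand space.
dx = np.ones(nx)
dy = L_er @ dx
```

With more ensemble members than predictor dimensions and modest noise, the least squares mapping closely recovers the true linear relation, which is the premise behind propagating an ExDP back to its antecedent state.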

Ross N. Hoffman and Christopher Grassotti

Abstract

A variational analysis method to detect and correct displacement and amplification errors in short-range forecasts of a data assimilation system is developed and tested. Collectively these errors are termed distortion errors. The method uses a variational approach to solve a nonlinear least squares estimation problem with side constraints to determine the distortion that alters an a priori background field to best fit the available observations. In this study, the data are Special Sensor Microwave/Imager (SSM/I) retrievals of integrated water vapor and the a priori background fields are analyses of the European Centre for Medium-Range Weather Forecasts (ECMWF). In practice the background fields would be operational 6-h forecasts.

The necessary algorithms and methodologies were developed, implemented, and tested on a sufficient number of cases to demonstrate the utility of the method. Cases were selected that have noticeable features in the SSM/I vertically integrated water vapor fields. In all cases studied, the SSM/I data, together with the distortion representation of error, produce significant changes to the ECMWF analyses, reducing the variance of the difference between the analysis and SSM/I data by 45%–86%. Further work is suggested to examine impacts on objective analyses and subsequent numerical forecasts.
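
The distortion idea, a displacement plus an amplification of a background feature chosen to best fit the observations, can be sketched in one dimension. The Gaussian "moisture blob", the true shift of 0.8, the amplification of 1.3, and the brute-force grid search are all illustrative stand-ins for the paper's variational method:

```python
import numpy as np

x = np.linspace(0.0, 10.0, 201)

def background(x):
    """Smooth a priori field: a single water-vapor-like 'blob'."""
    return 1.0 + 2.0 * np.exp(-((x - 4.0) / 1.2)**2)

# Synthetic observations: the same feature, displaced and amplified.
d_true, a_true = 0.8, 1.3
obs = 1.0 + a_true * 2.0 * np.exp(-((x - 4.0 - d_true) / 1.2)**2)

# Estimate the distortion by grid search over displacement d and
# amplification a applied to the background's anomaly part.
best = (np.inf, 0.0, 0.0)
for d in np.linspace(-2.0, 2.0, 161):
    anom = background(x - d) - 1.0
    for a in np.linspace(0.5, 2.0, 61):
        misfit = np.sum((1.0 + a * anom - obs)**2)
        if misfit < best[0]:
            best = (misfit, d, a)

misfit, d_est, a_est = best
```

Two distortion parameters explain the entire difference between background and observations here, which is the appeal of the representation: a large pointwise difference can collapse to a small displacement plus an amplitude factor.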
