Search Results

You are looking at 1–10 of 53 items for

  • Author or Editor: Ross N. Hoffman
Ross N. Hoffman

Abstract

A gridded surface wind analysis is obtained by minimizing an objective function whose magnitude measures the error made by the gridded analysis in fitting the SASS wind data, the conventional surface wind observations, and the forecast surface wind field. The ambiguity of the SASS winds is then removed by choosing the alias closest to the analyzed wind. Because minimizing the objective function is a nonlinear least-squares problem, the minimizing gridded analysis is not unique, and a good first guess is necessary to assure convergence to a reasonable solution. A good first guess may be generated by performing the analysis in stages.
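
As a rough sketch of these two steps, the Python fragment below fits a tiny gridded (u, v) field by least squares and then dealiases by choosing the closest alias. All grids, weights, and values are hypothetical stand-ins, and the SASS misfit term of the actual objective function is omitted for brevity:

```python
import numpy as np
from scipy.optimize import minimize

n = 8                                    # tiny n x n grid (hypothetical)
forecast = np.zeros((2, n, n))           # first-guess (u, v) forecast field
obs_ij = [(2, 3), (5, 5)]                # grid cells with conventional obs
obs_uv = [np.array([4.0, 1.0]), np.array([-2.0, 3.0])]

def objective(x, w_obs=1.0, w_fg=0.1):
    """Weighted sum of squared misfits to observations and forecast."""
    uv = x.reshape(2, n, n)
    j = w_fg * np.sum((uv - forecast) ** 2)
    for (i, k), o in zip(obs_ij, obs_uv):
        j += w_obs * np.sum((uv[:, i, k] - o) ** 2)
    return j

res = minimize(objective, forecast.ravel(), method="CG")
analysis = res.x.reshape(2, n, n)

def dealias(aliases, analyzed_uv):
    """Remove ambiguity: pick the alias closest to the analyzed wind."""
    return min(aliases, key=lambda a: np.linalg.norm(a - analyzed_uv))

aliases = [np.array([3.5, 1.2]), np.array([-3.5, -1.2])]  # ambiguous pair
chosen = dealias(aliases, analysis[:, 2, 3])
```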

Illustrative results are shown for a limited region in the North Atlantic containing the QE II storm and for a limited amount (∼12 min) of data observed near the synoptic time 1200 GMT 10 September 1978. Within the SASS data swath the resulting gridded analysis is a reasonable representation of the surface wind and is not sensitive to the forecast surface wind field. The analysis is sensitive to wind directions reported by ships near the center of the storm. The wind circulation center present in the forecast is moved by the analysis. The resulting dealiased SASS wind directions are noisy.

Full access
Ross N. Hoffman

Abstract

No abstract available.

Full access
Ross N. Hoffman

Abstract

Variational analysis methods allow information from a variety of sources, including current observations and a priori statistics and constraints, to be combined by minimizing the lack of fit to the various sources of information. In this study, the ambiguity of the SASS winds is removed by a variational analysis method that combines the following information: a variety of current surface wind observations (radiosonde, ship, satellite scatterometer), earlier observations in the form of a forecast, smoothness constraints on the horizontal wind, its divergence, and its vorticity, and a dynamical constraint on the time rate of change of vorticity of the surface wind. The constraints used are “weak” constraints in the sense of Sasaki. In an earlier work, constraints were not used. The scatterometer wind magnitudes are nearly unambiguous and are treated specially.

The lack of fit to data and constraints is measured by the so-called objective function. Here, a discrete form of the solution is assumed, the objective function is expressed in terms of discrete variables, and a minimum is found by a conjugate gradient method. Global analyses are possible.
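
The fragment below sketches, with invented fields and weights, how a weak (penalty) constraint of this kind enters a discrete objective function that is then minimized by a conjugate gradient method; the paper's actual objective and constraints are more elaborate:

```python
import numpy as np
from scipy.optimize import minimize

n = 16
rng = np.random.default_rng(0)
y = rng.normal(size=(n, n))             # stand-in gridded observations

def smoothness(f):
    """Squared discrete Laplacian: a simple smoothness penalty."""
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
    return np.sum(lap ** 2)

def J(x, lam=0.5):
    """Data misfit plus weak-constraint penalty; lam sets its weight."""
    f = x.reshape(n, n)
    return np.sum((f - y) ** 2) + lam * smoothness(f)

res = minimize(J, np.zeros(n * n), method="CG")   # conjugate gradient
analysis = res.x.reshape(n, n)
```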

Compared to previous results, the use of constraints results in a more robust analysis procedure and produces better transitions between data-rich and data-poor regions, but the analyses, like all objective analyses, are still lacking common sense in some important respects.

The scatterometer data have been processed by two methods, one which bins and one which pairs the individual scatterometer values. Both data sets are analyzed for the case of an intense cyclone centered south of Japan at 0000 GMT 6 September 1978. Only slightly better results are obtained with the finer-resolution winds produced by the pairing algorithm, although it is clear that they contain far more detailed information.

Full access
Ross N. Hoffman

Abstract

The simulated climates of highly truncated nonlinear models based on the primitive equations (PE), balance equations (BE), and quasi-geostrophic (QG) equations are compared in order to determine the effects of the filtering approximations. The models and numerical procedures are identical in all possible respects. At low forcing the QG and PE climates agree in most respects. At high forcing the QG model gives only a qualitatively correct simulation of the PE mean state and energy cycle. In contrast, the BE model is relatively successful at simulating the PE climate.

Two attempts are made to get better simulations of the PE climate within the QG framework: the tuned and perturbed QG models. The tuned QG model is better than the untuned version at simulating the PE time-mean model state, but the simulated energy cycle is not improved at all. In the perturbed QG model, randomly generated perturbations, designed so that their statistics are similar to the statistics of the observed prediction errors, are added to the model state at regular intervals. The perturbed QG model is nearly as successful as the BE model at simulating the PE climate.
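
A schematic of the perturbed-model idea, with a placeholder linear "model" and a toy error covariance standing in for the observed prediction-error statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
nstate, interval, nsteps = 10, 5, 50
# Toy "prediction error" covariance: variance 0.015, covariance 0.005.
cov = 0.01 * np.eye(nstate) + 0.005
L = np.linalg.cholesky(cov)

def model_step(x):
    """Placeholder linear dynamics standing in for the QG model."""
    return 0.99 * x + 0.01 * np.roll(x, 1)

x = rng.normal(size=nstate)
for t in range(1, nsteps + 1):
    x = model_step(x)
    if t % interval == 0:                 # at regular intervals...
        x += L @ rng.normal(size=nstate)  # ...add a correlated perturbation
```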

Full access
Ross N. Hoffman

The earth's atmosphere may be chaotic and very likely is sensitive to small perturbations. Certainly, very simple nonlinear dynamical models of the atmosphere are chaotic, and the most realistic numerical weather prediction models are very sensitive to initial conditions. Chaos implies that there is a finite predictability time limit no matter how well the atmosphere is observed and modeled. Extreme sensitivity to initial conditions suggests that small perturbations to the atmosphere may effectively control the evolution of the atmosphere, if the atmosphere is observed and modeled sufficiently well.

The architecture of a system to control the global atmosphere and the components of such a system are described. A feedback control system similar to many used in industrial settings is envisioned. Although the weather controller is extremely complex, the existence of the required technology is plausible in the time range of several decades.

While the concept of controlling the weather has often appeared in science fiction literature, this statement of the problem provides a scientific basis and a system architecture to actually implement global weather control. Large-scale weather control raises important legal and ethical questions. The nation that controls its own weather will perforce control the weather of other nations. Weather “wars” are conceivable. An international treaty may be required, limiting the use of weather control technology.

Full access
Ross N. Hoffman

Abstract

A one-dimensional (1D) analysis problem is defined and analyzed to explore the interaction of observation thinning or superobservation with observation errors that are correlated or systematic. The general formulation might be applied to a 1D analysis of radiance or radio occultation observations in order to develop a strategy for the use of such data in a full data assimilation system, but is applied here to a simple analysis problem with parameterized error covariances. Findings for the simple problem include the following. For a variational analysis method that includes an estimate of the full observation error covariances, the analysis is more sensitive to variations in the estimated background and observation error standard deviations than to variations in the corresponding correlation length scales. Furthermore, if everything else is fixed, the analysis error increases with decreasing true background error correlation length scale and with increasing true observation error correlation length scale. For a weighted least squares analysis method that assumes the observation errors are uncorrelated, best results are obtained for some degree of thinning and/or tuning of the weights. Without tuning, the best strategy is superobservation with a spacing approximately equal to the observation error correlation length scale.
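
The following sketch sets up a 1D analysis of this general kind: an optimal (BLUE/variational) analysis using full Gaussian-shaped background and observation error covariances, followed by the pair-averaging superobbing step. Grid spacing, variances, and length scales are illustrative placeholders, not the paper's values:

```python
import numpy as np

def corr(pts, scale):
    """Gaussian-shaped correlation matrix for 1D points."""
    d = pts[:, None] - pts[None, :]
    return np.exp(-0.5 * (d / scale) ** 2)

xg = np.linspace(0.0, 10.0, 51)         # analysis grid
oi = np.arange(0, xg.size, 2)           # observe every other grid point
xo = xg[oi]
B = 1.0 * corr(xg, 2.0)                 # background error covariance
R = 0.5 * corr(xo, 1.0)                 # *correlated* observation errors
H = np.zeros((xo.size, xg.size))
H[np.arange(xo.size), oi] = 1.0         # observation operator
xb = np.zeros(xg.size)                  # background state
y = np.sin(xo)                          # stand-in observations

# Variational/BLUE analysis using the full observation error covariance:
xa = xb + B @ H.T @ np.linalg.solve(H @ B @ H.T + R, y - H @ xb)

# Superobbing: average adjacent pairs before analysis, giving a spacing
# closer to the observation error correlation length scale.
y_super = 0.5 * (y[0::2] + y[1::2])
```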

Full access
Ross N. Hoffman

Abstract

For a discretized deterministic model of the atmosphere, a single point in the model's phase space defines a complete trajectory. It is possible to choose a point which minimizes the differences between the model trajectory starting at the chosen point and all data observed during an analysis period (−T ≤ t ≤ 0). In this way data and model dynamics are combined to yield a four-dimensional analysis exactly satisfying the model equations. This analysis is the solution of the model's equations of motion defined by the optimal initial conditions chosen at t = −T. Therefore, provided T is larger than the adjustment time of the model, there should be no need for any initialization at the start of the forecast at t = 0.
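
In outline, the method amounts to a nonlinear least-squares fit of the initial condition to all observations in the window, as in the toy sketch below; the dynamics, window length, and observations are placeholders, not the paper's models:

```python
import numpy as np
from scipy.optimize import minimize

nstate, nsteps = 6, 20

def step(x):
    """Placeholder dynamics standing in for the atmospheric model."""
    return 0.95 * x + 0.05 * np.roll(x, 1)

def trajectory(x0):
    xs = [x0]
    for _ in range(nsteps):
        xs.append(step(xs[-1]))
    return np.array(xs)                    # states at t = -T, ..., 0

rng = np.random.default_rng(2)
obs = trajectory(rng.normal(size=nstate))
obs += 0.05 * rng.normal(size=obs.shape)   # noisy window observations

def J(x0):
    """Misfit of the whole model trajectory to all data in the window."""
    return np.sum((trajectory(x0) - obs) ** 2)

res = minimize(J, np.zeros(nstate), method="BFGS")
x0_analysis = res.x                        # optimal initial condition at t = -T
```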

This report describes some preliminary experiments which use highly simplified filtered and primitive equation models of an atmosphere with f-plane geometry. These simple models are used because of the substantial computational resources required by the minimization method. It is demonstrated that the method is stable in an assimilation cycle, is able to maintain an accurate estimate of the motion field from temperature observations alone and yields a small analysis error. Unfortunately, forecasts made from the four-dimensional analyses exhibit rapid error growth initially; as a result these forecasts are better than ordinary forecasts only for the first 24 h. Beyond 24 h both types of forecasts have the same skill.

Full access
Ross N. Hoffman
and
Thomas Nehrkorn

Abstract

Retrieving information from remotely sensed data and analyzing the resulting geophysical parameters on a regular grid may be combined using a variational analysis method. This approach is applicable to the problem of retrieving temperature and cloud parameters within a three-dimensional volume from observations of infrared radiances at several frequencies and locations. The feasibility of such a three-dimensional retrieval method is demonstrated using simulated HIRS2 data. The method is successful in fitting the radiance data with a three-dimensional temperature representation. When clouds are included in the problem the results are sensitive to the initial estimates of the cloud parameters.
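
A much-reduced illustration of the variational retrieval idea, fitting a single temperature profile to a few synthetic "radiances" through placeholder weighting functions (no HIRS2 physics, no clouds, no horizontal dimension):

```python
import numpy as np
from scipy.optimize import minimize

nlev, nchan = 20, 6
p = np.linspace(0.0, 1.0, nlev)                     # normalized levels
centers = np.linspace(0.1, 0.9, nchan)
W = np.array([np.exp(-0.5 * ((p - c) / 0.15) ** 2) for c in centers])
W /= W.sum(axis=1, keepdims=True)                   # weighting functions

truth = 250.0 + 40.0 * p                            # "true" profile (K)
y = W @ truth                                       # simulated radiances

def J(t, lam=0.1):
    """Radiance misfit plus a smoothness penalty on the profile."""
    return np.sum((W @ t - y) ** 2) + lam * np.sum(np.diff(t) ** 2)

res = minimize(J, np.full(nlev, 260.0), method="CG")
retrieved = res.x
```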

Full access
Sid-Ahmed Boukabara
and
Ross N. Hoffman

Abstract

The Advanced Systems Performance Evaluation tool for NOAA (ASPEN) was developed to support the design and evaluation of existing and planned observing systems through comparative assessments, trade-off analyses, and design optimization studies. ASPEN is a dynamic tool that rapidly assesses the benefit and cost effectiveness of environmental data obtained from any set of observing systems, whether ground-based or space-based, whether an individual sensor or a collection of sensors. The ASPEN-assessed cost effectiveness accounts for the ability to measure the environment, the costs associated with acquiring these measurements, and the degree of usefulness of these measurements to users and applications. ASPEN computes both the use benefit, measured as a requirements-satisfaction metric, and the cost effectiveness (equal to the benefit-to-cost ratio). ASPEN provides a uniform interface to compare the performance of different observing systems and to capture the requirements and priorities of applications. This interface describes the environment in terms of geophysical observables and their attributes. A prototype implementation of ASPEN is described and demonstrated in this study to assess the benefits of several observing systems for a range of applications. ASPEN could be extended to other types of studies, such as assessing the cost effectiveness of commercial data to applications in all the NOAA mission service areas, and ultimately to societal application areas, thereby becoming a valuable addition to the observing systems assessment toolbox.
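
The two outputs named above might be computed along the following lines; the satisfaction curve, requirement thresholds, weights, and cost are invented placeholders rather than ASPEN's actual metrics:

```python
def satisfaction(performance, threshold, objective):
    """Score 0 below threshold, 1 at or above objective, linear between.
    A hypothetical satisfaction curve; ASPEN's actual metric may differ."""
    if performance <= threshold:
        return 0.0
    if performance >= objective:
        return 1.0
    return (performance - threshold) / (objective - threshold)

# One hypothetical observing system scored against two requirements:
# (measured performance, threshold, objective, priority weight)
reqs = [
    (0.8, 0.5, 1.0, 2.0),
    (0.3, 0.2, 0.9, 1.0),
]
benefit = (sum(w * satisfaction(p, t, o) for p, t, o, w in reqs)
           / sum(w for _, _, _, w in reqs))       # use benefit in [0, 1]
cost = 3.5                                        # arbitrary cost units
cost_effectiveness = benefit / cost               # benefit-to-cost ratio
```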

Full access
Thomas Nehrkorn
and
Ross N. Hoffman

Abstract

The inference of profiles of relative humidity from cloud data was investigated in a collocation study of 3DNEPH and radiosonde data over North America. Regression equations were developed for the first two EOFs of relative humidity, using vertically compacted and horizontally averaged 3DNEPH cloud cover values as predictors. The regression equations were found to have smaller errors than existing level-to-level cloud-to-humidity conversion techniques. However, no attempt was made to tune the existing methods for optimal performance.
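
In outline, the regression described above might look like the following, with synthetic data standing in for the collocated 3DNEPH and radiosonde values:

```python
import numpy as np

rng = np.random.default_rng(3)
nprof, nlev, npred = 200, 15, 4
rh = rng.uniform(0.0, 100.0, size=(nprof, nlev))   # stand-in RH profiles
cloud = rng.uniform(0.0, 1.0, size=(nprof, npred)) # compacted cloud cover

anom = rh - rh.mean(axis=0)
_, _, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt[:2]                         # first two EOFs (over levels)
pcs = anom @ eofs.T                   # EOF coefficients per profile

X = np.column_stack([np.ones(nprof), cloud])       # predictors + intercept
coef, *_ = np.linalg.lstsq(X, pcs, rcond=None)     # regression equations

rh_hat = rh.mean(axis=0) + (X @ coef) @ eofs       # regressed RH profiles
```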

Full access