Search Results

1 - 10 of 10 items for Author or Editor: R. S. Seaman
R. S. Seaman

Abstract

No abstract available.

Full access
R. S. Seaman

Abstract

The parameters of the Barnes objective analysis scheme are often chosen on the basis of a desired frequency response, but they can also be chosen using the criterion of the theoretical root-mean-square interpolation error (E). Using the latter criterion, it is shown how the parameters of a common two-parameter Barnes implementation can be optimized for any specified irregular observational distribution. The problem is then generalized by means of design curves that enable the parameters to be chosen according to (i) average data spacing relative to the correlation coefficient function length scale of the field being analyzed, and (ii) the observational error variance relative to the variance of the true field (noise-to-signal ratio).

A large set of real data was analyzed using parameters chosen on the basis of interpolation theory. The analyses were assessed by comparison against a set of withheld data. The result suggests that minimum E is a satisfactory criterion for objectively choosing the Barnes parameters when the statistical properties of the true field and of the observational error are known in advance. It is also shown that the chosen two-parameter Barnes implementation is robust, in the sense that a large region of parameter space corresponds to values of E only slightly above the minimum.
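The two-parameter scheme discussed above can be illustrated with a minimal sketch of the common successive-correction form of the Barnes analysis: a Gaussian-weighted average of the observations, followed by a correction pass that spreads the observation residuals with a sharpened weight function. The parameter names kappa (length-scale parameter) and gamma (convergence parameter) follow common usage and are illustrative, not necessarily the paper's notation.

```python
import numpy as np

def barnes_analysis(obs_xy, obs_val, grid_xy, kappa, gamma, passes=2):
    """Two-parameter Barnes objective analysis (illustrative sketch):
    Gaussian-weighted averaging of irregular observations onto grid
    points, refined by correction passes using a reduced length-scale
    parameter gamma * kappa (0 < gamma < 1)."""
    def weights(targets, sources, k):
        # Normalized Gaussian weights exp(-d^2 / k) between point sets.
        d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / k)
        return w / w.sum(axis=1, keepdims=True)

    # First pass: smooth field on the grid and at the observation sites.
    grid_field = weights(grid_xy, obs_xy, kappa) @ obs_val
    obs_field = weights(obs_xy, obs_xy, kappa) @ obs_val
    for _ in range(passes - 1):
        # Correction pass: spread residuals with the sharper weights.
        resid = obs_val - obs_field
        grid_field = grid_field + weights(grid_xy, obs_xy, gamma * kappa) @ resid
        obs_field = obs_field + weights(obs_xy, obs_xy, gamma * kappa) @ resid
    return grid_field
```

Optimizing (kappa, gamma) against the minimum-E criterion, rather than a desired frequency response, is the choice examined in the abstract.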

Full access
R. S. Seaman

Abstract

The dependence of the root-mean-square analysis error of an observed element, its gradient and its Laplacian upon 1) observational density, 2) observational error and 3) spatial correlation of observational error has been assessed using optimum interpolation theory. The results have been generalized by scaling the observational separation s according to a length-scale parameter L, which corresponds to the Gaussian population spatial autocorrelation function μ(s) = exp(−s²/L²) of the forecast error of the observed element. The responses of different synoptic regimes (subpopulations) were discriminated according to the subpopulation length-scale parameter. The number of observations considered at a grid point was limited to either 12 or 4.

It has been shown that 1) an increasing spatial correlation of observational error has a different effect on the analysis errors of absolute and differential quantities, and 2) that the point of diminishing returns, below which an increasing observational density produces little improvement in analysis accuracy, is rather sensitive to the subpopulation length-scale parameter. The results highlight the necessity, for network planning, of considering the relative importance of the absolute value and differential characteristics of an observed element, and the response of defined “extreme” synoptic regimes in addition to the gross population response.

Some of the experiments were repeated with an inverse polynomial autocorrelation function instead of the Gaussian. The results indicate that if, as some climatologically based autocorrelations suggest, the former function agrees better with observed temperature data, then the use of the Gaussian function may tend to underestimate the benefit of high-resolution remote soundings.
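The analysis-error statistics discussed above follow from the standard optimum-interpolation expressions. Below is a minimal sketch of the normalized analysis-error variance at a single grid point, assuming unit signal variance, the Gaussian autocorrelation μ(s) = exp(−s²/L²) given above, a noise-to-signal ratio noise_sig, and a uniform observational-error correlation obs_corr; the function and argument names are hypothetical.

```python
import numpy as np

def oi_analysis_error(grid_pt, obs_pts, L, noise_sig, obs_corr=0.0):
    """Normalized optimum-interpolation analysis-error variance at one
    grid point: 1 - b^T (B + R)^{-1} b, with Gaussian background-error
    correlations B, obs-grid correlations b, and observational-error
    covariance R with optional uniform spatial correlation."""
    obs_pts = np.asarray(obs_pts, float)
    d2 = ((obs_pts[:, None, :] - obs_pts[None, :, :]) ** 2).sum(-1)
    B = np.exp(-d2 / L**2)                 # background-error correlations
    n = len(obs_pts)
    # Obs-error covariance: diagonal 1, off-diagonal obs_corr, scaled
    # by the noise-to-signal ratio.
    R = noise_sig * ((1 - obs_corr) * np.eye(n) + obs_corr)
    b = np.exp(-(((obs_pts - grid_pt) ** 2).sum(-1)) / L**2)
    w = np.linalg.solve(B + R, b)          # optimal interpolation weights
    return 1.0 - b @ w                     # normalized error variance
```

Sweeping the observational separation relative to L, noise_sig, and obs_corr reproduces the kind of sensitivity curves the abstract describes.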

Full access
D. J. Gauntlett
and
R. S. Seaman

Abstract

An attempt has been made to isolate some of the problems likely to be associated with the practical implementation of four-dimensional data assimilation schemes in the Southern Hemisphere. In particular, the requirement for a reference-level specification over the Southern Hemisphere, and the importance of assimilation frequency (i.e., the period between data insertions) are investigated.

The assimilation scheme used consists of three components: a “sigma” surface analysis model to “insert” data as they become available, an initialization module of the Nitta-Hovermale type to remove high-frequency inertio-gravitational oscillations, and a multi-level primitive equation model to “advect” the assimilated atmospheric state forward in time. In all experiments, real data of both the conventional and satellite-derived types are used. Verifications concentrate on the synoptic verisimilitude of the assimilation process and, where possible, the impact of various assimilation procedures on subsequent numerical prognosis.

Results underline the critical importance of reference-level pressure in the scheme evaluated. There is also some suggestion of improved performance when the assimilation frequency is increased.

Full access
G. A. Mills
and
R. S. Seaman

Abstract

A new limited-area data assimilation system has been developed in the BMRC for operational use by the Australian Bureau of Meteorology. The system analyzes deviations from a primitive equation model forecast, using two-dimensional univariate statistical interpolation (SI) to analyze mass data and three-dimensional univariate SI to analyze wind data. The mass and wind increment analyses may mutually influence each other through variational techniques.

Analysis increments are vertically interpolated to prognosis model sigma surfaces, added to the forecast variables, and the model integrated forward to the next analysis time. This ongoing analysis-forecast cycle is now being implemented operationally.

This paper describes in detail the analysis methodology, and presents results from a 17-day trial period. The analyses are compared with operationally prepared analyses for the same period individually, as means, and by data fitting statistics. It is shown that the assimilated analyses have stronger jet streams and greatly improved detail in the moisture analyses. It is also shown that vertical motion patterns in the guess fields are preserved through the analysis initialization phase of the assimilation cycle, and that these vertical motion fields correlate well with the areas of cloud seen in satellite imagery.

Prognoses from this trial period show a much more rapid spinup of forecast rainfall rate than did a series of control forecasts based on operational analyses, and both mean rainfall for the 17-day period and individual cases are presented to demonstrate the improved skill of forecasts from the assimilated analyses. Objective verification of mass-field forecasts showed considerable sensitivity of the forecasts to the particular set of bogus mean sea level pressure data used in the analysis; however, preliminary verification statistics from the first 15 days of operational parallel running showed that, at 24 hours, the assimilation system produced MSLP forecasts of skill similar to the operational forecasts but of greater skill at the upper levels, and at 36 hours its forecasts had greater skill at all levels.

Full access
S. Mark Leidner
,
David R. Stauffer
, and
Nelson L. Seaman

Abstract

Few data are available over the world’s oceans to characterize the initial atmospheric state in numerical models. Objective analysis in these regions is largely based on forecast fields obtained from a global model and used as the background (“first guess”). Unfortunately, global models often do not resolve the marine boundary layer (MBL) structure, which is important for simulating stratus clouds, coastal zone circulations, and electromagnetic wave propagation. Furthermore, initialization of the MBL in the coastal zone and data-sparse oceanic regions poses a challenging mesoscale modeling problem. The goal of this study, therefore, is to improve warm-season short-term mesoscale numerical prediction of California coastal zone meteorology by improving the model initial conditions in the coastal zone and offshore data-void regions. Initialization strategies tested include standard static and dynamic techniques and a new marine boundary layer initialization scheme that uses a dynamic initialization based on the remarkably consistent summertime marine-layer climatology of the eastern Pacific Ocean.

The model used in this study is the Pennsylvania State University–National Center for Atmospheric Research fifth-generation Mesoscale Model (MM5). Experiments were performed for a typical summertime case (3–4 Aug 1990) to determine an initialization strategy suitable for coastal zone forecasting over the northeast Pacific. The meteorology in this case was dominated by quasi-stationary synoptic-scale high pressure over the ocean. Results from the model experiments were verified using 6-hourly coastal rawinsonde observations and visible-range satellite cloud imagery.

More accurate initial conditions were obtained by using dynamic initialization compared to static initialization. The most accurate initialization and short-range model forecasts were produced by assimilating a combination of observed data over land and climatological information offshore during the 12-h preforecast period. Through the 24-h forecast period, errors in the coastal zone PBL depth and marine inversion strength were reduced by 65% and 41%, respectively, compared to the static-initialization control experiments. Without proper initialization of the offshore MBL, coastal zone forecasts degraded with time due to the long timescale of physical processes responsible for generating the MBL structure over cold, low-latitude oceans. Therefore, improvement of the model initial conditions in the California coastal zone by assimilation of climatological information offshore in combination with observed conditions near the coast proved to be an effective strategy for increasing short-range forecast accuracy.

Full access
David R. Stauffer
,
Nelson L. Seaman
, and
Francis S. Binkowski

Abstract

A four-dimensional data assimilation (FDDA) scheme based on Newtonian relaxation or nudging has been developed and evaluated in the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) Limited-Area Mesoscale Model. It was shown in Part I of this study that continuous assimilation of standard-resolution rawinsonde observations throughout a model integration, rather than at only the initial time, can successfully limit large model error growth (amplitude and phase errors) while the model maintains intervariable consistency and generates realistic mesoscale structures not resolved by the data. The purpose of this paper is to further refine the previously reported FDDA strategy used to produce “dynamic analyses” of the atmosphere by investigating the effects of data assimilation within the planetary boundary layer (PBL).

The data used for assimilation include conventional synoptic-scale rawinsonde data and mesoalpha-scale surface data. The main objective of this study is to determine how to effectively utilize the combined strength of these two simple data systems while avoiding their individual weaknesses. Ten experiments, which use a 15-layer version of the model, are evaluated for two midlatitude, real-data cases.

It is found that the homogenizing effect of vertical mixing during free convective conditions allows the three-hourly surface-layer wind and mixing ratio observations to be applied throughout the model PBL according to an idealized conceptual model of boundary-layer structure. Single-level surface temperature observations, however, are often poorly representative of the boundary layer as a whole (e.g., shallow superadiabatic layers, nocturnal inversions), and are more attractive for FDDA applications when additional vertical profile information is available. Assimilation of surface wind and moisture data throughout the model PBL generally showed a positive impact on the simulated precipitation by better preserving the low-level structure and movement of systems (e.g., cyclones, fronts) during the 12-h periods bracketed by the standard rawinsonde data. Improved precipitation simulations due to assimilation of surface data are also possible even in cases with weak large-scale forcing, because a significant portion of the vertically integrated moisture convergence often occurs in the boundary layer. Overall, the best dynamic analyses of precipitation, PBL depth, surface-layer temperature and tropospheric mass and wind fields were obtained by nudging to analyses of rawinsonde wind, temperature, and moisture above the model PBL and to analyses of surface-layer wind and moisture within the model PBL.
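Newtonian relaxation, as described above, augments each prognostic equation with a term that relaxes the model state toward an analysis of the observations. A minimal one-variable sketch follows; the function and coefficient names (g_nudge for the nudging coefficient, weight for a confidence weight) are illustrative, not the paper's notation.

```python
def nudge_step(state, tendency, obs_analysis, dt, g_nudge, weight=1.0):
    """One forward step of Newtonian-relaxation FDDA: the model
    tendency is augmented by a relaxation term pulling the state
    toward the gridded observation analysis, scaled by the nudging
    coefficient g_nudge and a confidence weight."""
    relax = g_nudge * weight * (obs_analysis - state)
    return state + dt * (tendency + relax)
```

For stability the product dt * g_nudge * weight must be small; in practice the weight is where data like single-level surface temperature, which the abstract notes can be unrepresentative of the whole PBL, would be down-weighted.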

Full access
T. L. Hart
,
W. Bourke
,
P. J. Steinle
, and
R. S. Seaman

Abstract

Increasing the resolution of satellite soundings of temperature and moisture from 500 to 250 km is found to be beneficial for large-scale numerical weather prediction for the Southern Hemisphere, particularly for winter. The impact for the Northern Hemisphere was generally not significant, although mostly positive in sign. The results are based on parallel experiments with the Australian Bureau of Meteorology's global analysis and prediction system, with configurations differing only in the use of either the 500- or 250-km soundings. Ten 5-day forecasts for both a January and a July period were carried out.

Full access
Anthony J. Schroeder
,
David R. Stauffer
,
Nelson L. Seaman
,
Aijun Deng
,
Annette M. Gibbs
,
Glenn K. Hunter
, and
George S. Young

Abstract

An automated, rapidly relocatable nowcasting and prediction system, whose cornerstone is the full-physics, nested-grid, nonhydrostatic fifth-generation Pennsylvania State University–National Center for Atmospheric Research (PSU–NCAR) Mesoscale Model (MM5), has been under development at the Pennsylvania State University since the late 1990s. In the applications presented in this paper, the Rapidly Relocatable Nowcast-Prediction System (RRNPS) provides a continuous stream of highly detailed nowcasts, defined here as gridded meteorological fields produced by a high-resolution mesoscale model assimilating available observations and staying just ahead of the clock to provide immediately available current meteorological conditions. The RRNPS, configured to use 36-, 12-, and 4-km nested domains, is applied over the Great Plains for 18 case days in August 2001, over the East Coast region for 8 case days in April 2002, and for 12 case days during the winter and summer of 2003. The performance of the RRNPS is evaluated using subjective and statistical methods for runs with and without the use of continuous four-dimensional data assimilation (FDDA). A statistical evaluation of the dependence of RRNPS skill on the length of model integration yields further insight into the value added by FDDA in RRNPS nowcasts. It must be emphasized that unlike typical operational analysis systems, none of the current data are used in the nowcasts since the nowcasts are made available just ahead of the clock for immediate use. Because none of the verification data are assimilated into the RRNPS at the time of verification, this evaluation is a true test of the time-integrated effects of previous FDDA on current model solutions. Furthermore, the statistical evaluations also utilize independent data completely withheld from the system at all times.

Evaluation of the RRNPS versus observations on the 4- and 12-km grids shows that there is little difference in statistical skill between the two resolutions for the two application regions. However, subjective case evaluations indicate that mesoscale detail is added to the wind and mass fields on the 4-km domain of the RRNPS as compared to the coarser 12-km domain. Statistics suggest that 4-km resolution provides slightly more accurate meteorology for the domain including complex terrain and coastlines. The statistics also show that the use of continuous FDDA in a high-resolution mesoscale model improves the accuracy of the RRNPS nowcasts, and that this unique nowcast prediction system provides immediately available forecast-analysis products that are comparable or superior to those produced at operational centers, especially for the surface and the boundary layer. Finally, the RRNPS is also designed to run locally and on demand in a highly automated mode on modest computing platforms (e.g., a dual-processor PC) with potentially very limited data resources and nonstandard data communications.

Full access
David R. Stauffer
,
Nelson L. Seaman
,
Glenn K. Hunter
,
S. Mark Leidner
,
Annette Lario-Gibbs
, and
Saffet Tanrikulu

Abstract

This paper describes a new methodology developed to provide objective guidance for cost-effective siting of meteorological observations on the mesoscale for air quality applications. This field-coherence technique (FCT) is based on a statistical analysis of the mesoscale atmospheric structure defined by the spatial and temporal “coherence” in the meteorological fields. The coherence, as defined here, is a measure of the distance scale over which there is temporal consistency in the spatial structure within a variable field. It indicates how well a measurement taken at one location can be used to estimate the value of that field at another location at a given analysis time. The FCT postulates that the larger the field coherence is, the fewer measurement sites are needed to resolve adequately the dominant characteristics of that field.

Proof of concept was demonstrated using real data from an extensive field-program database over the San Joaquin Valley in the summer of 1990. The FCT next was applied to numerical model results for the same period, which produced similar guidance. The transferability of the methodology from real data to numerical model results having been demonstrated, the FCT then was applied in a model-based study over California’s South Coast Air Basin to contribute in the design of a new field program, the Southern California Ozone Study (SCOS97). Interpretation of the FCT results mostly corroborated a preliminary field-program design produced by the design team and based on past experience, subjective evaluation of historical datasets, and other considerations. However, the FCT results also led the design team to make several changes, which were confirmed by experts familiar with the meteorological behavior of the region and were included in the final SCOS97 field-program plan.
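Coherence, as defined above, measures how far apart two sites can be while a measurement at one still estimates the field at the other. One hypothetical coherence-style statistic, sketched below, takes the largest station separation at which the inter-site temporal correlation of a field still exceeds a cutoff; the function name, cutoff value, and exact statistic are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def coherence_length(positions, series, cutoff=0.6):
    """Hypothetical field-coherence statistic: the largest separation
    between two sites whose time series of a meteorological variable
    remain correlated above `cutoff`.  positions: (n_sites, 2) site
    coordinates; series: (n_sites, n_times) concurrent time series."""
    positions = np.asarray(positions, float)
    series = np.asarray(series, float)
    n = len(positions)
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            # Temporal correlation between the two sites' series.
            r = np.corrcoef(series[i], series[j])[0, 1]
            if r > cutoff:
                d = np.linalg.norm(positions[i] - positions[j])
                best = max(best, d)
    return best
```

A field with a large coherence length would, per the FCT's postulate, need fewer measurement sites to resolve its dominant structure.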

Full access