Search Results

You are looking at 1–10 of 24 items for Author or Editor: W. Stern
W. Stern and K. Miyakoda

Abstract

Assuming that SST provides the major lower boundary forcing for the atmosphere, observed SSTs are prescribed for an ensemble of atmospheric general circulation model (GCM) simulations. The ensemble consists of 9 “decadal” runs with different initial conditions chosen between 1 January 1979 and 1 January 1981 and integrated for about 10 years. The main objective is to explore the feasibility of seasonal forecasts using GCMs. The extent to which the individual members of the ensemble reproduce the solutions of each other (i.e., reproducibility) may be taken as an indication of potential predictability. In addition, the ability of a particular GCM to produce realistic solutions, when compared with observations, must also be addressed as part of the predictability problem.

A measure of reproducibility may be assessed from the spread among ensemble members. A normalized spread index, σns, can be defined at any point in space and time as the variability of the ensemble (σn) normalized by the climatological seasonal variability (σs). In the time mean it is found that σns is significantly below unity for certain regions. Low values of the spread index are seen generally in the Tropics, whereas the extratropics do not exhibit a high degree of reproducibility. However, if one examines plots in time of seasonal mean σns for the U.S. region, for example, it is found that for certain periods this index is much less than unity, perhaps implying “occasional potential predictability.” In this regard, time series of ensemble mean soil moisture and precipitation over the United States are compared with corresponding observations. This study reveals some marginal skill in simulating periods of drought and excessive wetness over the United States during the 1980s (i.e., the droughts of 1981 and 1988 and the excessive wetness during the 1982/83 El Niño). In addition, by focusing on regions of better time-averaged reproducibility (that is, the southeast United States and northeast Brazil), a clearer indication of a relationship between good reproducibility and seasonal predictability seems to emerge.
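Restating the abstract's definition in equation form (no quantities beyond those named above are introduced): at each point in space x and time t, the index is the ratio

```latex
\sigma_{ns}(\mathbf{x}, t) = \frac{\sigma_n(\mathbf{x}, t)}{\sigma_s(\mathbf{x})},
```

so that values well below unity indicate an ensemble spread that is small compared with climatological seasonal variability, i.e., high reproducibility.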

Full access
E. A. Fitzpatrick and W. R. Stern

Abstract

Over irrigated crops of cotton grown in a dry monsoonal environment, net radiation was measured daily over a period of one year, together with total radiation, duration of sunshine, and wet- and dry-bulb screen temperatures. Daily estimates of total, effective terrestrial, and net radiation were obtained from several empirical relationships, and these were compared with measured values. The reliability of alternative modes of estimating these components of the radiation balance was assessed.

When estimating total radiation from relative duration of sunshine, a hyperbolic relation was found more effective in accommodating low values than the commonly used linear relationship.

When estimating effective terrestrial radiation from a relationship used by Penman, it was necessary to derive constants appropriate to this environment using data on clear days. Effective terrestrial radiation was also estimated using a relationship based upon a proposal by Swinbank which does not include a vapor pressure term. Although wet-season estimates were found to be in significantly better agreement with observation when vapor pressure was included, satisfactory estimates can nevertheless be obtained without vapor pressure data. Significantly better estimates were obtained when total radiation was used rather than relative duration of sunshine in accounting for the effect of cloudiness on the downward long-wave radiation flux.
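For orientation (these generic forms are not quoted in the abstract, and the constants a–d below are placeholders for the site-specific values the authors refit for this environment): Penman-type estimates of effective terrestrial radiation take a form such as

```latex
R_{L} = \sigma T^{4}\,\bigl(a - b\sqrt{e_a}\bigr)\,\bigl(c + d\,n/N\bigr),
```

where T is air temperature, e_a is vapor pressure, and n/N is relative duration of sunshine, whereas Swinbank's proposal gives the downward long-wave flux from temperature alone, approximately 5.31 × 10⁻¹³ T⁶ (W m⁻², T in kelvins), with no vapor pressure term.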

Errors in the estimated effective terrestrial radiation were found to be the major contributors to inaccuracy in the estimation of net radiation.

Full access
W. L. Torgeson and S. C. Stern

Abstract

The design and development of an aircraft-borne isokinetic sampling device for collecting tropospheric aerosol particles is described. The device incorporates a two-stage impactor and a backup filter for collection and fractionation of atmospheric particles.

Full access
A. Navarra, W. F. Stern, and K. Miyakoda

Abstract

Spectral atmospheric general circulation models (GCMs) have been used for many years for the simulation and prediction of the atmospheric circulation, and their value has been widely recognized. Over the years, however, some deficiencies have been noticed. One of the major drawbacks is the inability of the spectral spherical harmonics transform to represent discontinuous features, resulting in Gibbs oscillations. In particular, precipitation and cloud fields present annoying ripple patterns, which may obscure true drought episodes in climate runs. Other fields, such as the surface winds along the Andes, are also plagued by these fictitious oscillations. On the other hand, it is not certain to what extent the large-scale flow may be affected. An attempt is made in this paper to alleviate this problem by changing the spectral representation of the fields in the GCM. The technique is to apply various filters to reduce the Gibbs oscillations. Lanczos and Cesàro filters are tested in both one and two dimensions. In addition, for two-dimensional applications an isotropic filter is tested. This filter is based on the Cesàro summation principle with a constraint on the total wavenumber. Finally, two-dimensional physical-space filters are proposed that can retain the peak values of high mountains. Two applications of these filters are presented.
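As a minimal illustration of the filtering idea (a sketch under assumptions, not the authors' implementation; the function names are hypothetical), the Lanczos and Cesàro filters can be applied as wavenumber-dependent damping factors on the retained spectral coefficients:

```python
import numpy as np

def lanczos_sigma(k, N):
    """Lanczos factor sinc(k/(N+1)): 1 at k = 0, tapering toward 0 at k = N."""
    # np.sinc(x) = sin(pi x) / (pi x)
    return np.sinc(k / (N + 1))

def cesaro_sigma(k, N):
    """Cesaro (Fejer) factor: linear taper 1 - k/(N+1)."""
    return 1.0 - k / (N + 1)

def filter_spectrum(coeffs, method="lanczos"):
    """Damp 1D spectral coefficients c_k (k = 0..N) to reduce Gibbs ripples."""
    N = len(coeffs) - 1
    k = np.arange(N + 1)
    sigma = lanczos_sigma(k, N) if method == "lanczos" else cesaro_sigma(k, N)
    return coeffs * sigma
```

For spherical harmonics, the isotropic variant described above would apply an analogous factor as a function of the total wavenumber rather than of the zonal and meridional indices separately.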

In the first application the method is applied to the orography field by filtering out sharp gradients or discontinuities. The numerical results with this method show some improvement in the cloud and precipitation fields, along with some improvement of the surface wind pattern, resulting in an overall better simulation.

In the second application, a Gibbs reduction technique is applied to the condensation process. In this paper the moist-adiabatic adjustment scheme is used for the cumulus parameterization, in addition to large-scale condensation. Numerical results with this method to reduce Gibbs oscillations due to condensation show some improvement in the distribution of rainfall, and the procedure significantly reduces the need for negative filling of moisture. Currently, however, this approach is only partially successful. The negative moisture area at high latitudes can be, to some extent, controlled by an empirical procedure, but the filter approach is not sophisticated enough to satisfactorily remove the complex Gibbs oscillations present in the condensation field.

Full access
W. F. Stern and J. J. Ploshay

Abstract

Major revisions to the Geophysical Fluid Dynamics Laboratory's (GFDL) continuous data-assimilation system have been implemented and tested. Shortcomings noted during the original processing of data from FGGE [First GARP (Global Atmospheric Research Program) Global Experiment] served as the basis for these improvements. This new system has been used to reanalyze the two FGGE special observing periods. The main focus here will be on assessing the changes to the assimilation system using comparisons of rerun test results with results from the original FGGE processing.

The key new features in the current system include: a reduction in the assimilation cycle from 12 to 6 h; the use of a 6-h forecast first guess for the OI (optimum-interpolation analysis) as opposed to the previous use of persistence as a first guess; an extension of the OI search range from 250 to 500 km with an increase in the maximum number of observations used per analysis point from 8 to 12; the introduction of incremental linear normal-mode initialization, eliminating the periodic nonlinear normal-mode initialization; and an increase in the horizontal resolution of the assimilating model from 30 waves to 42 waves, rhomboidally truncated.

Tests of the new system show a significant reduction in the level of noise, improved consistency between mass and momentum analyses, and a better fit of the analyses to observations. In addition, the new system has demonstrated a greater ability to resolve rapidly moving and deepening transient features, with an indication of less rejection of surface pressure data.

In addition to the quantities archived during the original FGGE data processing, components of diabatic heating from the assimilating model have also been archived. They should be used with caution to the extent that they reflect model bias and spinup in addition to real features of the general circulation.

Full access
R. W. Lindsay and H. L. Stern

Abstract

NASA's RADARSAT Geophysical Processor System (RGPS) uses sequential synthetic aperture radar (SAR) images to track the trajectories of some 30 000 points on the Arctic sea ice for periods of up to 6 months. Much of the Arctic basin is imaged and tracked every 3 days. The result is a highly detailed picture of how the sea ice moves and deforms. The points are initially spaced 10 km apart and are organized into four-cornered cells. The area and the strain rates are calculated for each cell for each new observation of its corners. The accuracy of the RGPS ice tracking, area-change, and deformation estimates must be known for the dataset to be useful for analysis, model validation, and data assimilation. Two comparisons are made to assess the accuracy. The first compares the tracking performed at two different facilities (the Jet Propulsion Laboratory in Pasadena, California, and the Alaska SAR Facility in Fairbanks, Alaska), between which the primary difference is the operator intervention. The error standard deviation of the tracking, not including geolocation errors, is 100 m, which is the pixel size of the SAR images. The second comparison is made with buoy trajectories from the International Arctic Buoy Program. The squared correlation coefficient for RGPS and buoy displacements is 0.996. The median magnitude of the displacement differences is 323 m. The tracking errors give rise to error standard deviations of 0.5% day⁻¹ in the divergence, shear, and vorticity. The uncertainty in the area change of a cell is 1.4% due to tracking errors and 3.2% due to resolving the cell boundary with only four points. The uncertainties in the area change and deformation invariants can be reduced substantially by averaging over a number of cells, at the expense of spatial resolution.
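A minimal sketch of how divergence, shear, and vorticity can be derived from the four tracked corners of a cell (an assumed construction via boundary line integrals and Green's theorem, not the RGPS production code; function and variable names are hypothetical):

```python
import numpy as np

def cell_deformation(x, y, u, v):
    """Mean velocity gradients over a polygonal cell from its corner values.

    x, y: corner positions (m), counterclockwise; u, v: corner velocities (m/s).
    """
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    A = 0.5 * np.sum(x * yn - xn * y)  # shoelace formula for cell area
    # Trapezoidal line integrals: du/dx = (1/A) * closed integral of u dy, etc.
    dudx = np.sum((u + np.roll(u, -1)) * (yn - y)) / (2 * A)
    dudy = -np.sum((u + np.roll(u, -1)) * (xn - x)) / (2 * A)
    dvdx = np.sum((v + np.roll(v, -1)) * (yn - y)) / (2 * A)
    dvdy = -np.sum((v + np.roll(v, -1)) * (xn - x)) / (2 * A)
    divergence = dudx + dvdy
    vorticity = dvdx - dudy
    shear = np.hypot(dudx - dvdy, dudy + dvdx)  # total shear deformation rate
    return divergence, shear, vorticity
```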

Full access
R. W. Lindsay and H. L. Stern

Abstract

A new Lagrangian sea ice model for the Arctic Ocean has been developed to accommodate the assimilation of integral measures of ice displacement over periods of days or weeks. The model is a two-layer (thick ice and open water) dynamic model with 600–700 cells spaced at roughly 100 km. The force balance equation is solved for each cell with standard wind and water stress terms, a Coriolis term, and an internal ice stress term. The internal stress is found using a viscous–plastic rheology and an elliptical yield curve. The strain rate is determined by the “Smoothed Particle Hydrodynamics” formalism, which determines the spatial derivatives of the velocity by a weighted summation of the velocities of adjacent cells. A length scale of 150 km is used. The model is driven with observed geostrophic winds and climatological-mean ocean currents. Ice growth and melt are found from a seasonal- and thickness-based lookup table because the current focus is on the model dynamics. The model ice velocity simulations are compared to observed buoy motion. Over the 5-yr period 1993–97 the correlation of the daily averaged model velocity with buoy velocities is R = 0.76 (N = 42 553, rms difference = 0.072 m s⁻¹, speed bias = +0.009 m s⁻¹, angle bias = 8.0°). This compares favorably with the correlation of a state-of-the-art Eulerian coupled ice–ocean model, where R = 0.74 in the summer and 0.66 in the winter over the same 5-yr period.
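To illustrate the smoothed-particle idea named above (a sketch under assumptions: the Gaussian kernel, its normalization, and all names below are choices of this example, not details given in the abstract):

```python
import numpy as np

L = 150e3  # smoothing length scale in metres (150 km, as in the abstract)

def kernel_grad(dx, dy, scale):
    """Gradient (w.r.t. the evaluation point) of a normalized 2D Gaussian
    kernel W = exp(-r^2/scale^2) / (pi scale^2), with dx = x_j - x_i."""
    w = np.exp(-(dx**2 + dy**2) / scale**2) / (np.pi * scale**2)
    return 2.0 * dx / scale**2 * w, 2.0 * dy / scale**2 * w

def velocity_gradient(i, x, y, u, v, area):
    """SPH-style estimate of du/dx, du/dy, dv/dx, dv/dy at cell i.

    x, y, u, v, area: arrays over all cells (positions in m, velocities in
    m/s, cell areas in m^2); cell area plays the role of the SPH volume element.
    """
    dx, dy = x - x[i], y - y[i]
    gx, gy = kernel_grad(dx, dy, L)
    dudx = np.sum(area * (u - u[i]) * gx)
    dudy = np.sum(area * (u - u[i]) * gy)
    dvdx = np.sum(area * (v - v[i]) * gx)
    dvdy = np.sum(area * (v - v[i]) * gy)
    return dudx, dudy, dvdx, dvdy
```

Subtracting the cell's own velocity before summing makes the estimate exact for a uniform flow and first-order accurate for a linearly varying one.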

Full access
F. Vitart, J. L. Anderson, and W. F. Stern

Abstract

The present study examines the simulation of the number of tropical storms produced in GCM integrations with a prescribed SST. A 9-member ensemble of 10-yr integrations (1979–88) of a T42 atmospheric model forced by observed SSTs has been produced; each ensemble member differs only in the initial atmospheric conditions. An objective procedure for tracking model-generated tropical storms is applied to this ensemble during the last 9 yr of the integrations (1980–88). The seasonal and monthly variations of tropical storm numbers are compared with observations for each ocean basin.

Statistical tools such as the chi-square test, the F test, and the t test are applied to the ensemble number of tropical storms, leading to the conclusion that the potential predictability is particularly strong over the western North Pacific and the eastern North Pacific, and to a lesser extent over the western North Atlantic. A set of tools including the joint probability distribution and the ranked probability score is used to evaluate the skill of this ensemble simulation. The simulation skill over the western North Atlantic basin appears to be exceptionally high, particularly during years of strong potential predictability.
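As a hedged illustration of how such tests apply to ensemble storm counts (synthetic data and a hypothetical layout, not the paper's): with one seasonal count per member per year, an F test asks whether year-to-year differences in the ensemble mean exceed the internal scatter among members, and a t test compares two individual years.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in: 9 members (rows) x 9 years (columns) of storm counts,
# with a different Poisson mean per year to mimic SST-forced variability.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=[8, 12, 9, 15, 7, 10, 11, 14, 9], size=(9, 9))

# One-way ANOVA F test: do yearly means differ beyond member scatter?
f_stat, p_val = stats.f_oneway(*counts.T)
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")

# t test comparing the ensembles of two individual years
t_stat, p_t = stats.ttest_ind(counts[:, 3], counts[:, 4])
print(f"t = {t_stat:.2f}, p = {p_t:.3f}")
```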

Full access
J. J. Ploshay, W. F. Stern, and K. Miyakoda

Abstract

The reanalysis of FGGE [First GARP (Global Atmospheric Research Program) Global Experiment] data for 128 days during two special observing periods has been performed, using an improved data-assimilation system and the revised FGGE level II-b dataset. The data-assimilation scheme features forward continuous (in time) data injection in both the original and the new systems. However, the major revisions in the new system include a better first guess and a more efficient dynamical balancing for the assimilation of observed data. The results of the implementation of this system are assessed by intercomparisons with the FGGE analyses of other institutions, such as ECMWF (European Centre for Medium-Range Weather Forecasts) and NMC (National Meteorological Center, Washington, D.C.), and also with the original GFDL (Geophysical Fluid Dynamics Laboratory) analysis. The quality of the new GFDL analysis is now comparable to those of the other two institutions. However, the moisture analysis appears to be appreciably different, suggesting that the cumulus convection parameterizations and the boundary-layer moisture fluxes in the models are responsible for this discrepancy.

A detailed investigation of the results has been carried out by comparing the analyses with radiosonde observations. This verification reveals that temperature and wind differences have been reduced considerably from the original to the new GFDL analysis; they are now competitive with those of ECMWF and NMC, whereas for geopotential height the differences of the GFDL reanalysis are larger than those of the original GFDL analysis as well as those of ECMWF and NMC. A comparative study is also made with UCLA analyses over Asia in connection with the Indian monsoon. The results indicate that the qualities of both analyses are comparable. The capability of representing Madden–Julian oscillations in the reanalysis and in the ECMWF and old GFDL analyses is investigated by comparison with satellite observations. It is revealed that these oscillations are successfully reproduced by the new analysis; however, the agreement with the satellite data is not quite satisfactory. The utilization of satellite-observed winds (satobs) and aircraft data (aireps) in the data assimilation needs particular care. It appears that the quality control of these data in the GFDL reanalysis is too restrictive; in other words, the toss-out criterion for wind data is too small. A consequence of the failure to accept some single-level data turns out to be a fairly large discrepancy in representing the maximum wind speed in the analysis. It is also argued that the current forward continuous-injection scheme is not adequate for obtaining diabatic quantities for the archive.

Full access
D. E. Waliser, K. M. Lau, W. Stern, and C. Jones

Abstract

The objective of this study is to estimate the limit of dynamical predictability of the Madden–Julian oscillation (MJO). Ensembles of “twin” predictability experiments were carried out with the NASA Goddard Laboratory for the Atmospheres (GLA) atmospheric general circulation model (AGCM) using specified annual cycle SSTs. Initial conditions were taken from a 10-yr control simulation during periods of strong MJO activity identified via extended empirical orthogonal function (EOF) analysis of 30–90-day bandpassed tropical rainfall. From this analysis, 15 cases were chosen when the MJO convective center was located over the Indian Ocean, Maritime Continent, western Pacific Ocean, and central Pacific Ocean, respectively, making 60 MJO cases in total. In addition, 15 cases were selected that exhibited very little to no MJO activity. Two different sets of small random perturbations were added to these 75 initial states. Simulations were then performed for 90 days from each of these 150 perturbed initial conditions. A measure of potential predictability was constructed based on a ratio of the signal associated with the MJO, in terms of rainfall or 200-hPa velocity potential (VP200), and the mean-square error between sets of twin forecasts. This ratio indicates that useful predictability for this model's MJO extends out to about 25–30 days for VP200 and to about 10–15 days for rainfall. This is in contrast to the timescales of useful predictability associated with persistence forecasts or forecasts associated with daily “weather” variations, which in either case extend out only to about 10–15 days for VP200 and 8–10 days for rainfall. The predictability measure shows modest dependence on the phase of the MJO, with greater predictability for the convective phase at short (< ~5 days) lead times and for the suppressed phase at longer (> ~15 days) lead times. In addition, the predictability of intraseasonal variability during periods of weak MJO activity is significantly diminished compared to periods of strong MJO activity. The implications of these results as well as their associated model and analysis caveats are discussed.
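A schematic of the signal-to-error measure described above (a sketch with hypothetical array names and shapes; the study's actual construction of the signal and error terms is more involved):

```python
import numpy as np

def predictability_limit(signal_var, twin_a, twin_b):
    """Longest lead (days) at which the MJO signal still exceeds twin error.

    signal_var : (n_leads,) variance of the MJO-filtered field at each lead
    twin_a, twin_b : (n_cases, n_leads) forecasts from paired perturbed runs
    """
    mse = np.mean((twin_a - twin_b) ** 2, axis=0)  # error growth with lead time
    ratio = signal_var / mse                       # > 1: signal above error
    useful = np.nonzero(ratio > 1.0)[0]
    return int(useful[-1]) + 1 if useful.size else 0
```

Under a threshold criterion of this kind, the limits quoted in the abstract (about 25–30 days for VP200 and 10–15 days for rainfall) would be read off where the ratio curve crosses the threshold.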

Full access