Search Results

You are looking at 31–40 of 41 items for

  • Author or Editor: Robert E. Livezey
Anthony G. Barnston, Robert E. Livezey, and Michael S. Halpert

Abstract

A possible relationship between the phase of the Quasi-Biennial Oscillation (QBO) and the effect of the Southern Oscillation (SO) on the January-February climate in the Northern Hemisphere is examined. Findings suggest a preference for the tropical/Northern Hemisphere (TNH) circulation pattern in response to anomalies in the SO in east QBO phase years, and for the Pacific/North American (PNA) pattern in west QBO phase years. This extends previous findings relating the strength of the TNH pattern to tropical Pacific sea surface temperature during ENSO episodes.

This differentiation has fairly clear-cut implications for the January-February United States surface temperature anomaly pattern when a low (high) SO episode is in progress. The TNH emphasizes warmth (cold) in the Great Lakes/western Midwest, whereas the PNA induces a generally higher-amplitude pattern, emphasizing cold (warmth) in the Southeast and warmth (cold) in the western third of the country. The SO-climate relationships appear approximately linear for each of the two QBO phases. A hypothetical physical mechanism through which this process might operate is briefly mentioned.

Full access
John E. Janowiak, Arnold Gruber, C. R. Kondragunta, Robert E. Livezey, and George J. Huffman

Abstract

The Global Precipitation Climatology Project (GPCP) has released monthly mean precipitation estimates that combine gauge observations and satellite-derived precipitation estimates. Estimates of standard random error for each month at each grid location are also provided in this data release. One of the primary intended uses of this dataset is the validation of climatic-scale precipitation fields produced by numerical models. Nearly coincident with this dataset's development, the National Centers for Environmental Prediction and the National Center for Atmospheric Research have joined in a cooperative effort to reanalyze meteorological fields from the present back to the 1940s using a fixed, state-of-the-art data assimilation system and a large input database.

In this paper, monthly accumulations of reanalysis precipitation are compared with the GPCP combined rain gauge–satellite dataset over the period 1988–95. A unique feature of this comparison is the use of the standard error estimates contained in the GPCP combined dataset. These errors are incorporated into the comparison to provide a more realistic assessment of the reanalysis model's performance than could be attained using the mean fields alone. Variability on timescales from intraseasonal to interannual is examined in both the GPCP and reanalysis precipitation. While the representation of large-scale features compares well between the two datasets, substantial differences are observed on regional scales. This result is not unexpected, since present-day data assimilation systems are not designed to incorporate observations of precipitation. Furthermore, inferences of deficiencies in the reanalysis precipitation should not be projected onto other fields whose observations have been assimilated directly into the reanalysis model.
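
To illustrate the kind of error-aware comparison described above, here is a minimal Python sketch, assuming hypothetical arrays of monthly GPCP means, their standard random errors, and reanalysis values on a common grid (all names and values are illustrative, not from the paper):

```python
import numpy as np

def error_normalized_difference(reanalysis, gpcp_mean, gpcp_std_err):
    """Difference between reanalysis and GPCP precipitation, expressed
    in units of the GPCP standard random error.

    Grid points where |z| <= 1 are indistinguishable from GPCP within
    its stated observational uncertainty.
    """
    return (reanalysis - gpcp_mean) / gpcp_std_err

# Illustrative use on a single month's 2.5-degree grid (synthetic data).
rng = np.random.default_rng(0)
gpcp = rng.gamma(2.0, 1.5, size=(72, 144))       # mm/day, hypothetical
err = 0.2 * gpcp + 0.1                           # hypothetical std errors
model = gpcp + rng.normal(0.0, 0.5, size=gpcp.shape)
z = error_normalized_difference(model, gpcp, err)
print("fraction of grid points within GPCP error:", np.mean(np.abs(z) <= 1.0))
```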

Full access
Robert E. Livezey, Michiko Masutani, Ants Leetmaa, Hualan Rui, Ming Ji, and Arun Kumar

Abstract

A prominent year-round ensemble response to a global sea surface temperature (SST) anomaly field fixed to that for January 1992 (near the peak of a major warm El Niño–Southern Oscillation episode) was observed in a 20-yr integration of the general circulation model used for operational seasonal prediction by the U.S. National Weather Service. This motivated a detailed observational reassessment of the teleconnections between strong SST anomalies in the central equatorial Pacific Ocean and both Pacific–North America region 700-hPa heights and U.S. surface temperatures and precipitation. The approach consisted of forming monthly mean composites separately from cases in which the SST anomaly in a key area of the central equatorial Pacific Ocean was either large and positive or large and negative. Extensive permutation tests were conducted to test null hypotheses of no signal in these composites. The results provided a substantial case for the presence of teleconnections to either positive or negative SST anomalies in every month of the year. These signals varied seasonally (sometimes with substantial month-to-month changes) and, when present for both SST-anomaly signs in a particular month, usually were not similarly phased patterns of opposite polarity (i.e., the SST–teleconnected-variable relationships were most often nonlinear).
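
The sketch below illustrates the logic of such a permutation test for a single grid point; the composite definition, counts, and names are simplifying assumptions, not the authors' exact procedure:

```python
import numpy as np

def composite_permutation_pvalue(anoms, composite_idx, n_perm=10_000, seed=1):
    """Two-sided permutation test of the null hypothesis of no signal
    in a composite mean at a single grid point.

    anoms:         all available monthly anomalies at the point (1D)
    composite_idx: indices of the months entering the composite
                   (e.g., months with large positive SST anomalies)
    """
    rng = np.random.default_rng(seed)
    center = anoms.mean()
    observed = abs(anoms[composite_idx].mean() - center)
    k = len(composite_idx)
    # Means of random composites of the same size drawn from all months.
    perm = np.array([
        abs(rng.choice(anoms, size=k, replace=False).mean() - center)
        for _ in range(n_perm)
    ])
    # Fraction of random composites at least as extreme as the observed one.
    return float(np.mean(perm >= observed))
```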

A suite of thirteen 45-yr integrations of the same model described above was run with global SST analyses reconstructed from the observational record. Corresponding composites from the model were formed and compared visually and quantitatively with the high-confidence observational signals. The quantitative comparisons included skill analyses utilizing a decomposition that relates the squared differences between two maps to phase-correspondence, amplitude, and bias error terms, as well as analyses of the variance about composite means. For the latter, in the case of the model runs it was possible to estimate the portions of this variance attributable to case-to-case variation in SSTs and to internal variability. Comparisons to monthly mean maps and analyses of variance for the 20-yr run with SSTs fixed to January 1992 values were also made.
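
The map-comparison decomposition is plausibly of the standard mean-squared-error form, MSE = bias² + amplitude error + phase error; a short sketch under that assumption (not necessarily the exact formulation used in the paper):

```python
import numpy as np

def mse_decomposition(model_map, obs_map):
    """Decompose the mean squared difference between two maps into
    bias, amplitude, and phase (pattern correlation) terms:

        MSE = (mbar - obar)**2 + (sm - so)**2 + 2*sm*so*(1 - r)

    where mbar, obar are map means, sm, so map standard deviations,
    and r is the centered pattern correlation.
    """
    m, o = model_map.ravel(), obs_map.ravel()
    bias2 = (m.mean() - o.mean()) ** 2
    sm, so = m.std(), o.std()
    r = np.corrcoef(m, o)[0, 1]
    amp = (sm - so) ** 2
    phase = 2.0 * sm * so * (1.0 - r)
    mse = np.mean((m - o) ** 2)
    return mse, bias2, amp, phase   # mse == bias2 + amp + phase
```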

The visual and quantitative comparisons all revealed different aspects of prominent model systematic errors that have important implications for the optimum exploitation of the model for use in prediction. One of these implications was that the current model’s ensemble responses to SST forcing will not be optimally useful until after nonlinear correction of SST-field-dependent systematic errors.

Full access
Thomas R. Karl, Robert E. Livezey, and Edward S. Epstein

A long time series (1895–1984) of mean areally averaged winter temperatures in the contiguous United States depicts an unprecedented spell of abnormal winters beginning with the winter of 1975–76. Three winters during the eight-year period 1975–76 through 1982–83 are defined as much warmer than normal (abnormal), and the three consecutive winters 1976–77 through 1978–79 as much colder than normal (abnormal). The threshold for abnormal is set here by the least extreme of these six winters, based on their normalized departures from the mean. When combined, these two abnormal categories have an expected frequency close to 21%. Assuming that the past 89 winters (1895–1984) are a large enough sample to estimate the true interannual temperature variability between winters, we find, using Monte Carlo simulations, that the return period of a series of six winters out of eight being either much above or much below normal is more than 1000 years. This event exceeds the calculated return period of the three consecutive much colder than normal winters (1976–77 through 1978–79) all falling into a much below normal category, i.e., one that is expected to contain approximately 10% of the data. The more moderate winters of 1981–82 and 1983–84 can also be considered abnormal by relaxing the limits necessary for an abnormal classification, but this gives a return period of 467 years for the spell of eight abnormal winters in the nine consecutive winters 1975–76 through 1983–84.
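
A simplified Monte Carlo estimate of such a return period, assuming independent winters and the quoted 21% combined abnormal frequency, might look like the sketch below (it only illustrates the simulation logic and does not reproduce the paper's exact calculation):

```python
import numpy as np

def return_period_six_of_eight(p_abnormal=0.21, n_years=2_000_000, seed=7):
    """Monte Carlo estimate of the return period (in years) of at least
    six abnormal winters within eight consecutive winters.

    Winters are simulated as independent, each with probability
    p_abnormal of falling in the combined much-above/much-below class.
    """
    rng = np.random.default_rng(seed)
    abnormal = rng.random(n_years) < p_abnormal
    # Count of abnormal winters in every 8-winter window.
    window = np.convolve(abnormal, np.ones(8, dtype=int), mode="valid")
    hits = window >= 6
    # A "spell" is counted once: the first window of each run of hits.
    starts = np.count_nonzero(hits & ~np.concatenate(([False], hits[:-1])))
    return n_years / max(starts, 1)

print(f"estimated return period: {return_period_six_of_eight():.0f} years")
```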

Full access
Robert E. Livezey, Anthony G. Barnston, George V. Gruza, and Esther Ya Ran'kova

Abstract

Analog prediction systems developed in the United States and the former Soviet Union are compared for U.S. seasonal temperature prediction. Of primary interest is the viability of the Russian “optimization” concept for a priori selection of U.S. seasonal analog forecast predictors. Optimization is a specific technique for choosing predictor variables for analog matching on a forecast-by-forecast basis. Validation of this procedure would lead to more efficient design of analog prediction models and the elimination of some of the subjectivity that inevitably results in overstatements of realizable skill. The procedure's effectiveness was tested using predictor and predictand datasets from the U.S. system in a cross-validation framework. Skills of different models were assessed on the basis of 40 seasonal forecasts at 92 U.S. stations.

The Russian system (called GRAN for “Group Analog”) was first run without optimization, using the a posteriori selected predictors from the U.S. system. A version of the U.S. system (without use of antianalogs) that is conceptually very similar to GRAN without optimization was run for comparison in this calibration step. The results reveal that these systems perform in a nearly identical manner when the predictor and predictand datasets are the same. Next, GRAN forecasts were made using all available predictors and then using only predictors selected via optimization. The results not only show that objective a priori predictor selection by optimization is just as effective (in terms of skill) as subjective a posteriori selection but also suggest it may produce superior results in summer forecasts.
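
The analog-matching step common to both systems can be illustrated with a bare-bones sketch; the RMS distance metric and the variable names here are assumptions, not the GRAN or CPC implementation:

```python
import numpy as np

def best_analogs(predictor_now, predictor_history, n_analogs=5):
    """Rank historical cases by RMS distance to the current predictor
    state and return the indices of the closest analogs.

    predictor_now:     1D array of current values of the chosen predictors
    predictor_history: 2D array (n_cases, n_predictors) of past cases
    """
    d = np.sqrt(np.mean((predictor_history - predictor_now) ** 2, axis=1))
    return np.argsort(d)[:n_analogs]

# The analog forecast is then, e.g., the average predictand over the
# selected cases:
#   forecast = predictand_history[best_analogs(x, X)].mean(axis=0)
```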

Full access
Robert E. Livezey, Konstantin Y. Vinnikov, Marina M. Timofeyeva, Richard Tinker, and Huug M. van den Dool

Abstract

WMO-recommended 30-yr normals are no longer generally useful for the design, planning, and decision-making purposes for which they were intended. They not only have little relevance to the future climate, but are often unrepresentative of the current climate. The reason for this is rapid global climate change over the last 30 yr that is likely to continue into the future. It is demonstrated that simple empirical alternatives already are available that not only produce reasonably accurate normals for the current climate but also often justify their extrapolation to several years into the future. This result is tied to the condition that recent trends in the climate are approximately linear or have a substantial linear component. This condition is generally satisfied for the U.S. climate-division data. One alternative [the optimal climate normal (OCN)] is multiyear averages that are not fixed at 30 yr like WMO normals are but rather are adapted climate record by climate record based on easily estimated characteristics of the records. The OCN works well except with very strong trends or longer extrapolations with more moderate trends. In these cases least squares linear trend fits to the period since the mid-1970s are viable alternatives. An even better alternative is the use of “hinge fit” normals, based on modeling the time dependence of large-scale climate change. Here, longer records can be exploited to stabilize estimates of modern trends. Related issues are the need to avoid arbitrary trend fitting and to account for trends in studies of ENSO impacts. Given these results, the authors recommend that (a) the WMO and national climate services address new policies for changing climate normals using the results here as a starting point and (b) NOAA initiate a program for improved estimates and forecasts of official U.S. normals, including operational implementation of a simple hybrid system that combines the advantages of both the OCN and the hinge fit.
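
The two estimators lend themselves to compact illustration. The sketch below assumes annual values for one climate record, a 1975 hinge year (following the text's “mid-1970s”), and a placeholder averaging window; none of these are the paper's operational choices:

```python
import numpy as np

def ocn_normal(values, k=10):
    """Optimal climate normal: the average of the most recent k years,
    with k chosen record by record rather than fixed at 30 (k=10 here
    is only a placeholder; the paper selects it from the record itself).
    """
    return np.mean(values[-k:])

def hinge_fit_normal(years, values, hinge=1975.0, target_year=None):
    """Hinge-fit normal: flat before the hinge year, linear after,
    fit by least squares to the whole record.  Model:
        y = a + b * max(year - hinge, 0)
    Fitting the full record stabilizes the modern-trend estimate, and
    target_year > years[-1] gives a short extrapolation.
    """
    x = np.maximum(np.asarray(years, dtype=float) - hinge, 0.0)
    A = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(values, dtype=float), rcond=None)
    t = years[-1] if target_year is None else target_year
    return a + b * max(t - hinge, 0.0)
```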

Full access
Samuel S. Shen, Thomas M. Smith, Chester F. Ropelewski, and Robert E. Livezey

Abstract

This paper provides a systematic procedure for computing the regional average of climate data in a subregion of the earth's surface using the covariance function written in terms of empirical orthogonal functions (EOFs). The method is optimal in the sense of minimum mean square error (mse) and gives an mse estimate of the averaging results. The random measurement error is also included in the total mse. Since the EOFs can account for spatial inhomogeneities, the method can be more accurate than those that assume a homogeneous covariance matrix. This study shows how to further improve the accuracy of optimal averaging (OA) by improving the accuracy of the eigenvalues of the covariance function through an extrapolation method. The accuracy of the authors’ procedure is tested using cross-validation techniques, which simulate past sampling conditions on the recent, well-sampled tropical Pacific SST and use EOFs independent of the month being tested. The true sampling error of the cross-validated tests is computed with respect to the 1° × 1° data for various sampling conditions. The theoretical sampling error is computed from the authors’ derived formula and compared to the true error from the cross-validation tests. The authors’ numerical results show that (i) the extrapolation method can sometimes improve the accuracy of the eigenvalues by 10%, (ii) optimal averaging consistently yields smaller mse than arithmetic averaging, and (iii) the theoretical formula for evaluating the OA error gives estimates that compare well with the true error.
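
In outline, the minimum-mse weights solve a linear system built from the EOF-based covariance. A simplified, unconstrained sketch follows (the paper's eigenvalue extrapolation and other refinements are omitted, and all names are illustrative):

```python
import numpy as np

def optimal_average(data, point_cov, point_region_cov, region_var, err_var):
    """Minimum-mse estimate of a regional mean from point observations.

    data:             observed anomalies at the sampled points (1D)
    point_cov:        covariance matrix among the points, built from EOFs
    point_region_cov: covariance of each point with the true regional mean
    region_var:       variance of the true regional mean
    err_var:          random measurement-error variance at each point

    Minimizing E[(w @ data - regional_mean)**2] over the weights w gives
        (point_cov + diag(err_var)) @ w = point_region_cov,
    and the resulting mse estimate is
        region_var - w @ point_region_cov.
    """
    A = point_cov + np.diag(err_var)
    w = np.linalg.solve(A, point_region_cov)
    estimate = w @ data
    mse = region_var - w @ point_region_cov
    return estimate, mse, w
```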

Full access
Thomas M. Smith, Richard W. Reynolds, Robert E. Livezey, and Diane C. Stokes

Abstract

Studies of climate variability often rely on high-quality sea surface temperature (SST) anomalies. Although the high-resolution National Centers for Environmental Prediction (formerly the National Meteorological Center) optimum interpolation (OI) SST analysis is satisfactory for these studies, the OI resolution cannot be maintained before November 1981 because of the lack of satellite data. Longer periods of SSTs have come from traditional analyses of in situ (ship and buoy) SST observations alone.

A new interpolation method is developed using spatial patterns from empirical orthogonal functions (EOFs)—that is, a principal component analysis—to improve analyses of SST anomalies from 1950 to 1981. The method uses the more accurate OI analyses from 1982 to 1993 to produce the spatial EOFs. The dominant EOF modes (which correspond to the largest variance) are used as basis functions and are fit, in a least squares sense, to the in situ data to determine the time dependence of each mode. A complete field of SST anomalies is then reconstructed from these spatial and temporal modes. The use of EOF basis functions produces an improved in situ SST analysis that more realistically represents sparsely sampled, large-scale structures than traditional analyses.
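
A compact sketch of the least squares fit described above, assuming the spatial modes are already computed from the OI period and that a given month's in situ data cover only part of the grid (names are illustrative):

```python
import numpy as np

def reconstruct_field(eofs, insitu_anom, obs_mask):
    """Fit the leading EOF modes to sparse in situ anomalies and
    reconstruct the full SST anomaly field for one month.

    eofs:        (n_modes, n_gridpoints) spatial modes from the OI period
    insitu_anom: (n_gridpoints,) anomalies, defined only where observed
    obs_mask:    boolean (n_gridpoints,), True where data exist
    """
    # Least squares fit of mode amplitudes to the observed points only.
    G = eofs[:, obs_mask].T                 # (n_obs, n_modes)
    y = insitu_anom[obs_mask]
    amps, *_ = np.linalg.lstsq(G, y, rcond=None)
    # Complete field from spatial modes and fitted time coefficients.
    return amps @ eofs
```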

The EOF reconstruction method is developed for the tropical Pacific for the period 1982–92 and compared to the OI. The method is then expanded to the globe and applied to a much longer period, 1950–92. The results show that, relative to the OI, the reconstructed fields generally have lower rms differences than the traditional in-situ-only analyses. In addition, the reconstructed fields were found to be smoother than the traditional analyses but with enhanced large-scale signals (e.g., ENSO). Regions where traditional analyses are adequate include parts of the North Atlantic and the North Pacific, where in situ sampling is most dense. Although the shape of SST anomaly patterns can differ greatly between the reconstruction and the traditional in situ analysis, area-averaged results from both analyses show similar anomalies.

Full access
Richard J. Reed, Robert M. White, Edward S. Epstein, Richard A. Craig, Harry Hamilton, Robert E. Livezey, David Houghton, and Frederick Carr
Full access
Anthony G. Barnston, Ants Leetmaa, Vernon E. Kousky, Robert E. Livezey, Edward A. O'Lenic, Huug Van den Dool, A. James Wagner, and David A. Unger

The strong El Niño of 1997–98 provided a unique opportunity for National Weather Service, National Centers for Environmental Prediction, Climate Prediction Center (CPC) forecasters to apply several years of accumulated new knowledge of the U.S. impacts of El Niño to their long-lead seasonal forecasts with more clarity and confidence than ever before. This paper examines the performance of CPC's official forecasts, and of its individual component forecast tools, during this event. Heavy winter precipitation across California and the southern plains–Gulf coast region was accurately forecast with at least six months of lead time. Dryness was also correctly forecast in Montana and in the southwestern Ohio Valley. The warmth across the northern half of the country was correctly forecast but extended farther south and east than predicted. As the winter approached, forecaster confidence in the forecast pattern increased, and the probability anomalies assigned in the months immediately preceding the winter reached unprecedented levels. Verification scores for winter 1997/98 precipitation forecasts set a new CPC record.

Forecasts for the autumn preceding the El Niño winter were less skillful than those of winter, but skill for temperature was still higher than the average expected for autumn. The precipitation forecasts for autumn showed little skill. Forecasts for the spring following the El Niño were poor, as an unexpected circulation pattern emerged, giving the southern and southeastern United States a significant drought. This pattern, which differed from the historical El Niño pattern for spring, may have been related to a large pool of anomalously warm water that remained in the far eastern tropical Pacific through summer 1998 while the waters in the central Pacific cooled as the El Niño was replaced by a La Niña by the first week of June.

It is suggested that in addition to the obvious effects of the 1997–98 El Niño on 3-month mean climate in the United States, the El Niño (indeed, any strong El Niño or La Niña) may have positively influenced the skill of medium-range forecasts of 5-day mean climate anomalies. This would reflect not only the connection between the mean seasonal conditions and the individual contributing synoptic events, but also the possibly unexpected effect of the tropical boundary forcing unique to a given synoptic event. Circumstantial evidence suggests that the skill of medium-range forecasts is increased at lead times (and averaging periods) long enough that the boundary conditions have a noticeable effect, but not so long that the skill associated with the initial conditions disappears. Firmer evidence of a beneficial influence of ENSO on subclimate-scale forecast skill is needed, because the higher skill may be associated simply with the higher amplitude of the forecasts, regardless of the reason for that amplitude.

Full access