Search Results

Showing 1–10 of 11 items for Author or Editor: A. G. Barnston
E. S. Epstein and A. G. Barnston

Abstract

A precipitation climatology has been developed for the relative frequencies of zero, one, or two or more days with measurable precipitation within 5-day periods. In addition, the distribution of precipitation amounts is given for the one wet day in five and for the more than one wet day in five categories. The purpose of the climatology is to provide background for the development and introduction of extended-range (6–10 day forecast period) precipitation forecasts in terms of the probabilities of the three categories.

The climatology is based on 36 years of precipitation data at 146 stations in the contiguous United States. Details of the treatment of the data are provided. Diagrams are developed to display the seasonal patterns of frequency and amount for individual stations. The frequency diagram is a nomogram based on a simple Markov chain model for precipitation occurrences. It can be used to infer, from the frequencies of 0 and exactly 1 wet day in 5, the single-day climatological precipitation probabilities or the probabilities conditional on precipitation falling on the previous day (or, conversely, to infer from the daily climatology and knowledge of the persistence the probabilities of the three categories for the 5-day period). These diagrams are useful (as will be demonstrated by example) for describing and comparing precipitation climatologies. They should also aid the forecaster in making and interpreting probability forecasts of precipitation frequency for the 6–10 day period, where day-by-day forecasts are infeasible.
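The two-state (dry/wet) Markov chain underlying the nomogram can be illustrated directly: given the wet-day probability following a dry day and the one following a wet day, the probabilities of the three 5-day categories follow by enumeration. A minimal sketch (illustrative only, not the paper's procedure; the function name is hypothetical):

```python
from itertools import product

def five_day_wet_categories(p01, p11):
    """Probabilities of 0, exactly 1, and 2+ wet days in a 5-day
    period under a two-state (dry/wet) Markov chain.
    p01: P(wet | previous day dry); p11: P(wet | previous day wet)."""
    # Start the chain from the stationary single-day wet probability.
    pi_wet = p01 / (1.0 - p11 + p01)
    probs = [0.0, 0.0, 0.0]  # 0 wet, exactly 1 wet, 2 or more wet
    for seq in product((0, 1), repeat=5):
        p = pi_wet if seq[0] else 1.0 - pi_wet
        for prev, cur in zip(seq, seq[1:]):
            p_wet = p11 if prev else p01
            p *= p_wet if cur else 1.0 - p_wet
        probs[min(sum(seq), 2)] += p
    return probs
```

With no persistence (the two conditional probabilities equal), the result reduces to the binomial distribution, which is a useful check of the chain.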

Full access
Anthony G. Barnston, Yuxiang He, and David A. Unger

The prediction of seasonal climate anomalies at useful lead times often involves an unfavorable signal-to-noise ratio. The forecasts, while consequently tending to have modest skill, nonetheless have significant utility when packaged in ways to which users can relate and respond appropriately. This paper presents a reasonable but unprecedented manner in which to issue seasonal climate forecasts and illustrates how implied “tilts of the odds” of the forecasted climate may be used beneficially by technical as well as nontechnical clients.

Full access
L. Goddard, A. G. Barnston, and S. J. Mason

The International Research Institute for Climate Prediction (IRI) net assessment seasonal temperature and precipitation forecasts are evaluated for the 4-yr period from October–December 1997 to October–December 2001. These probabilistic forecasts represent the human distillation of seasonal climate predictions from various sources. The ranked probability skill score (RPSS) serves as the verification measure. The evaluation is offered as time-averaged spatial maps of the RPSS as well as area-averaged time series. A key element of this evaluation is the examination of the extent to which the consolidation of several predictions, accomplished here subjectively by the forecasters, contributes to or detracts from the forecast skill possible from any individual prediction tool.

Overall, the skills of the net assessment forecasts for both temperature and precipitation are positive throughout the 1997–2001 period. The skill may have been enhanced during the peak of the 1997/98 El Niño, particularly for tropical precipitation, although widespread positive skill exists even at times of weak forcing from the tropical Pacific. The temporally averaged RPSS for the net assessment temperature forecasts appears lower than that for the AGCMs. Over time, however, the IRI forecast skill is more consistently positive than that of the AGCMs. The IRI precipitation forecasts generally have lower skill than the temperature forecasts, but the forecast probabilities for precipitation are found to be appropriate to the frequency of the observed outcomes, and thus reliable. Over many regions where the precipitation variability is known to be potentially predictable, the net assessment precipitation forecasts exhibit more spatially coherent areas of positive skill than most, if not all, prediction tools. On average, the IRI net assessment forecasts appear to perform better than any of the individual objective prediction tools.
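For reference, the ranked probability skill score compares cumulative forecast probabilities with the cumulative observed outcome and scales the result against a climatological (equal-odds) forecast. A minimal sketch for tercile forecasts (illustrative; the function names are hypothetical and this is not the verification code used in the study):

```python
def rps(forecast_probs, obs_category):
    """Ranked probability score for one forecast over ordered
    categories; obs_category is the index of the observed one."""
    rps_val, cum_f, cum_o = 0.0, 0.0, 0.0
    for k, p in enumerate(forecast_probs):
        cum_f += p
        cum_o += 1.0 if k == obs_category else 0.0
        rps_val += (cum_f - cum_o) ** 2
    return rps_val

def rpss(forecasts, observations, climo=(1/3, 1/3, 1/3)):
    """Skill relative to climatological (equal-odds) probabilities;
    positive values beat climatology, 1.0 is a perfect forecast."""
    rps_f = sum(rps(f, o) for f, o in zip(forecasts, observations))
    rps_c = sum(rps(climo, o) for o in observations)
    return 1.0 - rps_f / rps_c
```

Issuing the climatological probabilities themselves scores exactly 0, which is why positive RPSS over a multiyear period is the relevant benchmark.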

Full access
T. M. Smith, A. G. Barnston, M. Ji, and M. Chelliah

Abstract

The value of assimilated subsurface oceanic data to statistical predictions of interannual variability of sea surface temperature (SST) at the National Centers for Environmental Prediction (NCEP) is shown. Subsurface temperature data for the tropical Pacific Ocean come from an assimilated ocean analysis from July 1982 to June 1993 and from a numerical model forced by observed surface wind stress from 1961 to June 1982. The impact of the subsurface oceanic data on the operational NCEP canonical correlation analysis (CCA) forecasts of interannual SST variability is assessed. The CCA is first run using only sea level pressure and SST as predictors, and then the subsurface data are added. It is found that use of the subsurface data improves the forecasts at lead times of six months or longer, with some seasonal dependence in the improvements. Forecasts at leads of less than six months are not helped by the subsurface data. The greatest improvements occur for forecasts of boreal winter to spring conditions, with smaller improvements for the rest of the year.
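Canonical correlation analysis of this kind finds maximally correlated linear combinations of two anomaly fields. One compact numerical route (an illustrative sketch, not the operational NCEP code) whitens each field by its thin SVD and takes the singular values of the whitened cross-product:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two anomaly fields
    (rows = time samples, columns = variables or grid points)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Whiten each field via its thin SVD; the left singular vectors
    # are orthonormal time series (unit-variance scores).
    Ux, _, _ = np.linalg.svd(X, full_matrices=False)
    Uy, _, _ = np.linalg.svd(Y, full_matrices=False)
    # Singular values of the whitened cross-product are the
    # canonical correlations, each between 0 and 1.
    return np.linalg.svd(Ux.T @ Uy, compute_uv=False)
```

If one field is an exact linear transform of the other, all canonical correlations are 1; for independent fields they shrink toward sampling noise. In practice the predictor and predictand fields are first truncated to a few leading EOFs, which this sketch omits.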

Full access
J. A. Flueck, W. L. Woodley, A. G. Barnston, and T. J. Brown

Abstract

The Florida Area Cumulus Experiment (FACE) was a two-stage program dedicated to assessing the potential of “dynamic seeding” for enhancing convective rainfall in a fixed target area. FACE-1 (1970–76) was an exploratory cloud seeding experiment that produced substantial indications of a positive treatment effect on rain at the ground, and FACE-2 (1978–80) was a confirmatory experiment that did not confirm the treatment effect results of FACE-1.

This article presents some new analyses of both the FACE-1 and FACE-2 data in an effort to better understand the role of meteorological and treatment factors on rainfall in the days selected for experimentation in Florida. The analyses rely upon a guided exploratory linear modeling of the natural target area rainfall and the potential treatment effects. In particular, a conceptual model of natural Florida rainfall is utilized to guide the construction of the exploratory linear model. After the form of the model is selected it is fitted to both the FACE-1 and the FACE-2 data sets in an attempt to reassess the effects of treatment.

Two approaches are taken to assessing the treatment effects in FACE-1 and in FACE-2: cross-comparison and cross-validation. Both techniques suggest a positive treatment effect in each stage of FACE (i.e., 30–45% in FACE-1 and 10–15% in FACE-2). However, the conventional 0.05 unadjusted statistical level of support is present only in the FACE-1 data. The question of whether the FACE-1 results differed from those of FACE-2 is unresolved. These results continue to emphasize the need to better account for the natural convective precipitation processes in south Florida prior to conducting a cloud seeding project.
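The general scheme described, natural rainfall explained by meteorological covariates plus a treatment indicator, can be sketched generically (illustrative only; this is not the authors' fitted model, and all names are hypothetical):

```python
import numpy as np

def treatment_effect(log_rain, covariates, treated):
    """OLS fit of log target-area rainfall on meteorological
    covariates plus a 0/1 seeding indicator. The indicator's
    coefficient b implies a multiplicative effect exp(b) - 1."""
    X = np.column_stack([np.ones(len(log_rain)), covariates, treated])
    beta, *_ = np.linalg.lstsq(X, log_rain, rcond=None)
    return np.expm1(beta[-1])  # fractional rain change on seeded days
```

On a log scale the treatment coefficient reads directly as a percentage effect, which is how the 30–45% and 10–15% figures above are expressed.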

Full access
Anthony G. Barnston, William L. Woodley, John A. Flueck, and Michael H. Brown

Abstract

The Florida Area Cumulus Experiment (FACE) is a single area, randomized experiment designed to assess the ground-level rainfall effects of dynamic cloud seeding in summer on the south Florida peninsula. The second phase of FACE (FACE-2), an attempt to confirm the indication of seeding-induced rain increases in FACE-1, has been completed. A description of the FACE-2 program design and how well it was implemented in the summers of 1978, 1979 and 1980 is provided. The data reduction process and its rationale are described both for the basic rainfall data and for the predictor variables to be used in the covariate analyses. The resulting FACE-2 rainfall and covariate data are presented for each of the 61 days of experimentation without knowledge of whether actual seeding (using silver iodide) took place. (Part II will contain the confirmatory and replicated analyses of the effects of seeding, and Part III will present a number of exploratory analyses of the FACE-1 and FACE-2 data.)

Full access
Nicolas Vigaud, M. Ting, D.-E. Lee, A. G. Barnston, and Y. Kushnir

Abstract

Six recurrent thermal regimes are identified over continental North America from June to September through a k-means clustering applied to daily maximum temperature simulated by ECHAM5 forced by historical SSTs for 1930–2013 and validated using NCEP–DOE AMIP-II reanalysis over the 1980–2009 period. Four regimes are related to a synoptic wave pattern propagating eastward in the midlatitudes with embedded ridging anomalies that translate into maximum warming transiting along. Two other regimes, associated with broad continental warming and above average temperatures in the northeastern United States, respectively, are characterized by ridging anomalies over North America, Europe, and Asia that suggest correlated heat wave occurrences in these regions. Their frequencies are mainly related to both La Niña and warm conditions in the North Atlantic. Removing all variability beyond the seasonal cycle in the North Atlantic in ECHAM5 leads to a significant drop in the occurrences of the regime associated with warming in the northeastern United States. Superimposing positive (negative) anomalies mimicking the Atlantic multidecadal variability (AMV) in the North Atlantic translates into more (less) warming over the United States across all regimes, and does alter regime frequencies but less significantly. Regime frequency changes are thus primarily controlled by Atlantic SST variability on all time scales beyond the seasonal cycle, rather than mean SST changes, whereas the intensity of temperature anomalies is impacted by AMV SST forcing, because of upper-tropospheric warming and enhanced stability suppressing rising motion during the positive phase of the AMV.
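k-means clustering of daily maximum temperature maps, as used here, assigns each day to the nearest of k regime centroids and iterates until the centroids stop moving. A plain sketch (illustrative; not the study's configuration, and the optional `init` argument is added only to make the example deterministic):

```python
import numpy as np

def kmeans(data, k, n_iter=100, init=None, seed=0):
    """Plain k-means: data rows are samples (e.g. daily Tmax maps
    flattened to vectors); returns centroids and regime labels."""
    rng = np.random.default_rng(seed)
    centroids = (np.asarray(init, dtype=float) if init is not None
                 else data[rng.choice(len(data), size=k, replace=False)])
    for _ in range(n_iter):
        # Assign each day to its nearest centroid (regime).
        d = np.linalg.norm(data[:, None, :] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned days.
        new = np.array([data[labels == j].mean(axis=0)
                        if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels
```

In the study's setting each row would be a standardized daily Tmax anomaly map, k = 6, and the resulting centroids are the six recurrent regimes whose frequencies are then related to SST forcing.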

Full access
Anthony G. Barnston, Michael K. Tippett, Huug M. van den Dool, and David A. Unger

Abstract

Since 2002, the International Research Institute for Climate and Society, later in partnership with the Climate Prediction Center, has issued an ENSO prediction product informally called the ENSO prediction plume. Here, measures to improve the reliability and usability of this product are investigated, including bias and amplitude corrections, the multimodel ensembling method, formulation of a probability distribution, and the format of the issued product. Analyses using a subset of the current set of plume models demonstrate the necessity to correct individual models for mean bias and, less urgent, also for amplitude bias, before combining their predictions. The individual ensemble members of all models are weighted equally in combining them to form a multimodel ensemble mean forecast, because apparent model skill differences, when not extreme, are indistinguishable from sampling error when based on a sample of 30 cases or less. This option results in models with larger ensemble numbers being weighted relatively more heavily. Last, a decision is made to use the historical hindcast skill to determine the forecast uncertainty distribution rather than the models’ ensemble spreads, as the spreads may not always reproduce the skill-based uncertainty closely enough to create a probabilistically reliable uncertainty distribution. Thus, the individual model ensemble members are used only for forming the models’ ensemble means and the multimodel forecast mean. In other situations, the multimodel member spread may be used directly. The study also leads to some new formats in which to more effectively show both the mean ENSO prediction and its probability distribution.
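The corrections described, a mean-bias and amplitude correction per model followed by equal weighting of all ensemble members, can be sketched as follows (an illustrative sketch under simplified assumptions; the function name and data layout are hypothetical):

```python
import numpy as np

def correct_and_combine(model_members, obs):
    """Per-model mean-bias and amplitude correction, then an
    equal-member multimodel ensemble mean.
    model_members: list of (n_members, n_hindcasts) arrays;
    obs: (n_hindcasts,) verifying observations."""
    corrected = []
    for members in model_members:
        ens_mean = members.mean(axis=0)
        clim = ens_mean.mean()            # model's own climatology
        amp = obs.std() / ens_mean.std()  # amplitude correction factor
        # Recenter anomalies on the observed mean and rescale them.
        corrected.append(obs.mean() + amp * (members - clim))
    # Pooling all members equally weights models with larger
    # ensembles relatively more heavily, as in the text.
    pooled = np.vstack(corrected)
    return pooled.mean(axis=0)
```

A model whose hindcast ensemble mean is a biased, damped (or inflated) version of the observations is mapped back onto the observed mean and variability before its members enter the pool.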

Full access
Anthony G. Barnston, Ants Leetmaa, Vernon E. Kousky, Robert E. Livezey, Edward A. O'Lenic, Huug Van den Dool, A. James Wagner, and David A. Unger

The strong El Niño of 1997–98 provided a unique opportunity for National Weather Service, National Centers for Environmental Prediction, Climate Prediction Center (CPC) forecasters to apply several years of accumulated new knowledge of the U.S. impacts of El Niño to their long-lead seasonal forecasts with more clarity and confidence than ever previously. This paper examines the performance of CPC's official forecasts, and its individual component forecast tools, during this event. Heavy winter precipitation across California and the southern plains–Gulf coast region was accurately forecast with at least six months of lead time. Dryness was also correctly forecast in Montana and in the southwestern Ohio Valley. The warmth across the northern half of the country was correctly forecast, but extended farther south and east than predicted. As the winter approached, forecaster confidence in the forecast pattern increased, and the probability anomalies that were assigned reached unprecedented levels in the months immediately preceding the winter. Verification scores for winter 1997/98 forecasts set a new record at CPC for precipitation.

Forecasts for the autumn preceding the El Niño winter were less skillful than those of winter, but skill for temperature was still higher than the average expected for autumn. The precipitation forecasts for autumn showed little skill. Forecasts for the spring following the El Niño were poor, as an unexpected circulation pattern emerged, giving the southern and southeastern United States a significant drought. This pattern, which differed from the historical El Niño pattern for spring, may have been related to a large pool of anomalously warm water that remained in the far eastern tropical Pacific through summer 1998 while the waters in the central Pacific cooled as the El Niño was replaced by a La Niña by the first week of June.

It is suggested that in addition to the obvious effects of the 1997–98 El Niño on 3-month mean climate in the United States, the El Niño (indeed, any strong El Niño or La Niña) may have provided a positive influence on the skill of medium-range forecasts of 5-day mean climate anomalies. This would reflect first the connection between the mean seasonal conditions and the individual contributing synoptic events, but also the possibly unexpected effect of the tropical boundary forcing unique to a given synoptic event. Circumstantial evidence suggests that the skill of medium-range forecasts is increased during lead times (and averaging periods) long enough that the boundary conditions have a noticeable effect, but not so long that the skill associated with the initial conditions disappears. Firmer evidence of a beneficial influence of ENSO on subclimate-scale forecast skill is needed, as the higher skill may be associated just with the higher amplitude of the forecasts, regardless of the reason for that amplitude.

Full access
Anthony G. Barnston, Huug M. van den Dool, Stephen E. Zebiak, Tim P. Barnett, Ming Ji, David R. Rodenhuis, Mark A. Cane, Ants Leetmaa, Nicholas E. Graham, Chester R. Ropelewski, Vernon E. Kousky, Edward A. O'Lenic, and Robert E. Livezey

The National Weather Service intends to begin routinely issuing long-lead forecasts of 3-month mean U.S. temperature and precipitation by the beginning of 1995. The ability to produce useful forecasts for certain seasons and regions at projection times of up to 1 yr is attributed to advances in data observing and processing, computer capability, and physical understanding, particularly for tropical ocean–atmosphere phenomena. Because much of the skill of the forecasts comes from anomalies of tropical SST related to ENSO, we highlight here long-lead forecasts of the tropical Pacific SST itself, which have higher skill than the U.S. forecasts that are made largely on their basis.

The performance of five ENSO prediction systems is examined: two are dynamical [the Cane–Zebiak simple coupled model of Lamont-Doherty Earth Observatory and the more comprehensive coupled model of the National Centers for Environmental Prediction (NCEP)]; one is a hybrid coupled model (the Scripps Institution of Oceanography–Max Planck Institute for Meteorology system, with a full ocean general circulation model and a statistical atmosphere); and two are statistical (canonical correlation analysis and constructed analogs, used at the Climate Prediction Center of NCEP). With increasing physical understanding, dynamically based forecasts have the potential to become more skillful than purely statistical ones. Currently, however, the two approaches deliver roughly equally skillful forecasts, and the simplest model performs about as well as the more comprehensive models. At a lead time of 6 months (defined here as the time between the end of the latest observed period and the beginning of the predictand period), the SST forecasts have an overall correlation skill in the 0.60s for 1982–93, which easily outperforms persistence and is regarded as useful. Skill for extratropical surface climate is this high only in limited regions for certain seasons. Both types of forecasts are not much better than local higher-order autoregressive controls. However, continual progress is being made in understanding relations among global oceanic and atmospheric climate-scale anomaly fields.
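The skill measure quoted, the correlation between predicted and observed SST anomalies, and the persistence control it is compared against can be sketched simply (illustrative; not the verification procedure used here):

```python
import numpy as np

def correlation_skill(forecast, observed):
    """Anomaly correlation between forecasts and verifying values."""
    f = forecast - forecast.mean()
    o = observed - observed.mean()
    return float(f @ o / np.sqrt((f @ f) * (o @ o)))

def persistence_forecast(series, lead):
    """Persistence control: the latest observed value is carried
    forward `lead` steps unchanged."""
    return series[:-lead]
```

A model beats persistence at a 6-month lead when `correlation_skill(model_forecasts, sst[6:])` exceeds `correlation_skill(persistence_forecast(sst, 6), sst[6:])` over the verification period.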

It is important that more real-time forecasts be made before we rush to judgement. Performance in the real-time setting is the ultimate test of the utility of a long-lead forecast. The National Weather Service's plan to implement new operational long-lead seasonal forecast products demonstrates its effectiveness in identifying and transferring “cutting edge” technologies from theory to applications. This could not have been accomplished without close ties with, and the active cooperation of, the academic and research communities.

Full access