Search Results

You are looking at 1 - 10 of 10 items for

  • Author or Editor: David A. Unger
Harry R. Glahn and David A. Unger

Abstract

The Techniques Development Laboratory has a project called the local AFOS MOS Program (LAMP). Its purpose is to develop a system that can produce Model Output Statistics (MOS) forecasts, at any hour of the day in a Weather Service Forecast Office (WSFO) environment, for essentially all locations for which the WSFO makes routine forecasts. These forecasts will cover most weather elements at projections of 1 to about 20 hours. Inputs will include centrally produced MOS forecasts, hourly observations, radar data, and a few forecast fields from the National Meteorological Center's primary guidance model.

LAMP includes three rather simple advective models: one to forecast sea level pressure, one to forecast 1000–500 mb moisture and precipitation, and one to forecast sensible weather such as ceiling height and precipitation type. This paper describes LAMP, its three advective models, and the results of experiments in surface wind prediction. It is concluded that LAMP wind forecasts are considerably better than the currently available MOS guidance forecasts for projections of one to several hours and are also considerably better than persistence except for projections of only one or two hours.

Full access
Charles E. Schemm, David A. Unger, and Alan J. Faller

Abstract

A simplified numerical model is used to test new proposals for the statistical correction of numerical predictions. Two procedures are reported here: first, time interpolations that allow application of the previously reported STAT method (Faller and Schemm, 1977) when correct data for prediction verification are available only after several time steps of a prediction; and second, the MUST (multi-step statistical) method, in which corrections are applied every few time steps, whenever correct verification data are available.

The interpolations were found to introduce systematic errors into the regression equations, rendering the statistical correction procedure useless. The MUST-II procedure, in which individual regression equations are determined for each grid point, was found to give excellent results: it was computationally stable and accurate for long periods, and superior in every way to all previously tried methods.

Application of MUST-I, in which a single regression equation is used for all grid points, to the NMC Barotropic-Mesh model gave poor results, probably because of the large spatial variability of the atmospheric dynamics associated with geographical factors.
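The MUST-II idea described above — a separate regression correction fitted for each grid point and applied every few time steps — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names, array shapes, and the simple linear form of the correction are assumptions:

```python
import numpy as np

def fit_must2(predictions, verifications):
    """Fit one linear correction (slope, intercept) per grid point.

    predictions, verifications: arrays of shape (n_cases, n_points)
    holding paired model forecasts and verifying analyses valid at the
    correction interval (every few time steps).
    """
    _, n_points = predictions.shape
    coefs = np.empty((n_points, 2))          # (slope, intercept) per point
    for j in range(n_points):
        slope, intercept = np.polyfit(predictions[:, j], verifications[:, j], 1)
        coefs[j] = slope, intercept
    return coefs

def apply_must2(state, coefs):
    """Statistically correct one model state (length n_points)."""
    return coefs[:, 0] * state + coefs[:, 1]
```

In practice the corrections would be refit from an archive of forecast–verification pairs and applied whenever verification data become available during the integration.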

Full access
Frank Lewis, David A. Unger, and Joseph R. Bocchieri

Abstract

The relationship among precipitable water, W; 1000–500 mb thickness, h; station elevation, E; and observed precipitation was examined to obtain an equation for estimating saturation thickness. Radiosonde observations were categorized by values of W, h, and E, and a value of saturation thickness, hs, was determined for each precipitable water category and station elevation group on the basis of the precipitation frequency. A regression equation was then developed that relates hs to lnW and E.

Regression equations were then developed to relate lnW to surface observations and to the 12-h forecast of W from the LFM model, enabling estimation of the saturation thickness at any hour. About 91% of the variance in lnW was explained by the natural logarithm of the LFM precipitable water forecast. An additional 2–4% was explained by the surface dew point observations. No other variable added significantly to the relationship. An equation relating lnW to surface observations was derived for use in the event that the LFM forecast of lnW is not available.
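The regression relating saturation thickness to lnW and E can be sketched as an ordinary least-squares fit. The coefficient values, units, and function names below are hypothetical illustrations, not those of the paper:

```python
import numpy as np

def fit_saturation_thickness(W, E, hs):
    """Least-squares fit of hs = b0 + b1*ln(W) + b2*E.

    W  : precipitable water for each category
    E  : station elevation
    hs : empirically determined saturation thickness
    """
    X = np.column_stack([np.ones_like(W), np.log(W), E])
    beta, *_ = np.linalg.lstsq(X, hs, rcond=None)
    return beta

def predict_saturation_thickness(beta, W, E):
    """Evaluate the fitted relation at new (W, E) values."""
    return beta[0] + beta[1] * np.log(W) + beta[2] * E
```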

Full access
Anthony G. Barnston, Yuxiang He, and David A. Unger

The prediction of seasonal climate anomalies at useful lead times often involves an unfavorable signal-to-noise ratio. The forecasts, while consequently tending to have modest skill, nonetheless have significant utility when packaged in ways to which users can relate and respond appropriately. This paper presents a reasonable but unprecedented manner in which to issue seasonal climate forecasts and illustrates how implied “tilts of the odds” of the forecasted climate may be used beneficially by technical as well as nontechnical clients.

Full access
Edward A. O’Lenic, David A. Unger, Michael S. Halpert, and Kenneth S. Pelman

Abstract

The science, production methods, and format of long-range forecasts (LRFs) at the Climate Prediction Center (CPC), a part of the National Weather Service’s (NWS’s) National Centers for Environmental Prediction (NCEP), have evolved greatly since the inception of 1-month mean forecasts in 1946 and 3-month mean forecasts in 1982. Early forecasts used a subjective blending of persistence and linear regression-based forecast tools, and a categorical map format. The current forecast system uses an increasingly objective technique to combine a variety of statistical and dynamical models, which incorporate the impacts of El Niño–Southern Oscillation (ENSO) and other sources of interannual variability, and trend. CPC’s operational LRFs are produced each midmonth with a “lead” (i.e., amount of time between the release of a forecast and the start of the valid period) of ½ month for the 1-month outlook, and with leads ranging from ½ month through 12½ months for the 3-month outlook. The 1-month outlook is also updated at the end of each month with a lead of zero. Graphical renderings of the forecasts made available to users range from a simple display of the probability of the most likely tercile to a detailed portrayal of the entire probability distribution.

Efforts are under way at CPC to objectively weight, bias correct, and combine the information from many different LRF prediction tools into a single tool, called the consolidation (CON). CON ½-month lead 3-month temperature (precipitation) hindcasts over 1995–2005 were 18% (195%) better, as measured by the Heidke skill score for nonequal chances forecasts, than real-time official (OFF) forecasts during that period. CON was implemented into LRF operations in 2006, and promises to transfer these improvements to the official LRF.
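The Heidke skill score used above, in its standard categorical form, compares the number of correct forecasts with the number expected by chance. A minimal sketch, assuming three equally likely climatological categories (the non-equal-chances variant cited in the abstract adjusts the expected number; that adjustment is omitted here):

```python
def heidke_skill_score(hits, total, n_categories=3):
    """Categorical Heidke skill score, in percent.

    hits  : number of forecasts in the verifying category
    total : total number of forecasts
    The chance-expected number correct is total / n_categories.
    """
    expected = total / n_categories
    return 100.0 * (hits - expected) / (total - expected)
```

A score of 0 indicates no skill beyond chance, and 100 a perfect categorical forecast.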

Improvements in the science and production methods of LRFs are increasingly being driven by users, who are finding an increasing number of applications, and demanding improved access to forecast information. From the forecast-producer side, hope for improvement in this area lies in greater dialogue with users, and development of products emphasizing user access, input, and feedback, including direct access to 5 km × 5 km gridded outlook data through NWS’s new National Digital Forecast Database (NDFD).

Full access
David A. Unger, Huug van den Dool, Edward O’Lenic, and Dan Collins

Abstract

A regression model was developed for use with ensemble forecasts. Ensemble members are assumed to represent a set of equally likely solutions, one of which will best fit the observation. If standard linear regression assumptions apply to the best member, then a regression relationship can be derived between the full ensemble and the observation without explicitly identifying the best member for each case. The ensemble regression equation is equivalent to linear regression between the ensemble mean and the observation, but is applied to each member of the ensemble. The “best member” error variance is defined in terms of the correlation between the ensemble mean and the observations, their respective variances, and the ensemble spread. A probability density function representing the ensemble prediction is obtained from the normalized sum of the best-member error distribution applied to the regression forecast from each ensemble member. Ensemble regression was applied to National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) forecasts of seasonal mean Niño-3.4 SSTs on historical forecasts for the years 1981–2005. The skill of the ensemble regression was about the same as that of the linear regression on the ensemble mean when measured by the continuous ranked probability score (CRPS), and both methods produced reliable probabilities. The CFS spread appears slightly too high for its skill, and the CRPS of the CFS predictions can be slightly improved by reducing its ensemble spread to about 0.8 of its original value prior to regression calibration.
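The core of the ensemble regression described above — fit a linear regression between the ensemble mean and the observation, apply the same coefficients to every member, and build the forecast PDF as an equal-weight sum of "best member" error kernels — can be sketched as follows. The variance decomposition here is a simplification of the paper's definition, and all names and shapes are illustrative:

```python
import numpy as np

def fit_ensemble_regression(ens_means, obs):
    """Regress observations on the ensemble mean over a training period."""
    a, b = np.polyfit(ens_means, obs, 1)
    resid = np.asarray(obs) - (a * np.asarray(ens_means) + b)
    return a, b, resid.var(ddof=1)       # total regression error variance

def ensemble_pdf_params(members, a, b, total_var):
    """Apply the ensemble-mean regression to each member, then estimate a
    'best member' kernel variance by removing the spread already carried
    by the regressed members (a simplified decomposition)."""
    centers = a * np.asarray(members) + b
    spread_var = (a ** 2) * np.var(members, ddof=1)
    return centers, max(total_var - spread_var, 1e-6)

def ensemble_pdf(x, centers, var):
    """Forecast density: equal-weight normal kernels about each member."""
    k = np.exp(-0.5 * (np.asarray(x) - centers[:, None]) ** 2 / var)
    return (k / np.sqrt(2.0 * np.pi * var)).mean(axis=0)
```

The abstract also notes that shrinking the CFS ensemble spread to about 0.8 of its original value before calibration slightly improves the CRPS; that rescaling would be applied to the members before the regression step.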

Full access
Anthony G. Barnston, Michael K. Tippett, Huug M. van den Dool, and David A. Unger

Abstract

Since 2002, the International Research Institute for Climate and Society, later in partnership with the Climate Prediction Center, has issued an ENSO prediction product informally called the ENSO prediction plume. Here, measures to improve the reliability and usability of this product are investigated, including bias and amplitude corrections, the multimodel ensembling method, formulation of a probability distribution, and the format of the issued product. Analyses using a subset of the current set of plume models demonstrate the necessity to correct individual models for mean bias and, less urgent, also for amplitude bias, before combining their predictions. The individual ensemble members of all models are weighted equally in combining them to form a multimodel ensemble mean forecast, because apparent model skill differences, when not extreme, are indistinguishable from sampling error when based on a sample of 30 cases or less. This option results in models with larger ensemble numbers being weighted relatively more heavily. Last, a decision is made to use the historical hindcast skill to determine the forecast uncertainty distribution rather than the models’ ensemble spreads, as the spreads may not always reproduce the skill-based uncertainty closely enough to create a probabilistically reliable uncertainty distribution. Thus, the individual model ensemble members are used only for forming the models’ ensemble means and the multimodel forecast mean. In other situations, the multimodel member spread may be used directly. The study also leads to some new formats in which to more effectively show both the mean ENSO prediction and its probability distribution.
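The combination steps described above — per-model mean-bias and amplitude correction, followed by an equal weight for every ensemble member — can be sketched as follows. The function names and the exact form of the amplitude correction are assumptions:

```python
import numpy as np

def correct_model(fcst_members, hindcast_mean_bias, amp_ratio):
    """Remove a model's mean bias and rescale its anomaly amplitude.

    amp_ratio: ratio of observed to hindcast anomaly standard deviation,
    estimated from that model's historical hindcasts.
    """
    debiased = np.asarray(fcst_members) - hindcast_mean_bias
    return debiased * amp_ratio

def multimodel_mean(all_members):
    """Equal weight per member, so models contributing more members
    count more heavily in the multimodel ensemble mean."""
    return np.concatenate(all_members).mean()
```

Per the abstract, the forecast uncertainty about this mean would then be drawn from historical hindcast skill rather than from the pooled member spread.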

Full access
Anthony J. Schreiner, David A. Unger, W. Paul Menzel, Gary P. Ellrod, Kathy I. Strabala, and Jackson L. Pellet

A processing scheme that determines cloud height and amount based on radiances from the Visible Infrared Spin Scan Radiometer Atmospheric Sounder (VAS) using a CO2 absorption technique has been installed on the National Environmental Satellite Data and Information Service VAS Data Utilization Center computer system in Washington, D.C. The processed data will complement the Automated Surface Observing System (ASOS). ASOS uses automated ground equipment that provides near-continuous observations of surface weather data that are currently manually obtained. Geostationary multispectral infrared measurements are available every hour with information on clouds above the ASOS laser ceilometer viewing limit of 12 000 ft. The combined ASOS/satellite system will be able to depict cloud conditions at all levels up to 50 000 ft. The error rate of combined ASOS and satellite observations is less than 4% of the total sample in a comparison test with manual observations performed by National Weather Service personnel during March and April 1992. An attempt to distinguish thin from opaque clouds, by using a satellite-determined effective cloud amount, resulted in a substantial reduction in the discrepancies.
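The CO2 absorption technique itself is not spelled out in the text; in common "CO2 slicing" formulations, the ratio of observed-minus-clear radiances in two nearby CO2 channels is matched against the same ratio computed from radiative transfer for candidate cloud pressures. A hypothetical sketch along those lines, with the radiative-transfer results assumed given:

```python
import numpy as np

def co2_slicing_cloud_pressure(obs, clear, model_ratio, pressures):
    """Pick the candidate cloud pressure whose modeled two-channel ratio
    best matches the observed ratio of cloud signals.

    obs, clear  : measured and clear-sky radiances for (ch1, ch2)
    model_ratio : modeled ch1/ch2 cloud-signal ratio at each candidate
                  pressure, from a radiative transfer calculation
    pressures   : candidate cloud-top pressures (hPa)
    """
    observed_ratio = (obs[0] - clear[0]) / (obs[1] - clear[1])
    i = np.argmin(np.abs(np.asarray(model_ratio) - observed_ratio))
    return pressures[i]
```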

Full access
Anthony G. Barnston, Ants Leetmaa, Vernon E. Kousky, Robert E. Livezey, Edward A. O'Lenic, Huug Van den Dool, A. James Wagner, and David A. Unger

The strong El Niño of 1997–98 provided a unique opportunity for National Weather Service, National Centers for Environmental Prediction, Climate Prediction Center (CPC) forecasters to apply several years of accumulated new knowledge of the U.S. impacts of El Niño to their long-lead seasonal forecasts with more clarity and confidence than ever previously. This paper examines the performance of CPC's official forecasts, and its individual component forecast tools, during this event. Heavy winter precipitation across California and the southern plains–Gulf coast region was accurately forecast with at least six months of lead time. Dryness was also correctly forecast in Montana and in the southwestern Ohio Valley. The warmth across the northern half of the country was correctly forecast, but extended farther south and east than predicted. As the winter approached, forecaster confidence in the forecast pattern increased, and the probability anomalies that were assigned reached unprecedented levels in the months immediately preceding the winter. Verification scores for winter 1997/98 forecasts set a new record at CPC for precipitation.

Forecasts for the autumn preceding the El Niño winter were less skillful than those of winter, but skill for temperature was still higher than the average expected for autumn. The precipitation forecasts for autumn showed little skill. Forecasts for the spring following the El Niño were poor, as an unexpected circulation pattern emerged, giving the southern and southeastern United States a significant drought. This pattern, which differed from the historical El Niño pattern for spring, may have been related to a large pool of anomalously warm water that remained in the far eastern tropical Pacific through summer 1998 while the waters in the central Pacific cooled as the El Niño was replaced by a La Niña by the first week of June.

It is suggested that in addition to the obvious effects of the 1997–98 El Niño on 3-month mean climate in the United States, the El Niño (indeed, any strong El Niño or La Niña) may have provided a positive influence on the skill of medium-range forecasts of 5-day mean climate anomalies. This would reflect first the connection between the mean seasonal conditions and the individual contributing synoptic events, but also the possibly unexpected effect of the tropical boundary forcing unique to a given synoptic event. Circumstantial evidence suggests that the skill of medium-range forecasts is increased during lead times (and averaging periods) long enough that the boundary conditions have a noticeable effect, but not so long that the skill associated with the initial conditions disappears. Firmer evidence of a beneficial influence of ENSO on subclimate-scale forecast skill is needed, as the higher skill may be associated just with the higher amplitude of the forecasts, regardless of the reason for that amplitude.

Full access