Search Results

You are looking at 1 - 10 of 11 items for

  • Author or Editor: Edward A. O'Lenic
  • Refine by Access: All Content
Edward A. O'Lenic

Abstract

The impact of VISSR Atmospheric Sounder (VAS) temperature profiles upon the Limited-Area Fine Mesh (LFM) Model used operationally at the National Meteorological Center (NMC) is investigated for a set of six cases during the winter of 1981/82. Pairs of analyses are prepared for each case, one using both VAS and conventional data (VAS analyses) and one using only conventional data (control analyses). VISSR Atmospheric Sounder data, inserted over the northeast Pacific Ocean, generally effected moderate changes, in the 30–60 m range, upon 500 mb height analyses. A comparison of one of the VAS analyses with a concurrent LFM analysis prepared using NOAA-7 data shows that the two observing systems compare favorably, implying that the VAS data for that case were at least as accurate as the NOAA-7 data in depicting the features of a complex mass field. Six sets of VAS and control forecasts were prepared using the analyses described herein as initial data for the LFM. Four of the six forecasts made using VAS data improved in comparison with their control counterparts.

Full access
Edward A. O'Lenic

Abstract

No abstract available.

Full access
Edward A. O'Lenic and Robert E. Livezey

Abstract

The relationship between the existence of low-frequency 700 mb height anomalies in the initial conditions of NMC's MRF global spectral model and subsequent 5-, 7-, and 10-day forecasts of 700 mb height from 1982 to 1988 is explored. Low-frequency 700 mb flow regimes are specified in each of four two-month seasons by performing a rotated principal component analysis (RPCA) on 38- or 39-year time series of daily, low-pass filtered 700 mb height analyses. In a given season, the amplitude time series (ATS) for each mode is used to decide which MRF forecast error maps should be used in forming a composite map corresponding to either the “+” phase or the “−” phase of the given mode. Several methods, including Monte Carlo simulations, are used to evaluate the statistical significance of the composite maps.

Many modes, including the Pacific North American (PNA) mode in winter and the leading summer mode, are found to be related to either unusually strong or unusually weak systematic error signatures. Two different modes, one in spring and one in autumn, corresponding to quasi-stationary patterns over the United States and the North Atlantic, respectively, are related to unusually strong forecast error signatures. A statistically significant number of such modes is found in each of the four seasons, with the number of such results being smallest in autumn and largest in spring. The results also indicate that the MRF model response to the presence of low-frequency regimes in the initial conditions is such that composite error signatures have a component with opposite phase, and comparable amplitude, for opposite phases of a given mode (a linear response). The overall results demonstrate the feasibility of using this technique to identify mode-linked forecast error signatures, and provide a potential opportunity to correct forecasts in the MRF, and possibly in other models, by removing the appropriate systematic error signatures.
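
As an illustration of the compositing approach described above, here is a minimal Python sketch, not taken from the paper, of forming a composite forecast-error map for the “+” phase of a mode and estimating its gridpoint significance with a Monte Carlo resampling; the array names, the ±1 standardized-amplitude threshold, and the number of trials are all assumptions.

```python
import numpy as np

def composite_error_map(error_maps, mode_amplitude, phase="+", threshold=1.0):
    """Average the forecast-error maps for cases in which the mode's
    amplitude time series (ATS) is strongly in the requested phase.

    error_maps     : (n_cases, ny, nx) array of 700 mb height forecast errors
    mode_amplitude : (n_cases,) standardized ATS of one rotated mode
    """
    if phase == "+":
        selected = mode_amplitude >= threshold
    else:
        selected = mode_amplitude <= -threshold
    return error_maps[selected].mean(axis=0), int(selected.sum())

def monte_carlo_significance(error_maps, n_selected, observed_composite,
                             n_trials=1000, seed=None):
    """Fraction of random composites of n_selected error maps whose magnitude
    matches or exceeds the observed composite at each gridpoint."""
    rng = np.random.default_rng(seed)
    n_cases = error_maps.shape[0]
    exceed = np.zeros_like(observed_composite, dtype=float)
    for _ in range(n_trials):
        idx = rng.choice(n_cases, size=n_selected, replace=False)
        random_composite = error_maps[idx].mean(axis=0)
        exceed += np.abs(random_composite) >= np.abs(observed_composite)
    return exceed / n_trials  # small values suggest a locally significant signal
```

A composite and its significance field would then be obtained with, for example, comp, n = composite_error_map(errors, ats) followed by p = monte_carlo_significance(errors, n, comp).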

Full access
Edward A. O'Lenic and Robert E. Livezey

Abstract

Rotated principal component analysis (RPCA) is a powerful tool for studying upper-air height data because of its ability to distill information about the variance existing in a large number of maps into a much smaller set of physically meaningful maps which together explain a large fraction of the variance of the input dataset. However, in order to achieve this, one faces the problem of deciding how many eigenmodes to rotate. A discussion of the dangers of incorrectly choosing the rotation point and a quasi-objective technique that leads to a good compromise between over- and underrotation are presented. Finally, the use of RPCA for detecting errors and inconsistencies in upper-air data is discussed, along with two examples.
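
For readers unfamiliar with the mechanics, the following Python sketch computes unrotated loadings by singular value decomposition and applies a standard Kaiser varimax rotation to the leading n_rotate modes; inspecting the explained-variance spectrum for a break before choosing n_rotate is a common heuristic, not the quasi-objective technique of the paper, and all names are illustrative.

```python
import numpy as np

def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser varimax rotation of a loading matrix Phi (gridpoints x modes)."""
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lam = Phi @ R
        u, s, vh = np.linalg.svd(
            Phi.T @ (Lam**3 - (gamma / p) * Lam @ np.diag(np.diag(Lam.T @ Lam))))
        R = u @ vh
        d_new = s.sum()
        if d > 0 and d_new / d < 1 + tol:  # orthomax criterion has converged
            break
        d = d_new
    return Phi @ R

def rotated_pca(height_anomalies, n_rotate):
    """height_anomalies : (n_times, n_gridpoints) anomaly matrix.
    Returns varimax-rotated loading patterns and the unrotated explained
    variance fractions, from which a rotation point can be judged."""
    u, s, vt = np.linalg.svd(height_anomalies, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    loadings = vt[:n_rotate].T * s[:n_rotate]  # truncate at the rotation point
    return varimax(loadings), explained
```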

Full access
Kenneth H. Bergman and Edward A. O'Lenic

Abstract

No abstract available.

Full access
Edward A. O’Lenic, David A. Unger, Michael S. Halpert, and Kenneth S. Pelman
Full access
Edward A. O’Lenic, David A. Unger, Michael S. Halpert, and Kenneth S. Pelman

Abstract

The science, production methods, and format of long-range forecasts (LRFs) at the Climate Prediction Center (CPC), a part of the National Weather Service’s (NWS’s) National Centers for Environmental Prediction (NCEP), have evolved greatly since the inception of 1-month mean forecasts in 1946 and 3-month mean forecasts in 1982. Early forecasts used a subjective blending of persistence and linear regression-based forecast tools, and a categorical map format. The current forecast system uses an increasingly objective technique to combine a variety of statistical and dynamical models, which incorporate the impacts of El Niño–Southern Oscillation (ENSO) and other sources of interannual variability, and trend. CPC’s operational LRFs are produced each midmonth with a “lead” (i.e., amount of time between the release of a forecast and the start of the valid period) of ½ month for the 1-month outlook, and with leads ranging from ½ month through 12½ months for the 3-month outlook. The 1-month outlook is also updated at the end of each month with a lead of zero. Graphical renderings of the forecasts made available to users range from a simple display of the probability of the most likely tercile to a detailed portrayal of the entire probability distribution.
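
As a concrete illustration of the simplest of those renderings, here is a minimal sketch, assuming Gaussian forecast and climatological distributions, of converting a forecast mean and spread into probabilities of the below-, near-, and above-normal terciles; the function and its arguments are illustrative and not CPC code.

```python
from math import erf, sqrt

def tercile_probabilities(fcst_mean, fcst_std, clim_mean, clim_std):
    """Probabilities of the three climatologically equally likely categories
    for a Gaussian forecast, with tercile boundaries of a Gaussian climatology
    located at roughly +/- 0.431 standard deviations from the mean."""
    lower = clim_mean - 0.431 * clim_std
    upper = clim_mean + 0.431 * clim_std
    cdf = lambda x: 0.5 * (1.0 + erf((x - fcst_mean) / (fcst_std * sqrt(2.0))))
    p_below = cdf(lower)
    p_near = cdf(upper) - cdf(lower)
    p_above = 1.0 - cdf(upper)
    return p_below, p_near, p_above
```

With no forecast shift or sharpening (fcst_mean equal to clim_mean and fcst_std equal to clim_std), the function returns roughly one-third for each category, the climatological “equal chances” forecast.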

Efforts are under way at CPC to objectively weight, bias correct, and combine the information from many different LRF prediction tools into a single tool, called the consolidation (CON). CON ½-month lead 3-month temperature (precipitation) hindcasts over 1995–2005 were 18% (195%) better, as measured by the Heidke skill score for nonequal chances forecasts, than real-time official (OFF) forecasts during that period. CON was implemented into LRF operations in 2006, and promises to transfer these improvements to the official LRF.
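
For reference, a simple three-category Heidke skill score of the general form cited above can be computed as in the sketch below; it assumes the common definition SS = 100(H − E)/(T − E) with E = T/3 and is meant to be applied only to points where a non-equal-chances forecast was actually issued, which approximates, but may not exactly reproduce, CPC's scoring.

```python
def heidke_skill_score(forecast_cat, observed_cat, n_categories=3):
    """Heidke skill score for categorical forecasts: 100 * (H - E) / (T - E),
    where H is the number of hits, T the number of forecasts scored, and
    E = T / n_categories the number of hits expected by chance."""
    total = len(forecast_cat)
    hits = sum(f == o for f, o in zip(forecast_cat, observed_cat))
    expected = total / n_categories
    return 100.0 * (hits - expected) / (total - expected)
```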

Improvements in the science and production methods of LRFs are increasingly being driven by users, who are finding an increasing number of applications, and demanding improved access to forecast information. From the forecast-producer side, hope for improvement in this area lies in greater dialogue with users, and development of products emphasizing user access, input, and feedback, including direct access to 5 km × 5 km gridded outlook data through NWS’s new National Digital Forecast Database (NDFD).

Full access
Huug M. van den Dool, Edward A. O'Lenic, and William H. Klein

Abstract

A time series of 43 years of observed monthly mean air temperature at 109 sites in the 48 contiguous United States is compared to monthly mean air temperature specified from hemispheric gridded 700-mb heights. Because both upper-air and surface data have problems that may limit their use in climate change studies, this comparison could be considered a mutual consistency check. Cooling (by about 0.5°C) from 1951 to about 1970 and subsequent warming (also by 0.5°C) that continues through the present are found in both datasets, indicating that these interdecadal changes are probably real.

In the last several years the specified temperatures were often colder than those observed. This prompted an investigation of whether the “residual” (specified minus observed) has recently been large (and negative) compared to the earlier part of the record. It was found that for the same 700-mb height field, surface temperatures were almost a degree Celsius warmer in the last few years than they were in the early 1950s, but considering the variability of the residuals over the 1950–92 period, the recent cold residuals may not yet be strikingly unusual.

By comparing the full set of 109 stations to a “clean” subset of 24, the impact of common problems in surface data (station relocation, urbanization, etc.) was found to be quite small. The rather favorable comparison of observed surface temperatures and specified surface temperatures (which suffer from upper-air analysis/observation changes over the years) indicates that their respective data problems do not appear to invalidate their use in studies of interdecadal temperature change.
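
The specification step underlying this comparison is, at heart, a regression of station temperature on 700-mb height predictors. The sketch below illustrates that idea with an ordinary least-squares fit and the specified-minus-observed residual; it is a stand-in for, not a reproduction of, the specification system actually used.

```python
import numpy as np

def specify_temperature(height_predictors, station_temp):
    """Least-squares specification of a station's monthly mean temperature
    from gridded 700-mb height predictors.

    height_predictors : (n_months, n_predictors) 700-mb height anomalies
    station_temp      : (n_months,) observed monthly mean temperatures
    Returns the specified temperatures and the residual (specified - observed).
    """
    X = np.column_stack([np.ones(len(station_temp)), height_predictors])
    coeffs, *_ = np.linalg.lstsq(X, station_temp, rcond=None)
    specified = X @ coeffs
    return specified, specified - station_temp
```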

Full access
David A. Unger, Huug van den Dool, Edward O’Lenic, and Dan Collins

Abstract

A regression model was developed for use with ensemble forecasts. Ensemble members are assumed to represent a set of equally likely solutions, one of which will best fit the observation. If standard linear regression assumptions apply to the best member, then a regression relationship can be derived between the full ensemble and the observation without explicitly identifying the best member for each case. The ensemble regression equation is equivalent to linear regression between the ensemble mean and the observation, but is applied to each member of the ensemble. The “best member” error variance is defined in terms of the correlation between the ensemble mean and the observations, their respective variances, and the ensemble spread. A probability density function representing the ensemble prediction is obtained from the normalized sum of the best-member error distribution applied to the regression forecast from each ensemble member. Ensemble regression was applied to National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) forecasts of seasonal mean Niño-3.4 SSTs on historical forecasts for the years 1981–2005. The skill of the ensemble regression was about the same as that of the linear regression on the ensemble mean when measured by the continuous ranked probability score (CRPS), and both methods produced reliable probabilities. The CFS spread appears slightly too high for its skill, and the CRPS of the CFS predictions can be slightly improved by reducing its ensemble spread to about 0.8 of its original value prior to regression calibration.
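
The following Python sketch illustrates the ensemble-regression idea described above; the expression used for the best-member kernel variance (the ensemble-mean regression error variance minus the regression-scaled ensemble spread) is one reading of the abstract's description rather than a quotation of the paper's formula, and all names are illustrative.

```python
import numpy as np

def ensemble_regression(train_members, train_obs, fcst_members):
    """Fit a regression between the ensemble mean and the observation on a
    training set, apply the same equation to each member of a new ensemble,
    and return an assumed 'best member' kernel standard deviation.

    train_members : (n_cases, n_members) historical ensemble forecasts
    train_obs     : (n_cases,) matching observations
    fcst_members  : (n_members,) a new ensemble forecast
    """
    ens_mean = train_members.mean(axis=1)
    b, a = np.polyfit(ens_mean, train_obs, 1)      # slope, intercept
    r = np.corrcoef(ens_mean, train_obs)[0, 1]
    var_obs = train_obs.var()
    # Mean squared spread of members about their ensemble mean
    spread_var = ((train_members - ens_mean[:, None]) ** 2).mean()
    # Assumed best-member error variance, floored at a small positive value
    best_var = max(var_obs * (1.0 - r**2) - b**2 * spread_var, 1e-6)
    adjusted = a + b * fcst_members                # same equation, each member
    return adjusted, np.sqrt(best_var)

def forecast_pdf(x, adjusted_members, kernel_std):
    """Predictive density at points x (1-D array): the normalized sum of
    Gaussian kernels centred on each regression-adjusted member."""
    z = (x[:, None] - adjusted_members[None, :]) / kernel_std
    kernels = np.exp(-0.5 * z**2) / (kernel_std * np.sqrt(2 * np.pi))
    return kernels.mean(axis=1)
```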

Full access
Anthony G. Barnston, Ants Leetmaa, Vernon E. Kousky, Robert E. Livezey, Edward A. O'Lenic, Huug Van den Dool, A. James Wagner, and David A. Unger

The strong El Niño of 1997–98 provided a unique opportunity for National Weather Service, National Centers for Environmental Prediction, Climate Prediction Center (CPC) forecasters to apply several years of accumulated new knowledge of the U.S. impacts of El Niño to their long-lead seasonal forecasts with more clarity and confidence than ever previously. This paper examines the performance of CPC's official forecasts, and its individual component forecast tools, during this event. Heavy winter precipitation across California and the southern plains–Gulf coast region was accurately forecast with at least six months of lead time. Dryness was also correctly forecast in Montana and in the southwestern Ohio Valley. The warmth across the northern half of the country was correctly forecast, but extended farther south and east than predicted. As the winter approached, forecaster confidence in the forecast pattern increased, and the probability anomalies that were assigned reached unprecedented levels in the months immediately preceding the winter. Verification scores for winter 1997/98 forecasts set a new record at CPC for precipitation.

Forecasts for the autumn preceding the El Niño winter were less skillful than those of winter, but skill for temperature was still higher than the average expected for autumn. The precipitation forecasts for autumn showed little skill. Forecasts for the spring following the El Niño were poor, as an unexpected circulation pattern emerged, giving the southern and southeastern United States a significant drought. This pattern, which differed from the historical El Niño pattern for spring, may have been related to a large pool of anomalously warm water that remained in the far eastern tropical Pacific through summer 1998 while the waters in the central Pacific cooled as the El Niño was replaced by a La Niña by the first week of June.

It is suggested that in addition to the obvious effects of the 1997–98 El Niño on 3-month mean climate in the United States, the El Niño (indeed, any strong El Niño or La Niña) may have provided a positive influence on the skill of medium-range forecasts of 5-day mean climate anomalies. This would reflect first the connection between the mean seasonal conditions and the individual contributing synoptic events, but also the possibly unexpected effect of the tropical boundary forcing unique to a given synoptic event. Circumstantial evidence suggests that the skill of medium-range forecasts is increased during lead times (and averaging periods) long enough that the boundary conditions have a noticeable effect, but not so long that the skill associated with the initial conditions disappears. Firmer evidence of a beneficial influence of ENSO on subclimate-scale forecast skill is needed, as the higher skill may be associated just with the higher amplitude of the forecasts, regardless of the reason for that amplitude.

Full access