Search Results

You are looking at 1–8 of 8 items for:

  • Author or Editor: Zoltan Toth
  • Weather and Forecasting
  • All content
Huug M. van den Dool and Zoltan Toth

Abstract

It has been observed by many that the skill of categorical forecasts, when decomposed into the contributions from each category separately, tends to be low, if not absent or negative, in the “near normal” (N) category. We have witnessed many discussions as to why it is so difficult to forecast near normal weather, without a satisfactory explanation ever having reached the literature. After presenting some fresh examples, we try to explain this remarkable fact from a number of statistical considerations and from the various definitions of skill. This involves definitions of rms error and skill that are specific to a given anomaly amplitude. There is low skill in the N class of a three-category forecast system because a) our forecast methods tend to have an rms error that depends little on forecast amplitude, while the width of the categories for predictands with a near-Gaussian distribution is very narrow near the center, and b) it is easier for the verifying observation to ‘escape’ from the closed N class (a two-sided escape chance) than from the open-ended outer classes. At a different level of explanation, there is a lack of skill near the mean because the definition of skill compares the method in need of verification to random forecasts as the reference, and random forecasts happen to perform best, in the rms sense, near the mean. Lack of skill near the mean is not restricted to categorical forecasts or to any specific lead time.
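The two statistical effects labeled a) and b) can be made concrete with a short sketch, assuming a standard Gaussian climatology split into three equally likely classes and a Gaussian forecast error; the function names are ours, for illustration only:

```python
from statistics import NormalDist

def tercile_bounds(dist=NormalDist(0.0, 1.0)):
    """Boundaries of the three climatologically equally likely classes
    (below, near, above normal) for a Gaussian predictand."""
    lo = dist.inv_cdf(1.0 / 3.0)
    hi = dist.inv_cdf(2.0 / 3.0)
    return lo, hi

def escape_probability(anomaly, rmse):
    """Chance that an observation centred on `anomaly`, with Gaussian
    error of standard deviation `rmse`, 'escapes' the class that
    contains `anomaly`."""
    lo, hi = tercile_bounds()
    err = NormalDist(anomaly, rmse)
    if anomaly < lo:                 # open-ended below-normal class
        return 1.0 - err.cdf(lo)
    if anomaly > hi:                 # open-ended above-normal class
        return err.cdf(hi)
    # closed near-normal class: two-sided escape
    return err.cdf(lo) + (1.0 - err.cdf(hi))
```

With an rms error comparable to the class width, an N-class forecast at the mean has a two-sided escape chance well above one-half, while an outer-class forecast of equal rms error escapes on one side only.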

Rather than recommending a solution, we caution against the over-interpretation of the notion of skill-by-class. It appears that low skill near the mean is largely a matter of definition and may therefore not require a physical-dynamical explanation. We note that the whole problem is gone when one replaces the random reference forecast by persistence.

We finally note that low skill near the mean has an element of applying the notion of “forecasting forecast skill” in practice, long before it was realized that such a forecast of skill was being made. We show analytically that, as long as the forecast anomaly amplitude is small relative to the forecast rms error, one should expect the anomaly correlation to increase linearly with forecast magnitude. This has been found empirically by Tracton et al. (1989).
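The linear relation noted above can be recovered from a minimal error model (our notation, not the paper's): let the forecast anomaly f have amplitude A, and let the observation be o = f + ε with an independent error of rms magnitude E. Then

```latex
\mathrm{AC} \;=\; \frac{\langle f,\,o\rangle}{\lVert f\rVert\,\lVert o\rVert}
\;\approx\; \frac{A^{2}}{A\sqrt{A^{2}+E^{2}}}
\;=\; \frac{A}{\sqrt{A^{2}+E^{2}}}
\;\approx\; \frac{A}{E}\qquad (A \ll E),
```

i.e., for forecast amplitudes small relative to the rms error, the anomaly correlation grows linearly with A.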

Zoltan Toth, Yuejian Zhu, and Timothy Marchok

Abstract

In the past decade ensemble forecasting has developed into an integral part of numerical weather prediction. Flow-dependent forecast probability distributions can be readily generated from an ensemble, allowing for the identification of forecast cases with high and low uncertainty. The ability of the NCEP ensemble to distinguish between high and low uncertainty forecast cases is studied here quantitatively. Ensemble mode forecasts, along with traditional higher-resolution control forecasts, are verified in terms of predicting the probability of the true state being in 1 of 10 climatologically equally likely 500-hPa height intervals. A stratification of the forecast cases by the degree of overall agreement among the ensemble members reveals great differences in forecast performance between the cases identified by the ensemble as the least and most uncertain. A new ensemble-based forecast product, the “relative measure of predictability,” is introduced to identify forecasts with below- and above-average uncertainty. This measure is standardized according to geographical location, the phase of the annual cycle, lead time, and also the position of the forecast value in terms of the climatological frequency distribution. The potential benefits of using this and other ensemble-based measures of predictability are demonstrated through synoptic examples.
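The verification setup described here can be sketched in a few lines, assuming the 10 climatologically equally likely intervals are estimated from a sorted climatological sample (function names are ours; the standardization behind the relative measure of predictability is not reproduced):

```python
def decile_edges(climatology):
    """Nine boundaries of 10 climatologically equally likely intervals,
    estimated from a sample of past values."""
    s = sorted(climatology)
    n = len(s)
    return [s[(n * k) // 10] for k in range(1, 10)]

def bin_index(value, edges):
    """Which of the 10 intervals (0..9) a value falls in."""
    return sum(value > e for e in edges)

def ensemble_bin_probs(members, edges):
    """Forecast probability of each interval: relative frequency of
    ensemble members falling in it."""
    counts = [0] * 10
    for m in members:
        counts[bin_index(m, edges)] += 1
    return [c / len(members) for c in counts]
```

Stratifying cases by how strongly the members cluster in a few bins separates the low-uncertainty forecasts from the high-uncertainty ones.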

Joel K. Sivillo, Jon E. Ahlquist, and Zoltan Toth

Abstract

An ensemble forecast is a collection (an ensemble) of forecasts that all verify at the same time. These forecasts are regarded as possible scenarios given the uncertainty associated with forecasting. With such an ensemble, one can address issues that go beyond simply estimating the best forecast. These include estimation of the probability of various events and estimation of the confidence that can be associated with a forecast.
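The two quantities named here, event probabilities and forecast confidence, can be read off an ensemble directly; a schematic sketch (function names are ours):

```python
def event_probability(ensemble, event):
    """Estimate P(event) as the fraction of members satisfying it."""
    hits = sum(1 for member in ensemble if event(member))
    return hits / len(ensemble)

def mean_and_spread(ensemble):
    """Ensemble mean (best single estimate) and standard deviation,
    a simple confidence measure: small spread = high confidence."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / n
    return mean, var ** 0.5
```

For example, `event_probability(temps, lambda t: t < 0.0)` estimates the probability of freezing from an ensemble of temperature forecasts.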

Global ensemble forecasts out to 10 days have been computed at both the U.S. and European central forecasting centers since December 1992. Since 1995, the United States has computed experimental regional ensemble forecasts focusing on smaller-scale forecast uncertainties out to 2 days.

The authors address challenges associated with ensemble forecasting such as 1) formulating an ensemble, 2) choosing the number of forecasts in an ensemble, 3) extracting information from an ensemble of forecasts, 4) displaying information from an ensemble of forecasts, and 5) interpreting ensemble forecasts. Two synoptic-scale examples of ensemble forecasting from the winter of 1995/96 are also shown.

Warren J. Tennant, Zoltan Toth, and Kevin J. Rae

Abstract

The National Centers for Environmental Prediction (NCEP) Ensemble Forecasting System (EFS) is used operationally in South Africa for medium-range forecasts up to 14 days ahead. The use of model-generated probability forecasts clearly benefits the skill of the 1–7-day forecasts: the forecast probability distribution spans the observed outcomes more successfully than a single deterministic forecast does, substantially reducing the instances of missed events. In addition, the probability forecasts generated using the EFS are particularly useful for estimating confidence in the forecasts. During the second week of the forecast the EFS is used as a heads-up for possible synoptic-scale events and also for predicting average weather conditions and probability density distributions of some elements, such as maximum temperature and wind. This paper assesses the medium-range forecast process and the application of the NCEP EFS at the South African Weather Service. It includes a description of the various medium-range products, adaptive bias-correction methods applied to the forecasts, verification of the forecast products, and a discussion of the various challenges that face researchers and forecasters alike.
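The "spanning" benefit can be quantified as a missed-event rate: the fraction of cases in which the verifying observation falls outside the ensemble envelope. An illustrative sketch with made-up data (a deterministic forecast is the one-member special case):

```python
def missed_event_rate(forecast_sets, observations):
    """Fraction of cases where the observation falls outside the
    ensemble envelope (min..max of the members): a 'missed event'."""
    misses = 0
    for members, obs in zip(forecast_sets, observations):
        if not (min(members) <= obs <= max(members)):
            misses += 1
    return misses / len(observations)
```

A wider, well-calibrated ensemble envelope drives this rate down relative to a single deterministic forecast.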

Bo Cui, Zoltan Toth, Yuejian Zhu, and Dingchen Hou

Abstract

The main task of this study is to introduce a statistical postprocessing algorithm to reduce the bias in the National Centers for Environmental Prediction (NCEP) and Meteorological Service of Canada (MSC) ensemble forecasts before they are merged to form a joint ensemble within the North American Ensemble Forecast System (NAEFS). This statistical postprocessing method applies a Kalman filter type algorithm to accumulate the decaying averaging bias and produces bias-corrected ensembles for 35 variables. NCEP implemented this bias-correction technique in 2006. NAEFS is a joint operational multimodel ensemble forecast system that combines NCEP and MSC ensemble forecasts after bias correction. According to operational statistical verification, both the NCEP and MSC bias-corrected ensemble forecast products are enhanced significantly. In addition to the operational calibration technique, three other experiments were designed to assess and mitigate ensemble biases on the model grid: a decaying averaging bias calibration method with short samples, a climate mean bias calibration method, and a bias calibration method using dependent data. Preliminary results show that the decaying averaging method works well for the first few days. After removing the decaying averaging bias, the calibrated NCEP operational ensemble has improved probabilistic performance for all measures until day 5. The reforecast ensembles from the Earth System Research Laboratory’s Physical Sciences Division with and without the climate mean bias correction were also examined. A comparison between the operational and the bias-corrected reforecast ensembles shows that the climate mean bias correction can add value, especially for week-2 probability forecasts.
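The decaying-average update at the heart of the method fits in a few lines; the weight below is illustrative, not the operational value:

```python
def update_bias(prev_bias, forecast, analysis, weight=0.02):
    """Decaying-average (Kalman filter type) bias update:
    b_t = (1 - w) * b_{t-1} + w * (f_t - a_t),
    where f_t is the forecast and a_t the verifying analysis.
    The weight w = 0.02 here is illustrative only."""
    return (1.0 - weight) * prev_bias + weight * (forecast - analysis)

def correct(forecast, bias):
    """Bias-corrected forecast."""
    return forecast - bias
```

Run once per forecast cycle, the estimate converges toward the recent mean forecast error while older errors decay away geometrically.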

Jie Feng, Jing Zhang, Zoltan Toth, Malaquias Peña, and Sai Ravela

Abstract

Ensemble prediction is a widely used tool in weather forecasting. In particular, the arithmetic mean (AM) of ensemble members is used to filter out unpredictable features from a forecast. AM is a pointwise statistical concept, providing the best sample-based estimate of the expected value of any single variable. The atmosphere, however, is a multivariate system with spatially coherent features characterized by strong correlations. Disregarding such correlations, the AM of an ensemble of forecasts not only removes unpredictable noise but also flattens features whose presence is still predictable, albeit with somewhat uncertain location. As a consequence, AM destroys the structure and reduces the amplitude and variability associated with partially predictable features. Here we explore the use of an alternative concept of central tendency for estimating the expected feature (instead of single values) in atmospheric systems. Features that are coherent across ensemble members are first collocated at their mean position before the AM of the aligned members is taken. Unlike earlier definitions based on complex variational minimization (the field coalescence of Ravela and the generalized ensemble mean of Purser), the proposed feature-oriented mean (FM) uses simple and computationally efficient vector operations. Though FM is still not a dynamically realizable state, a preliminary evaluation of ensemble geopotential height forecasts indicates that it retains more variance than AM, without a noticeable drop in skill. Beyond ensemble forecasting, possible future applications include a wide array of climate studies where the collocation of larger-scale features of interest may yield enhanced compositing results.
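A one-dimensional sketch of the FM idea, assuming each member contains a single feature locatable by its maximum; the cyclic-shift alignment and the function names are our simplifications of the paper's vector operations:

```python
def feature_position(field):
    """Locate the feature as the index of the field maximum."""
    return max(range(len(field)), key=lambda i: field[i])

def shift(field, k):
    """Cyclically shift a 1-D field by k grid points."""
    n = len(field)
    return [field[(i - k) % n] for i in range(n)]

def feature_oriented_mean(members):
    """Collocate each member's feature at the mean feature position,
    then take the arithmetic mean of the aligned members."""
    positions = [feature_position(m) for m in members]
    mean_pos = round(sum(positions) / len(positions))
    aligned = [shift(m, mean_pos - p) for p, m in zip(positions, members)]
    n = len(members)
    return [sum(col) / n for col in zip(*aligned)]
```

Where a plain AM of two displaced peaks yields two half-amplitude bumps, FM keeps a single full-amplitude peak at the mean position.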

Zoltan Toth, Eugenia Kalnay, Steven M. Tracton, Richard Wobus, and Joseph Irwin

Abstract

Ensemble forecasting has been operational at NCEP (formerly the National Meteorological Center) since December 1992. In March 1994, more ensemble forecast members were added. In the new configuration, 17 forecasts with the NCEP global model are run every day, out to 16-day lead time. Beyond the 3 control forecasts (a T126 and a T62 resolution control at 0000 UTC and a T126 control at 1200 UTC), 14 perturbed forecasts are made at the reduced T62 resolution. Global products from the ensemble forecasts are available from NCEP via anonymous FTP.

The initial perturbation vectors are derived from seven independent breeding cycles, where the fast-growing nonlinear perturbations grow freely, apart from the periodic rescaling that keeps their magnitude compatible with the estimated uncertainty within the control analysis. The breeding process is an integral part of the extended-range forecasts, and the generation of the initial perturbations for the ensemble is done at no computational cost beyond that of running the forecasts.
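The breeding cycle described above reduces to a short loop: run control and perturbed forecasts side by side, let the difference grow freely and nonlinearly, and rescale it periodically to the analysis-uncertainty amplitude. A toy sketch, with a logistic map standing in for the forecast model (our assumption; any chaotic model would do):

```python
def model_step(x, r=3.9):
    """One step of a toy chaotic model (logistic map) standing in
    for the forecast model."""
    return r * x * (1.0 - x)

def breeding_cycle(x0, pert0, amplitude, steps):
    """Advance control and perturbed states together; after each step
    rescale the freely grown difference back to the prescribed
    amplitude. Returns the final bred perturbation."""
    control, perturbed = x0, x0 + pert0
    for _ in range(steps):
        control = model_step(control)
        perturbed = model_step(perturbed)
        diff = perturbed - control
        diff *= amplitude / abs(diff)   # periodic rescaling
        perturbed = control + diff
    return diff
```

After a few cycles the perturbation aligns with the fastest-growing directions of the flow, which is what makes breeding essentially free on top of the forecasts themselves.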

A number of graphical forecast products derived from the ensemble are available to users, including forecasters at the Hydrometeorological Prediction Center and the Climate Prediction Center of NCEP. The products include the ensemble and cluster means, standard deviations, and probabilities of different events. One of the most widely used products is the “spaghetti” diagram, in which a single map contains all 17 ensemble forecasts, as depicted by a selected contour level of a field, for example, the 5520-m contour of 500-hPa height or the 50 m s−1 contour of wind speed at jet level.

With the aid of the above graphical displays and also by objective verification, the authors have established that the ensemble can provide valuable information for both the short and the extended range. In particular, the ensemble can indicate potential problems with the high-resolution control that occur on rare occasions in the short range. Most of the time, the “cloud” of the ensemble encompasses the verification, thus providing a set of alternate possible scenarios beyond that of the control. Moreover, the ensemble provides a more consistent outlook for the future. While consecutive control forecasts verifying on a particular date may often display large “jumps” from one day to the next, the ensemble changes much less, and its envelope of solutions typically remains unchanged. In addition, the ensemble extends the practical limit of weather forecasting by about a day. For example, significant new weather systems (blocking, extratropical cyclones, etc.) are usually detected by some ensemble members a day earlier than by the high-resolution control. Similarly, the ensemble mean improves forecast skill by a day or more in the medium to extended range, with respect to the skill of the control. The ensemble is also useful in pointing out areas and times where the spread within the ensemble is high, and consequently low skill can be expected, and, conversely, those cases in which forecasters can make a confident extended-range forecast because low ensemble spread indicates high predictability. Another possible application of the ensemble is identifying potential model errors: a case of low ensemble spread with all forecasts verifying poorly may be an indication of model bias. The advantage of the ensemble approach is that it can potentially indicate a systematic bias even for a single case, while studies using only a control forecast need to average over many cases.

Zhao-Xia Pu, Eugenia Kalnay, David Parrish, Wanshu Wu, and Zoltan Toth

Abstract

The errors in the first-guess (forecast field) of an analysis system vary from day to day, but, as in all current operational data assimilation systems, forecast error covariances are assumed to be constant in time in the NCEP operational three-dimensional variational analysis system (known as a spectral statistical interpolation or SSI). This study focuses on the impact of modifying the error statistics by including effects of the “errors of the day” on the analysis system. An estimate of forecast uncertainty, as defined from the bred growing vectors of the NCEP operational global ensemble forecast, is applied in the NCEP operational SSI analysis system. The growing vectors are used to estimate the spatially and temporally varying degree of uncertainty in the first-guess forecasts used in the analysis. The measure of uncertainty is defined by a ratio of the local amplitude of the growing vectors, relative to a background amplitude measure over a large area. This ratio is used in the SSI system for adjusting the observational error term (giving more weight to observations in regions of larger forecast errors). Preliminary experiments with the low-resolution global system show positive impact of this virtually cost-free method on the quality of the analysis and medium-range weather forecasts, encouraging further tests for operational use. The results of a 45-day parallel run, and a discussion of other methods to take advantage of the knowledge of the day-to-day variation in forecast uncertainties provided by the NCEP ensemble forecast system, are also presented in the paper.
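Schematically, the adjustment amounts to scaling the observation error variance by a local-to-background amplitude ratio of the bred vectors; a sketch under our assumptions (the exact functional form used in the SSI is not reproduced here):

```python
def uncertainty_ratio(local_bred_amplitude, background_amplitude):
    """Ratio of the local bred-vector amplitude to a large-area
    background amplitude: > 1 means larger-than-average
    first-guess ('errors of the day') uncertainty."""
    return local_bred_amplitude / background_amplitude

def adjusted_obs_error_var(obs_error_var, ratio, power=1.0):
    """Schematic observation-error adjustment: shrinking the assigned
    observation error variance where the ratio is large gives
    observations more weight relative to the less trustworthy
    first guess there. The exponent is an illustrative knob."""
    return obs_error_var / ratio ** power
```

In regions where the bred vectors indicate above-average first-guess error, the analysis is thereby drawn more strongly toward the observations.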
