Search Results

Showing 1–9 of 9 items for Author or Editor: F. Anthony Eckel
F. Anthony Eckel and Clifford F. Mass

Abstract

This study developed and evaluated a short-range ensemble forecasting (SREF) system with the goal of producing useful, mesoscale forecast probability (FP). Real-time, 0–48-h SREF predictions were produced and analyzed for 129 cases over the Pacific Northwest. Eight analyses from different operational forecast centers were used as initial conditions for running the fifth-generation Pennsylvania State University–National Center for Atmospheric Research (PSU–NCAR) Mesoscale Model (MM5).

Model error is a large source of forecast uncertainty and must be accounted for to maximize SREF utility, particularly for mesoscale, sensible weather phenomena. Although inclusion of model diversity improved FP skill (both reliability and resolution) and increased dispersion toward statistical consistency, dispersion remained inadequate. Conversely, systematic model errors (i.e., biases) must be removed from an SREF since they contribute to forecast error but not to forecast uncertainty. A grid-based, 2-week, running-mean bias correction was shown to improve FP skill through 1) better reliability by adjusting the ensemble mean toward the mean of the verifying analysis, and 2) better resolution by removing unrepresentative ensemble variance.
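The grid-based, 2-week, running-mean bias correction described above can be sketched as follows. This is an illustrative implementation only, not the authors' code; the class name, interface, and per-grid-point bookkeeping are assumptions.

```python
from collections import deque

class RunningMeanBiasCorrector:
    """Per-grid-point running-mean bias correction (hypothetical sketch).

    The bias estimate at each grid point is the mean of the last
    `window_days` forecast-minus-analysis errors, which is then
    subtracted from the raw forecast.
    """

    def __init__(self, window_days=14):
        self.window_days = window_days
        # grid point id -> deque of recent daily errors (oldest dropped automatically)
        self.errors = {}

    def update(self, point, forecast, verifying_analysis):
        # Record today's systematic-error sample for this grid point.
        d = self.errors.setdefault(point, deque(maxlen=self.window_days))
        d.append(forecast - verifying_analysis)

    def correct(self, point, forecast):
        # Subtract the running-mean bias; pass through if no training data yet.
        d = self.errors.get(point)
        if not d:
            return forecast
        bias = sum(d) / len(d)
        return forecast - bias
```

Applying the same correction to every ensemble member shifts the ensemble mean toward the verifying analysis without altering the spread, which is consistent with the skill gains the abstract attributes to the technique.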

Comparison of the multimodel (each member uses a unique model) and varied-model (each member uses a unique version of MM5) approaches indicated that the multimodel SREF exhibited greater dispersion and superior performance. It was also found that an ensemble of unequally likely members can be skillful as long as each member occasionally performs well. Finally, smaller grid spacing led to greater ensemble spread as smaller scales of motion were modeled. This study indicates substantial utility in current SREF systems and suggests several avenues for further improvement.

F. Anthony Eckel and Luca Delle Monache

Abstract

An analog ensemble (AnEn) is constructed by first matching up the current forecast from a numerical weather prediction (NWP) model with similar past forecasts. The verifying observation from each match is then used as an ensemble member. For at least some applications, the advantages of AnEn over an NWP ensemble (multiple real-time model runs) may include higher efficiency, avoidance of initial condition and model perturbation challenges, and little or no need for postprocessing calibration. While AnEn can capture flow-dependent error growth, it may miss aspects of error growth that can be represented dynamically by the multiple real-time model runs of an NWP ensemble. To combine the strengths of the AnEn and NWP ensemble approaches, a hybrid ensemble (HyEn) is constructed by finding m analogs for each member of a small n-member NWP ensemble, to produce a total of m × n members.
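The m × n construction above can be sketched in a few lines. This is a simplified, hypothetical interface, not the authors' implementation: it matches analogs on a single scalar predictor, whereas the real method compares several forecast features.

```python
def best_analogs(current_forecast, past_forecasts, past_observations, m):
    """Return the observations verifying the m past forecasts closest
    (by absolute difference) to the current forecast."""
    ranked = sorted(range(len(past_forecasts)),
                    key=lambda i: abs(past_forecasts[i] - current_forecast))
    return [past_observations[i] for i in ranked[:m]]

def hybrid_ensemble(nwp_members, past_forecasts, past_observations, m):
    """HyEn sketch: find m analogs for each of the n NWP ensemble
    members, yielding m * n hybrid members in total."""
    members = []
    for member_forecast in nwp_members:
        members.extend(best_analogs(member_forecast,
                                    past_forecasts, past_observations, m))
    return members
```

Because each hybrid member is a past observation rather than a model trajectory, the HyEn inherits the AnEn's implicit calibration while the n real-time runs supply the flow-dependent error growth.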

Forecast skill is compared between the AnEn, HyEn, and an NWP ensemble calibrated using logistic regression. The HyEn outperforms the other approaches for probabilistic 2-m temperature forecasts yet underperforms for 10-m wind speed. The mixed results reveal a dependence on the intrinsic skill of the NWP members employed. In this study, the NWP ensemble is underspread for both 2-m temperature and 10-m winds, yet displays some ability to represent flow-dependent error for the former and not the latter. Thus, the HyEn is a promising approach for efficient generation of high-quality probabilistic forecasts, but requires use of a small, and at least partially functional, NWP ensemble.

F. Anthony Eckel and Michael K. Walters

Abstract

Probabilistic quantitative precipitation forecasts (PQPFs) based on the National Centers for Environmental Prediction Medium-Range Forecast (MRF) ensemble currently perform below their full potential quality (i.e., accuracy and reliability). This unfulfilled potential is due to the MRF ensemble being adversely affected by systematic errors that arise from an imperfect forecast model and less than optimum ensemble initial perturbations. This research sought to construct a calibration to account for these systematic errors and thus produce higher quality PQPFs.

The main tool of the calibration was the verification rank histogram, which can be used to interpret and adjust an ensemble forecast. Using a large training dataset, many histograms were created, each characterized by a different forecast lead time and level of ensemble variability. These results were processed into probability surfaces, providing detailed information on performance of the ensemble as part of the calibration scheme.
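The verification rank histogram at the core of the calibration can be sketched as below. This is an illustrative minimal version: for each case it counts how many of the K members fall below the observation, so a statistically consistent ensemble yields a flat histogram over the K + 1 ranks. Tie handling and the paper's probability-surface step are omitted.

```python
def rank_histogram(ensemble_forecasts, observations):
    """Build a verification rank histogram (sketch).

    ensemble_forecasts: list of cases, each a list of K member values.
    observations: the verifying observation for each case.
    Returns counts over the K + 1 possible ranks of the observation.
    """
    k = len(ensemble_forecasts[0])
    counts = [0] * (k + 1)
    for members, obs in zip(ensemble_forecasts, observations):
        # Rank = number of members below the observation (0..K).
        rank = sum(1 for m in members if m < obs)
        counts[rank] += 1
    return counts
```

Piled-up counts in the outer ranks indicate underdispersion; a sloped histogram indicates bias, and both signatures can be inverted into probability adjustments, as the calibration scheme described above does.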

Improvement of the calibrated PQPF over the current uncalibrated PQPF was examined using a separate, large forecasting dataset, with climatological PQPF as the baseline. While the calibration technique noticeably improved the quality of PQPF and extended predictability by about 1 day, its usefulness was bounded by the intrinsic predictability limits of cumulative precipitation. Predictability was found to be dependent upon the precipitation category. For significant levels of precipitation (high thresholds), the calibration designed in this research was found to be useful only for short-range PQPFs. For low precipitation thresholds, the calibrated PQPF did prove to be of value in the medium range.

Mark S. Allen and F. Anthony Eckel

Abstract

This study explores the objective application of ambiguity information, that is, the uncertainty in forecast probability derived from an ensemble. One application approach, called uncertainty folding, merges ambiguity with forecast uncertainty information for subsequent use in standard risk-analysis decision making. Uncertainty folding is found to be of no practical benefit when tested in a low-order, weather forecast simulation. A second approach, called ulterior motives, attempts to use ambiguity information to aid secondary decision factors not considered in the standard risk analysis, while simultaneously maintaining the primary value associated with the probabilistic forecasts. Following ulterior motives, the practical utility of ambiguity information is demonstrated on real-world ensemble forecasts used to support decisions concerning the preparation for freezing temperatures paired with a secondary desire for the reduction in repeat false alarms. Sample products for communicating ambiguity to the user are also presented.

Mark S. Allen and F. Anthony Eckel
F. Anthony Eckel, Mark S. Allen, and Matthew C. Sittel

Abstract

Ambiguity is uncertainty in the prediction of forecast uncertainty, or in the forecast probability of a specific event, associated with random error in an ensemble forecast probability density function. In ensemble forecasting ambiguity arises from finite sampling and deficient simulation of the various sources of forecast uncertainty. This study introduces two practical methods of estimating ambiguity and demonstrates them on 5-day, 2-m temperature forecasts from the Japan Meteorological Agency’s Ensemble Prediction System. The first method uses the error characteristics of the calibrated ensemble as well as the ensemble spread to predict likely errors in forecast probability. The second method applies bootstrap resampling on the ensemble members to produce multiple likely values of forecast probability. Both methods include forecast calibration since ambiguity results from random and not systematic errors, which must be removed to reveal the ambiguity. Additionally, use of a more robust calibration technique (improving beyond just correcting average errors) is shown to reduce ambiguity. Validation using a low-order dynamical system reveals that both estimation methods have deficiencies but exhibit some skill, making them candidates for application to decision making—the subject of a companion paper.
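The second estimation method, bootstrap resampling of the ensemble members, can be sketched as follows. The function name, interface, and 90% interval are illustrative assumptions; the abstract specifies only that resampling produces multiple likely values of the forecast probability.

```python
import random

def probability_with_ambiguity(members, threshold, n_boot=1000, seed=0):
    """Bootstrap sketch of ambiguity in an ensemble-derived probability.

    Resamples the K members with replacement n_boot times, recomputes the
    exceedance probability each time, and summarizes the spread of those
    probabilities with a 90% interval (a simple ambiguity measure).
    """
    rng = random.Random(seed)
    k = len(members)
    probs = []
    for _ in range(n_boot):
        sample = [rng.choice(members) for _ in range(k)]
        probs.append(sum(1 for m in sample if m > threshold) / k)
    probs.sort()
    point_prob = sum(1 for m in members if m > threshold) / k
    return point_prob, probs[int(0.05 * n_boot)], probs[int(0.95 * n_boot)]
```

A wide interval flags a forecast probability that is itself poorly determined, which is exactly the information the companion paper feeds into decision making. Note this simple version captures only the finite-sampling contribution to ambiguity, not deficiencies in simulating the sources of uncertainty.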

F. Anthony Eckel, Mark S. Allen, and Matthew C. Sittel
Luca Delle Monache, F. Anthony Eckel, Daran L. Rife, Badrinath Nagarajan, and Keith Searight

Abstract

This study explores an analog-based method to generate an ensemble [analog ensemble (AnEn)] in which the probability distribution of the future state of the atmosphere is estimated with a set of past observations that correspond to the best analogs of a deterministic numerical weather prediction (NWP). An analog for a given location and forecast lead time is defined as a past prediction, from the same model, that has similar values for selected features of the current model forecast. The AnEn is evaluated for 0–48-h probabilistic predictions of 10-m wind speed and 2-m temperature over the contiguous United States and against observations provided by 550 surface stations, over the 23 April–31 July 2011 period. The AnEn is generated from the Environment Canada (EC) deterministic Global Environmental Multiscale (GEM) model and a 12–15-month-long training period of forecasts and observations. The skill and value of AnEn predictions are compared with forecasts from a state-of-the-science NWP ensemble system, the 21-member Regional Ensemble Prediction System (REPS). The AnEn exhibits high statistical consistency and reliability and the ability to capture the flow-dependent behavior of errors, and it has equal or superior skill and value compared to forecasts generated via logistic regression (LR) applied to both the deterministic GEM (as in AnEn) and REPS [ensemble model output statistics (EMOS)]. The real-time computational cost of AnEn and LR is lower than EMOS.
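The analog selection described above, ranking past forecasts by their similarity to the current forecast across selected features and using the matching observations as members, can be sketched as below. This is a simplified assumption-laden version: the published metric also weights features and searches a time window around the forecast lead time, both omitted here.

```python
def analog_ensemble(current_features, history, n_members):
    """AnEn sketch: pick the observations that verified the past
    forecasts most similar to the current deterministic forecast.

    current_features: feature vector of today's model forecast.
    history: list of (past_feature_vector, verifying_observation) pairs
             from the same model at the same location and lead time.
    """
    def dist(features):
        # Plain Euclidean distance over the selected forecast features.
        return sum((f - c) ** 2
                   for f, c in zip(features, current_features)) ** 0.5

    ranked = sorted(history, key=lambda pair: dist(pair[0]))
    return [obs for _, obs in ranked[:n_members]]
```

Because the members are observations rather than model output, the resulting ensemble needs little or no postprocessing calibration, consistent with the reliability results reported for the AnEn.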

Edward I. Tollerud, Brian Etherton, Zoltan Toth, Isidora Jankov, Tara L. Jensen, Huiling Yuan, Linda S. Wharton, Paula T. McCaslin, Eugene Mirvis, Bill Kuo, Barbara G. Brown, Louisa Nance, Steven E. Koch, and F. Anthony Eckel