Search Results

Showing items 11–14 of 14 for Author or Editor: Roman Krzysztofowicz
Roman Krzysztofowicz and W. Britt Evans

Abstract

A sequence of meteorological predictands of one kind (e.g., temperature) forms a discrete-time, continuous-state stochastic process, which typically is nonstationary and periodic (because of seasonality). Three contributions to the field of probabilistic forecasting of such processes are reported. First, a meta-Gaussian Markov model of the stochastic process is formulated, which provides a climatic probabilistic forecast with a lead time of l days in the form of a (prior) l-step transition distribution function. A measure of the temporal dependence of the process is the autocorrelation coefficient (which is nonstationary). Second, a Bayesian processor of forecast (BPF) is formulated, which fuses the climatic probabilistic forecast with an operational deterministic forecast produced by any system (e.g., a numerical weather prediction model, a human forecaster, a statistical postprocessor). A measure of the predictive performance of the system is the informativeness score (which may be nonstationary). The BPF outputs a probabilistic forecast in the form of a (posterior) l-step transition distribution function, which quantifies the uncertainty about the predictand that remains, given the antecedent observation and the deterministic forecast. The workings of the Markov BPF are illustrated with probabilistic forecasts obtained from the official deterministic forecasts of the daily maximum temperature issued by the U.S. National Weather Service with lead times of 1, 4, and 7 days. Third, a numerical experiment demonstrates how the degree of posterior uncertainty varies with the informativeness of the deterministic forecast and the autocorrelation of the predictand series. It is concluded that, depending upon the level of informativeness, the Markov BPF is a contender for operational implementation when the rank autocorrelation coefficient is between 0.3 and 0.6, and is the preferred processor when it exceeds 0.6. Thus, the climatic autocorrelation can play a significant role in quantifying, and ultimately in reducing, the meteorological forecast uncertainty.
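To make the fusion step concrete: in the Gaussian-linear special case (a simplification of the paper's meta-Gaussian formulation), the climatic Markov prior and the forecast likelihood are both normal, and the posterior follows from a conjugate update in which precisions add. The sketch below is illustrative only; the function name, parameterization, and the numerical values are hypothetical, not taken from the paper.

```python
import math

def gaussian_bpf(x0, y, rho, mu, s, alpha, beta, sigma):
    """Posterior of predictand X given antecedent observation x0 and a
    deterministic forecast y, in a Gaussian-linear special case.

    Prior (climatic one-step Markov transition):
        X | x0 ~ N(mu + rho*(x0 - mu), tau^2),  tau^2 = s^2 * (1 - rho^2),
        with climatic mean mu, climatic std s, lag autocorrelation rho.
    Likelihood (forecasting system):
        Y | X ~ N(alpha*X + beta, sigma^2).
    Returns (posterior mean, posterior std).
    """
    prior_mean = mu + rho * (x0 - mu)
    prior_var = s * s * (1.0 - rho * rho)
    # Conjugate normal-normal update: precisions (inverse variances) add.
    post_prec = 1.0 / prior_var + (alpha * alpha) / (sigma * sigma)
    post_mean = (prior_mean / prior_var
                 + alpha * (y - beta) / (sigma * sigma)) / post_prec
    return post_mean, math.sqrt(1.0 / post_prec)

# Hypothetical numbers: an informative forecast (small sigma) pulls the
# posterior toward the forecast and shrinks its spread below both the
# climatic std (5.0) and the forecast error std alone would suggest.
m, sd = gaussian_bpf(x0=20.0, y=25.0, rho=0.6, mu=18.0, s=5.0,
                     alpha=1.0, beta=0.0, sigma=2.0)
```

The posterior standard deviation here is smaller than both the prior transition spread and the forecast error spread, which mirrors the abstract's point that posterior uncertainty decreases as either the autocorrelation or the informativeness increases.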

Roman Krzysztofowicz and W. Britt Evans

Abstract

The Bayesian processor of forecast (BPF) is developed for a continuous predictand. Its purpose is to process a deterministic forecast (a point estimate of the predictand) into a probabilistic forecast (a distribution function, a density function, and a quantile function). The quantification of uncertainty is accomplished via Bayes theorem by extracting and fusing two kinds of information from two different sources: (i) a long sample of the predictand from the National Climatic Data Center, and (ii) a short sample of the official National Weather Service forecast from the National Digital Forecast Database. The official forecast is deterministic and hence deficient: it contains no information about uncertainty. The BPF remedies this deficiency by outputting the complete and well-calibrated characterization of uncertainty needed by decision makers and information providers. The BPF comes furnished with (i) the meta-Gaussian model, which fits meteorological data well as it allows all forms of marginal distribution functions, and nonlinear and heteroscedastic dependence structures, and (ii) the statistical procedures for estimation of parameters from asymmetric samples and for coping with nonstationarities in the predictand and the forecast due to the annual cycle and the lead time. A comprehensive illustration of the BPF is reported for forecasts of the daily maximum temperature issued with lead times of 1, 4, and 7 days for three stations in two seasons (cool and warm).
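The meta-Gaussian model rests on mapping each margin to a standard normal score (the normal quantile transform) so that dependence can be modeled linearly in the transformed space, whatever the original marginal distribution. A minimal stdlib-only sketch follows; the use of Weibull plotting positions r/(n+1) for the empirical probabilities is an assumed choice, not necessarily the paper's estimator.

```python
import bisect
from statistics import NormalDist

def nqt(sample):
    """Normal quantile transform: map values to standard-normal scores
    via the sample's empirical distribution function.

    Returns the sorted sample and a function x -> z, where z is the
    standard-normal quantile of the empirical non-exceedance
    probability of x (Weibull plotting positions, clipped away from
    0 and 1 so the Gaussian quantile stays finite)."""
    xs = sorted(sample)
    n = len(xs)
    nd = NormalDist()

    def to_z(x):
        r = bisect.bisect_right(xs, x)            # rank of x in the sample
        p = min(max(r / (n + 1), 1.0 / (n + 1)), n / (n + 1))
        return nd.inv_cdf(p)

    return xs, to_z

# A skewed sample: the transform sends the median to z = 0 and preserves
# order, flattening the skew into the Gaussian scale.
xs, to_z = nqt([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 50])
```

After both the predictand and the forecast are transformed this way, a linear-Gaussian update (as in the companion Markov BPF paper) applies in z-space, and the posterior is mapped back through the inverse transform to yield the distribution, density, and quantile functions on the original scale.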

Roman Krzysztofowicz, William J. Drzal, Theresa Rossi Drake, James C. Weyman, and Louis A. Giordano

Abstract

A methodology has been formulated to aid a field forecaster in preparing probabilistic quantitative precipitation forecasts (QPFs) for river basins. The format of the probabilistic QPF is designed to meet three requirements: (i) it is compatible with the forecaster's judgmental process, which involves meteorological inference and probabilistic reasoning; (ii) it can be input directly into a hydrologic model that produces river stage forecasts (at present); and (iii) it provides information sufficient for producing probabilistic river stage forecasts (in the future).

The methodology, implemented as a human–computer system, has been tested operationally on two river basins by the Weather Service Forecast Office in Pittsburgh, Pennsylvania, since August 1990. The article elaborates on the rationale behind the proposed methods, details the system components, recommends an information processing scheme for judgmental probabilistic forecasting, and outlines training, testing, and verification programs.

Dingchen Hou, Mike Charles, Yan Luo, Zoltan Toth, Yuejian Zhu, Roman Krzysztofowicz, Ying Lin, Pingping Xie, Dong-Jun Seo, Malaquias Pena, and Bo Cui

Abstract

Two widely used precipitation analyses are the Climate Prediction Center (CPC) unified global daily gauge analysis and the Stage IV analysis, based on quantitative precipitation estimates from multisensor observations. The former is based on gauge records with a uniform quality control across the entire domain and thus inspires more confidence, but provides only 24-h accumulations at ⅛° resolution. The Stage IV dataset, on the other hand, has higher spatial and temporal resolution, but is subject to different methods of quality control and adjustment by the different River Forecast Centers. This article describes a methodology used to generate a new dataset by adjusting the Stage IV 6-h accumulations, based on available joint samples of the two analyses, to take advantage of both datasets. A simple linear regression model is applied to the archived historical Stage IV and CPC datasets after the former is aggregated to the CPC grid and daily accumulation. The aggregated Stage IV analysis is then adjusted according to this linear model and downscaled back to its original resolution. The new dataset, named the Climatology-Calibrated Precipitation Analysis (CCPA), retains the spatial and temporal patterns of the Stage IV analysis while having its long-term average and climatological probability distribution closer to those of the CPC analysis. The main limitation of the methodology, at some locations, is associated with heavy to extreme precipitation events, which the Stage IV dataset tends to underestimate; CCPA cannot effectively correct this, because of the linear regression model and the relative scarcity of heavy precipitation in the training data sample.
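The aggregate-regress-adjust-downscale chain described above can be sketched as follows. This is a toy illustration under simplifying assumptions: tiny hypothetical grids, a single global regression rather than per-gridpoint fits, and a ratio-based downscaling rule chosen so the fine-scale spatial pattern is preserved; none of these choices should be read as the operational CCPA configuration.

```python
import numpy as np

def ccpa_adjust(stage4, cpc, block=2):
    """Sketch of a CCPA-style calibration on toy grids.

    stage4 : fine-grid precipitation analysis (shape divisible by block)
    cpc    : coarse-grid gauge analysis (stage4 aggregated by `block`)
    Steps: (1) aggregate the fine grid to the coarse grid by block
    means, (2) fit a linear regression y = a*x + b of the coarse gauge
    analysis on the aggregates, (3) adjust the aggregates with that
    line (clipped so precipitation stays nonnegative), (4) downscale by
    scaling each fine cell with its block's adjustment ratio, which
    retains the fine-scale spatial pattern."""
    h, w = stage4.shape
    # (1) block means reproduce the coarse-grid accumulation
    coarse = stage4.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    # (2) least-squares fit coarse -> cpc (single global regression here)
    a, b = np.polyfit(coarse.ravel(), cpc.ravel(), 1)
    # (3) calibrated coarse field, kept nonnegative
    adjusted = np.clip(a * coarse + b, 0.0, None)
    # (4) multiplicative downscaling: each block's cells share one ratio
    safe = np.where(coarse > 0, coarse, 1.0)
    ratio = np.where(coarse > 0, adjusted / safe, 1.0)
    return stage4 * np.kron(ratio, np.ones((block, block)))

# Toy check: if the gauge analysis is exactly half the aggregated
# fine-grid field, the regression recovers a = 0.5, b = 0, and the
# output is the fine field scaled by 0.5, pattern intact.
s4 = np.arange(16, dtype=float).reshape(4, 4)
cpc = 0.5 * s4.reshape(2, 2, 2, 2).mean(axis=(1, 3))
out = ccpa_adjust(s4, cpc)
```

The multiplicative step is one simple way to honor the abstract's claim that CCPA "retains the spatial and temporal patterns of the Stage IV analysis": within each coarse block, relative cell-to-cell differences are unchanged, while block totals move toward the gauge analysis.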
