Search Results

Showing 1–10 of 33 items for

  • Author or Editor: Richard W. Katz
  • All content
Richard W. Katz

Abstract

Kraus (1977) has demonstrated that subtropical African droughts exhibit statistically significant persistence. It is emphasized, through a further analysis of annual subtropical African rainfall, that the data are highly variable with only a small degree of persistence. These results have important implications concerning the appropriate characterization of the likelihood of drought for dissemination to decision-makers.

Richard W. Katz

Abstract

A dynamic decision-making problem is considered involving the use of information about the autocorrelation of a climate variable. Specifically, an infinite horizon, discounted version of the dynamic cost-loss ratio model is treated, in which only two states of weather (“adverse” or “not adverse”) are possible and only two actions are permitted (“protect” or “do not protect”). To account for the temporal dependence of the sequence of states of the occurrence (or nonoccurrence) of adverse weather, a Markov chain model is employed. It is shown that knowledge of this autocorrelation has potential economic value to a decision maker, even without any genuine forecasts being available. Numerical examples are presented to demonstrate that a decision maker who erroneously follows a suboptimal strategy based on the belief that the climate variable is temporally independent could incur unnecessary expense. This approach also provides a natural framework for extension to the situation in which forecasts are available for an autocorrelated climate variable.
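As a rough illustration of this two-state, discounted dynamic cost-loss model, the sketch below runs value iteration on a Markov chain for adverse-weather occurrence. All numerical values (cost, loss, discount factor, transition probabilities) are invented for illustration and are not taken from the paper.

```python
# Hypothetical parameters: cost of protecting, loss if adverse
# weather occurs unprotected, and a per-period discount factor.
C, L, beta = 0.2, 1.0, 0.95

# Markov chain: probability the next period is adverse, given the
# current state (1 = adverse now); note the positive autocorrelation.
p_adverse = {0: 0.15, 1: 0.55}

# Value iteration for the infinite-horizon discounted expected expense.
V = {0: 0.0, 1: 0.0}
for _ in range(2000):
    V = {
        s: min(
            C + beta * (q * V[1] + (1 - q) * V[0]),      # protect
            q * L + beta * (q * V[1] + (1 - q) * V[0]),  # do not protect
        )
        for s, q in p_adverse.items()
    }
```

With these illustrative numbers the conditional adverse probability exceeds the cost-loss ratio C/L only following an adverse period, so the optimal strategy protects only in that state; a decision maker who wrongly assumed temporal independence would act on a single unconditional probability and take the same action in both states.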

Richard W. Katz

Abstract

A procedure for making statistical inferences about differences between population means from the output of general circulation model (GCM) climate experiments is presented. A parametric time series modeling approach is taken, yielding a potentially more powerful technique for detecting climatic change than the simpler schemes used heretofore. The application of this procedure is demonstrated through the use of GCM control data to estimate the variance of winter and summer time averages of daily mean surface air temperature. The test application provides estimates of the magnitude of climatic change that the procedure should be able to detect. A related result of the analysis is that autoregressive processes of higher than first order are needed to adequately model the majority of the GCM time series considered.
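To see why serial correlation matters when comparing time averages: even under a simple AR(1) process with lag-1 autocorrelation phi, the large-sample variance of an n-day time average is inflated relative to the independent case. The sketch below encodes that standard time series result; it is background for the approach, not a formula quoted from the paper.

```python
def variance_of_time_average(sigma2, n, phi):
    """Approximate variance of the mean of n observations from an
    AR(1) process with marginal variance sigma2 and lag-1
    autocorrelation phi; phi = 0 recovers the i.i.d. result."""
    return (sigma2 / n) * (1 + phi) / (1 - phi)
```

With phi = 0.5 the variance of a seasonal mean is three times the naive sigma2/n, so a significance test that assumed independence would be badly miscalibrated, and higher-order autoregressive structure changes the inflation factor further.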

Richard W. Katz

Abstract

A probabilistic model for the sequence of daily amounts of precipitation is proposed. This model is a generalization of the commonly used Markov chain model for the occurrence of precipitation. Methods are given for computing the distribution of the maximum amount of daily precipitation and the distribution of the total amount of precipitation. The application of this model is illustrated by an example, using State College, Pennsylvania, precipitation data.
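The paper derives the distributions of the maximum and the total analytically; as a hedged sketch, the model class it generalizes (Markov chain occurrence plus daily amounts on wet days) can be simulated and those distributions approximated by Monte Carlo. The transition probabilities and the exponential amount distribution below are invented for illustration, not fitted to the State College data.

```python
import random

random.seed(0)

# Invented parameters: Markov chain for wet/dry occurrence,
# exponential amounts (mean in mm) on wet days.
p_wet = {False: 0.3, True: 0.6}  # P(wet today | yesterday's state)
mean_amount = 5.0

def simulate_season(n_days=90):
    """One season's total and maximum daily precipitation."""
    wet, total, max_daily = False, 0.0, 0.0
    for _ in range(n_days):
        wet = random.random() < p_wet[wet]
        if wet:
            amount = random.expovariate(1.0 / mean_amount)
            total += amount
            max_daily = max(max_daily, amount)
    return total, max_daily

seasons = [simulate_season() for _ in range(2000)]
totals = [t for t, _ in seasons]
maxima = [m for _, m in seasons]
```

The empirical distributions of `totals` and `maxima` then stand in for the exact distributions obtained analytically in the paper.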

Richard W. Katz

Abstract

Richard W. Katz

Abstract

A statistical procedure is described for making inferences about changes in climate variability. The fundamental question of how to define climate variability is first addressed, and a definition of intrinsic climate variability based on a “prewhitening” of the data is advocated. A test for changes in variability that is not sensitive to departures from the assumption of a Gaussian distribution for the data is outlined. In addition to establishing whether observed differences in variability are statistically significant, the procedure provides confidence intervals for the ratio of variability. The technique is applied to time series of daily mean surface air temperature generated by the Oregon State University atmospheric general circulation model. The test application provides estimates of the magnitude of change in variability that the procedure should be likely to detect.
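A minimal sketch of the prewhitening idea follows, using only a first-order fit; the paper's procedure is more general, and its variability test is additionally robust to non-Gaussian data.

```python
def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t + 1] - mean) for t in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def prewhiten(x):
    """Remove first-order persistence so that the residual variance
    reflects 'intrinsic' variability rather than autocorrelation."""
    phi = lag1_autocorr(x)
    return [x[t] - phi * x[t - 1] for t in range(1, len(x))]
```

Comparing the variances of prewhitened series from two climates then targets changes in intrinsic variability, rather than confounding them with changes in persistence.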

Richard W. Katz

Abstract

A compound Poisson process is proposed as a stochastic model for the total economic damage associated with hurricanes. This model consists of two components, one governing the occurrence of events and another specifying the damages associated with individual events. In this way, damage totals are represented as a “random sum,” with variations in total damage being decomposed into two sources, one attributable to variations in the frequency of events and another to variations in the damage from individual events. The model is applied to the economic damage, adjusted for societal vulnerability, caused by North Atlantic hurricanes making landfall in the continental United States. The total number of damaging storms per year is fitted reasonably well by a Poisson distribution, and the monetary damage for individual storms is fitted by the lognormal. The fraction of the variation in annual damage totals associated with fluctuations in the number of storms, although smaller than the corresponding fraction for individual storm damage, is nonnegligible. No evidence is found for a trend in the rate parameter of the Poisson process for storm occurrence, and only weak evidence for a trend in the mean of the log-transformed damage from individual storms. Stronger evidence exists for dependence of these parameters, both occurrence and storm damage, on the state of El Niño.
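The "random sum" structure can be sketched in a few lines; the Poisson rate and lognormal parameters below are invented for illustration and are not the fitted values from the paper.

```python
import math
import random

random.seed(1)

RATE = 1.7            # hypothetical mean number of damaging storms/year
MU, SIGMA = 0.5, 1.2  # hypothetical lognormal parameters for storm damage

def poisson_draw(lam):
    """Poisson random variate via Knuth's product-of-uniforms method."""
    threshold, k, product = math.exp(-lam), 0, 1.0
    while True:
        product *= random.random()
        if product < threshold:
            return k
        k += 1

def annual_damage():
    """Random sum: total damage over a Poisson number of storms."""
    n_storms = poisson_draw(RATE)
    return sum(random.lognormvariate(MU, SIGMA) for _ in range(n_storms))

years = [annual_damage() for _ in range(5000)]
```

For a compound Poisson sum the variance is lambda * E[X^2], which splits into a frequency term lambda * (E[X])^2 and an intensity term lambda * Var(X), mirroring the two sources of variation the abstract describes.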

Richard W. Katz

Abstract

A statistical methodology is presented for making inferences about changes in mean daily precipitation from the results of general circulation model (GCM) climate experiments. A specialized approach is required because precipitation is inherently a discontinuous process. The proposed procedure is based upon a probabilistic model that simultaneously represents both occurrence and intensity components of the precipitation process, with the occurrence process allowed to be correlated in time and the intensities allowed to have a non-Gaussian distribution. In addition to establishing whether the difference between experiment and control daily means is statistically significant, the procedure provides confidence intervals for the ratio of experiment to control median daily precipitation intensities and for the difference between experiment and control probabilities of daily precipitation occurrence. The technique is applied to the comparison of winter and summer precipitation data generated in a control integration of the Oregon State University atmospheric GCM.

Richard W. Katz and Martin Ehrendorfer

Abstract

The economic value of ensemble-based weather or climate forecasts is generally assessed by taking the ensembles at “face value.” That is, the forecast probability is estimated as the relative frequency of occurrence of an event among a limited number of ensemble members. Although the economic value of probability forecasts is grounded in the concept of decision making under uncertainty, the decision maker is, in effect, assumed to ignore the uncertainty in estimating this probability. Nevertheless, many users are certainly aware of the uncertainty inherent in a limited ensemble size. Bayesian prediction is used instead in this paper, incorporating such additional forecast uncertainty into the decision process. The face-value forecast probability estimator corresponds to a Bayesian analysis whose prior distribution on the actual forecast probability would be appropriate only if the ensemble prediction system were believed to produce perfect forecasts. For the cost–loss decision-making model, the economic value of the face-value estimator can be negative for small ensemble sizes from a prediction system with a level of skill that is not sufficiently high. Further, this economic value has the counterintuitive property of sometimes decreasing as the ensemble size increases. For a more plausible form of prior distribution on the actual forecast probability, which could be viewed as a “recalibration” of face-value forecasts, the Bayesian estimator does not exhibit this unexpected behavior. Moreover, it is established that the effects of ensemble size on the reliability, skill, and economic value have been exaggerated by using the face-value, instead of the Bayesian, estimator.
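As a hedged sketch of the contrast between the two estimators: if the k "event" members among n ensemble members are treated as binomial draws from the true forecast probability, a conjugate Beta prior yields the posterior-mean estimate below. The uniform Beta(1, 1) default here is purely illustrative; the paper ties the prior to the prediction system's skill.

```python
def face_value_probability(k, n):
    """Relative frequency of the event among n ensemble members."""
    return k / n

def bayes_probability(k, n, a=1.0, b=1.0):
    """Posterior mean of the forecast probability under a
    conjugate Beta(a, b) prior on that probability."""
    return (k + a) / (n + a + b)
```

With a 5-member ensemble and no member forecasting the event, the face-value estimate is a categorical 0, while the Bayesian estimate 1/7 keeps some probability in reserve for the limited sample size; the face-value estimate is recovered only in the degenerate-prior limit of a perfect ensemble.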

Richard W. Katz and Xiaogu Zheng

Abstract

Stochastic models fit to time series of daily precipitation amount generally ignore any year-to-year (i.e., low frequency) source of random variation, and such models are known to underestimate the interannual variance of monthly or seasonal total precipitation. To explicitly account for this “overdispersion” phenomenon, a mixture model is proposed. A hidden index, taking on one of two possible states, is assumed to exist (perhaps representing different modes of atmospheric circulation). To represent the intermittency of precipitation and the tendency of wet or dry spells to persist, a stochastic model known as a chain-dependent process is applied. The parameters of this stochastic model are permitted to vary conditionally on the hidden index.

Data for one location in California (whose previous study motivated the present approach), as well as for another location in New Zealand, are analyzed. To estimate the parameters of a mixture of two conditional chain-dependent processes by maximum likelihood, the “expectation-maximization algorithm” is employed. It is demonstrated that this approach can either eliminate or greatly reduce the extent of the overdispersion phenomenon. Moreover, an attempt is made to relate the hidden indexes to observed features of atmospheric circulation. This approach to dealing with overdispersion is contrasted with the more prevalent alternative of fitting more complex stochastic models for high-frequency variations to time series of daily precipitation.
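The overdispersion mechanism can be illustrated by simulation (this is not the paper's expectation-maximization fitting, just the generative idea): letting a hidden two-state index switch the parameters of a chain-dependent process between seasons inflates the interannual variance of seasonal totals relative to a fixed-parameter process. All parameter values below are invented.

```python
import random

random.seed(2)

# Two hypothetical regimes for a chain-dependent process:
# (P(wet | dry yesterday), P(wet | wet yesterday), mean wet-day amount).
REGIMES = {
    "dry": (0.2, 0.5, 4.0),
    "wet": (0.4, 0.7, 7.0),
}

def season_total(p01, p11, mean_amount, n_days=90):
    """Seasonal precipitation total from one chain-dependent process."""
    wet, total = False, 0.0
    for _ in range(n_days):
        wet = random.random() < (p11 if wet else p01)
        if wet:
            total += random.expovariate(1.0 / mean_amount)
    return total

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Fixed-parameter process vs. an equal-weight hidden-index mixture.
fixed = [season_total(*REGIMES["dry"]) for _ in range(2000)]
mixed = [season_total(*REGIMES[random.choice(["dry", "wet"])])
         for _ in range(2000)]
```

The between-regime spread in seasonal means dominates the within-regime variability, which is exactly the extra low-frequency variance that a single fixed-parameter daily model cannot reproduce.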
