Search Results
You are looking at 1 - 10 of 33 items for
- Author or Editor: Richard W. Katz
Abstract
Kraus (1977) has demonstrated that subtropical African droughts exhibit statistically significant persistence. It is emphasized, through a further analysis of annual subtropical African rainfall, that the data are highly variable with only a small degree of persistence. These results have significant implications concerning the appropriate characterization of the likelihood of drought for dissemination to decision-makers.
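To make concrete what "statistically significant persistence" means here, the sketch below computes the lag-1 autocorrelation of an annual rainfall series and compares it with the usual large-sample 5% significance bound. The series is synthetic, a hypothetical stand-in rather than Kraus's data or the reanalyzed record:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for a standardized annual rainfall series.
rain = rng.normal(size=60)

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a 1-D series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

r1 = lag1_autocorr(rain)
n = len(rain)
# Approximate two-sided 95% bound under the null of no persistence.
bound = 1.96 / np.sqrt(n)
print(f"r1 = {r1:.3f}; significant at the 5% level: {abs(r1) > bound}")
```

A small but statistically significant r1 is exactly the situation the abstract describes: high year-to-year variability with only a modest degree of persistence.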
Abstract
A probabilistic model for the sequence of daily amounts of precipitation is proposed. This model is a generalization of the commonly used Markov chain model for the occurrence of precipitation. Methods are given for computing the distribution of the maximum amount of daily precipitation and the distribution of the total amount of precipitation. The application of this model is illustrated by an example, using State College, Pennsylvania, precipitation data.
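One common form of such a generalization is a chain-dependent process: a first-order two-state Markov chain drives wet/dry occurrence, and each wet day receives a random amount. The sketch below simulates that process with an exponential intensity and illustrative parameters (not fitted to the State College record), recovering the distributions of the maximum daily amount and the total by Monte Carlo; the paper derives these distributions analytically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not fitted to the State College record):
p01 = 0.30       # P(wet today | dry yesterday)
p11 = 0.60       # P(wet today | wet yesterday)
mean_amt = 5.0   # mean wet-day amount in mm (exponential intensity)
n_days, n_sims = 30, 20_000

totals = np.empty(n_sims)
maxima = np.empty(n_sims)
for i in range(n_sims):
    wet = False  # assume the period starts after a dry day
    amounts = np.zeros(n_days)
    for t in range(n_days):
        wet = rng.random() < (p11 if wet else p01)
        if wet:
            amounts[t] = rng.exponential(mean_amt)
    totals[i] = amounts.sum()
    maxima[i] = amounts.max()

print(f"P(30-day total > 100 mm) ~ {np.mean(totals > 100):.3f}")
print(f"P(max daily amount > 20 mm) ~ {np.mean(maxima > 20):.3f}")
```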
Abstract
A compound Poisson process is proposed as a stochastic model for the total economic damage associated with hurricanes. This model consists of two components, one governing the occurrence of events and another specifying the damages associated with individual events. In this way, damage totals are represented as a “random sum,” with variations in total damage being decomposed into two sources, one attributable to variations in the frequency of events and another to variations in the damage from individual events. The model is applied to the economic damage, adjusted for societal vulnerability, caused by North Atlantic hurricanes making landfall in the continental United States. The total number of damaging storms per year is fitted reasonably well by a Poisson distribution, and the monetary damage for individual storms is fitted by the lognormal. The fraction of the variation in annual damage totals associated with fluctuations in the number of storms, although smaller than the corresponding fraction for individual storm damage, is nonnegligible. No evidence is present for a trend in the rate parameter of the Poisson process for the occurrence of storms, and only weak evidence for a trend in the mean of the log-transformed damage from individual storms is present. Stronger evidence exists for dependence of these parameters, both occurrence and storm damage, on the state of El Niño.
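The "random sum" decomposition the abstract refers to can be written down directly: for total damage S = X_1 + ... + X_N with N ~ Poisson(lambda), Var(S) = E[N]Var(X) + Var(N)E[X]^2, and with Poisson counts E[N] = Var(N) = lambda. The sketch below evaluates both components for illustrative lognormal parameters (not the fitted values from the paper) and checks the decomposition by simulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (not the fitted values from the paper):
lam = 1.7             # Poisson rate: damaging storms per year
mu, sigma = 2.0, 1.4  # lognormal parameters for per-storm damage

mean_x = np.exp(mu + sigma**2 / 2)
var_x = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)

# Random-sum decomposition: Var(S) = E[N]Var(X) + Var(N)E[X]^2,
# with E[N] = Var(N) = lam for Poisson counts.
var_from_sizes = lam * var_x
var_from_counts = lam * mean_x**2
frac_counts = var_from_counts / (var_from_sizes + var_from_counts)
print(f"fraction of Var(S) from storm-count fluctuations: {frac_counts:.3f}")

# Monte Carlo check of the decomposition.
n_years = 100_000
counts = rng.poisson(lam, n_years)
totals = np.array([rng.lognormal(mu, sigma, k).sum() for k in counts])
print(f"analytic Var(S) = {var_from_sizes + var_from_counts:.0f}, "
      f"simulated = {totals.var():.0f}")
```

With these illustrative parameters the count-driven share is the smaller of the two components, qualitatively matching the abstract's description of a nonnegligible but secondary contribution from storm-count fluctuations.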
Abstract
A statistical methodology is presented for making inferences about changes in mean daily precipitation from the results of general circulation model (GCM) climate experiments. A specialized approach is required because precipitation is inherently a discontinuous process. The proposed procedure is based upon a probabilistic model that simultaneously represents both occurrence and intensity components of the precipitation process, with the occurrence process allowed to be correlated in time and the intensities allowed to have a non-Gaussian distribution. In addition to establishing whether the difference between experiment and control daily means is statistically significant, the procedure provides confidence intervals for the ratio of experiment to control median daily precipitation intensities and for the difference between experiment and control probabilities of daily precipitation occurrence. The technique is applied to the comparison of winter and summer precipitation data generated in a control integration of the Oregon State University atmospheric GCM.
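A minimal sketch of the two kinds of interval the abstract mentions, under simplifying assumptions: independent days (the paper's occurrence model allows autocorrelation, which inflates these standard errors) and lognormal intensities, for which the median is exp(mean of the logs), so a CI for the ratio of medians follows from one for the difference of log-means. The daily series are synthetic stand-ins for control and experiment output.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for control and experiment daily precipitation.
N = 3600
control = np.where(rng.random(N) < 0.35, rng.lognormal(0.8, 1.0, N), 0.0)
experim = np.where(rng.random(N) < 0.40, rng.lognormal(1.0, 1.0, N), 0.0)

def summarize(daily):
    """Occurrence frequency and log-intensity moments of a daily series."""
    wet = daily > 0
    logs = np.log(daily[wet])
    return wet.mean(), wet.sum(), logs.mean(), logs.var(ddof=1)

p_c, n_c, m_c, v_c = summarize(control)
p_e, n_e, m_e, v_e = summarize(experim)

# 95% CI for the difference of occurrence probabilities (independent days
# assumed here; autocorrelated occurrence would inflate this standard error).
se_p = np.sqrt(p_c * (1 - p_c) / N + p_e * (1 - p_e) / N)
print(f"dP = {p_e - p_c:.3f} +/- {1.96 * se_p:.3f}")

# For lognormal intensities the median is exp(mean log), so a CI for the
# ratio of medians comes from a CI for the difference of log-means.
se_m = np.sqrt(v_c / n_c + v_e / n_e)
dm = m_e - m_c
print(f"median ratio = {np.exp(dm):.2f}, "
      f"95% CI ({np.exp(dm - 1.96 * se_m):.2f}, {np.exp(dm + 1.96 * se_m):.2f})")
```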
Abstract
A procedure for making statistical inferences about differences between population means from the output of general circulation model (GCM) climate experiments is presented. A parametric time series modeling approach is taken, yielding a potentially more powerful technique for detecting climatic change than the simpler schemes used heretofore. The application of this procedure is demonstrated through the use of GCM control data to estimate the variance of winter and summer time averages of daily mean surface air temperature. The test application provides estimates of the magnitude of climatic change that the procedure should be able to detect. A related result of the analysis is that autoregressive processes of higher than first order are needed to adequately model the majority of the GCM time series considered.
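The key quantity in such a procedure is the variance of a time average of an autocorrelated series. Under a fitted AR(p) model with coefficients phi_1, ..., phi_p and innovation variance sigma^2, a large-sample approximation is Var(mean) ~ sigma^2 / [n (1 - sum(phi_i))^2]. The sketch below uses a hand-rolled Yule-Walker fit on synthetic AR(2) series standing in for GCM temperature output; the z-test form is an assumption for illustration, not necessarily the paper's exact statistic.

```python
import numpy as np

def ar_fit_yw(x, p):
    """Yule-Walker AR(p) fit: returns coefficients and innovation variance."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    g = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[g[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, g[1:])
    return phi, g[0] - phi @ g[1:]

def var_of_mean(x, p):
    """Large-sample variance of the time average under a fitted AR(p)."""
    phi, sigma2 = ar_fit_yw(x, p)
    return sigma2 / (len(x) * (1 - phi.sum()) ** 2)

# Synthetic AR(2) series standing in for daily-mean temperatures from a
# control run and an experiment run with a shifted mean.
rng = np.random.default_rng(4)
def ar2_sim(n, shift=0.0):
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = 0.7 * x[t - 1] - 0.1 * x[t - 2] + rng.normal()
    return x + shift

x_ctl, x_exp = ar2_sim(900), ar2_sim(900, shift=0.5)
se = np.sqrt(var_of_mean(x_ctl, 2) + var_of_mean(x_exp, 2))
z = (x_exp.mean() - x_ctl.mean()) / se
print(f"z = {z:.2f}  (|z| > 1.96 suggests a detectable difference in means)")
```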
Abstract
A dynamic decision-making problem is considered involving the use of information about the autocorrelation of a climate variable. Specifically, an infinite horizon, discounted version of the dynamic cost-loss ratio model is treated, in which only two states of weather (“adverse” or “not adverse”) are possible and only two actions are permitted (“protect” or “do not protect”). To account for the temporal dependence of the sequence of states of the occurrence (or nonoccurrence) of adverse weather, a Markov chain model is employed. It is shown that knowledge of this autocorrelation has potential economic value to a decision maker, even without any genuine forecasts being available. Numerical examples are presented to demonstrate that a decision maker who erroneously follows a suboptimal strategy based on the belief that the climate variable is temporally independent could incur unnecessary expense. This approach also provides a natural framework for extension to the situation in which forecasts are available for an autocorrelated climate variable.
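The model described is a two-state Markov decision process, which value iteration solves directly. The sketch below uses illustrative transition probabilities, protection cost C, loss L, and discount factor beta (none taken from the paper); the point it demonstrates is that the optimal action depends on the most recent state of the weather, which is precisely the value of knowing the autocorrelation.

```python
import numpy as np

# Illustrative inputs (not taken from the paper): a two-state Markov chain
# for daily weather (state 0 = not adverse, 1 = adverse), protection cost C,
# loss L if adverse weather occurs unprotected, discount factor beta.
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])       # P[s, s'] = P(tomorrow s' | today s)
C, L, beta = 1.0, 4.0, 0.95
loss = L * np.array([0.0, 1.0])  # loss incurred in each unprotected state

# Value iteration for the infinite-horizon expected discounted cost.
V = np.zeros(2)
for _ in range(2000):
    protect = C + beta * (P @ V)         # pay C, weather evolves regardless
    no_protect = P @ (loss + beta * V)   # risk the loss on an adverse day
    V_new = np.minimum(protect, no_protect)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

protect = C + beta * (P @ V)
no_protect = P @ (loss + beta * V)
for s, name in enumerate(["after a non-adverse day", "after an adverse day"]):
    act = "protect" if protect[s] <= no_protect[s] else "do not protect"
    print(f"{name}: {act} (V = {V[s]:.2f})")
```

With these numbers the stationary probability of adverse weather is 1/3, so a decision maker who ignores the autocorrelation compares C = 1 against an expected loss of about 4/3 and protects every day, whereas the optimal state-dependent policy protects only after an adverse day.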
Abstract
A statistical procedure is described for making inferences about changes in climate variability. The fundamental question of how to define climate variability is first addressed, and a definition of intrinsic climate variability based on a “prewhitening” of the data is advocated. A test for changes in variability that is not sensitive to departures from the assumption of a Gaussian distribution for the data is outlined. In addition to establishing whether observed differences in variability are statistically significant, the procedure provides confidence intervals for the ratio of variability. The technique is applied to time series of daily mean surface air temperature generated by the Oregon State University atmospheric general circulation model. The test application provides estimates of the magnitude of change in variability that the procedure should be likely to detect.
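A sketch of the two ingredients: a prewhitening step (here simply AR(1) residuals, a simplification of fitting a full time series model) and a variance-ratio interval whose standard error uses the sample kurtosis, so that it does not presume Gaussian data. The series and parameters are synthetic, and the paper's exact test statistic may differ from this kurtosis-adjusted form.

```python
import numpy as np

def prewhiten(x):
    """AR(1) residuals: a simple proxy for 'intrinsic' variability."""
    x = np.asarray(x, float) - np.mean(x)
    phi = np.dot(x[:-1], x[1:]) / np.dot(x, x)
    return x[1:] - phi * x[:-1]

def sd_ratio_ci(a, b, z=1.96):
    """Kurtosis-adjusted CI for sd(a)/sd(b), robust to non-Gaussian data.

    Uses Var(log s^2) ~ (kappa - 1)/n, where kappa is the standardized
    fourth moment (kappa = 3 for a Gaussian).
    """
    def log_var_and_se2(x):
        n, v = len(x), np.var(x, ddof=1)
        kappa = np.mean((x - x.mean()) ** 4) / v**2
        return np.log(v), (kappa - 1) / n
    la, sa = log_var_and_se2(a)
    lb, sb = log_var_and_se2(b)
    d, se = la - lb, np.sqrt(sa + sb)
    return np.exp(0.5 * (d - z * se)), np.exp(0.5 * (d + z * se))

# Synthetic control/experiment daily temperature series: AR(1), with the
# experiment's innovation standard deviation inflated by 25%.
rng = np.random.default_rng(5)
def ar1_sim(n, phi=0.6, scale=1.0):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(scale=scale)
    return x

wa, wb = prewhiten(ar1_sim(500, scale=1.25)), prewhiten(ar1_sim(500))
lo, hi = sd_ratio_ci(wa, wb)
print(f"intrinsic sd ratio 95% CI: ({lo:.2f}, {hi:.2f})")
```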
Abstract
Statistical problems that may be encountered in fitting autoregressive-moving average (ARMA) processes to meteorological time series are described. Techniques that lead to an increased likelihood of choosing the most appropriate ARMA process to model the data at hand are emphasized. One specific meteorological application of ARMA processes, the modeling of Palmer Drought Index time series for climatic divisions of the United States, is considered in detail. It is shown that low-order purely autoregressive processes adequately fit these data.
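Order selection is the central practical problem in such fitting. One standard device is to minimize AIC over candidate AR orders, each fitted by Yule-Walker; below is a self-contained sketch on a synthetic AR(2) series standing in for a drought-index record (the paper's selection criteria may differ).

```python
import numpy as np

def ar_innovation_var(x, p):
    """Innovation variance of a Yule-Walker AR(p) fit (p = 0: sample variance)."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    g = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    if p == 0:
        return g[0]
    R = np.array([[g[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, g[1:])
    return g[0] - phi @ g[1:]

def select_ar_order(x, max_p=6):
    """Choose the AR order minimizing AIC = n log(sigma^2) + 2p."""
    n = len(x)
    aics = [n * np.log(ar_innovation_var(x, p)) + 2 * p for p in range(max_p + 1)]
    return int(np.argmin(aics))

# Synthetic monthly drought-index-like series generated from an AR(2).
rng = np.random.default_rng(6)
x = np.zeros(600)
for t in range(2, 600):
    x[t] = 0.9 * x[t - 1] - 0.2 * x[t - 2] + rng.normal()
print("selected AR order:", select_ar_order(x))
```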
Abstract
Many climatic applications, including detection of climate change, require temperature time series that are free from discontinuities introduced by nonclimatic events such as relocation of weather stations. Although much attention has been devoted to discontinuities in the mean, possible changes in the variance have not been considered. A method is proposed to test and possibly adjust for nonclimatic inhomogeneities in the variance of temperature time series. The method is somewhat analogous to that developed by Karl and Williams to adjust for nonclimatic inhomogeneities in the mean. It uses the nonparametric bootstrap technique to compute confidence intervals for the discontinuity in variance. The method is tested on 1901–88 summer and winter mean maximum temperature data from 21 weather stations in the midwestern United States. The reasonableness, reliability, and accuracy of the estimated changes in variance are evaluated.
The bootstrap technique is found to be a valuable tool for obtaining confidence limits on the proposed variance adjustment. Inhomogeneities in variance are found to be more frequent than would be expected by chance in the summer temperature data, indicating that variance inhomogeneity is indeed a problem. Precision of the estimates in the test data indicates that changes of about 25%–30% in standard deviation can be detected if sufficient data are available. However, estimates of the changes in the standard deviation may be unreliable when less than 10 years of data are available before or after a potential discontinuity. This statistical test can be a useful tool for screening out stations that have unacceptably large discontinuities in variance.
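A minimal sketch of the bootstrap step for a single candidate discontinuity: resample the segments before and after the changepoint and form a percentile interval for the ratio of standard deviations. The series, segment lengths, and the 30% inflation are hypothetical, and the paper's procedure (analogous to Karl and Williams's mean adjustment) involves comparisons with neighboring stations that this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(7)

def sd_change_ci(before, after, n_boot=5000, alpha=0.05):
    """Nonparametric-bootstrap percentile CI for sd(after)/sd(before)."""
    ratios = np.empty(n_boot)
    for i in range(n_boot):
        b = rng.choice(before, size=len(before), replace=True)
        a = rng.choice(after, size=len(after), replace=True)
        ratios[i] = a.std(ddof=1) / b.std(ddof=1)
    return np.quantile(ratios, [alpha / 2, 1 - alpha / 2])

# Hypothetical seasonal-mean maximum temperatures around a station move:
# 40 years before the potential discontinuity, 40 after, sd inflated 30%.
before = rng.normal(30.0, 1.0, 40)
after = rng.normal(30.0, 1.3, 40)

lo, hi = sd_change_ci(before, after)
print(f"sd ratio = {after.std(ddof=1) / before.std(ddof=1):.2f}, "
      f"95% CI ({lo:.2f}, {hi:.2f})")
# If the CI excludes 1, the station could be flagged (or the later segment
# rescaled by the estimated ratio, analogous to a mean adjustment).
```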