Search Results
You are looking at 1–10 of 27 items for:
- Author or Editor: Richard W. Katz
- Article
- Refine by Access: All Content
Abstract
A probabilistic model for the sequence of daily amounts of precipitation is proposed. This model is a generalization of the commonly used Markov chain model for the occurrence of precipitation. Methods are given for computing the distribution of the maximum amount of daily precipitation and the distribution of the total amount of precipitation. The application of this model is illustrated by an example, using State College, Pennsylvania, precipitation data.
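As a rough illustration of this class of model (a minimal sketch, not the paper's fitted model): a first-order Markov chain governs wet/dry occurrence, wet-day amounts are drawn from an assumed exponential distribution, and the distributions of the seasonal maximum and total are estimated by Monte Carlo. All parameter values below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): P(wet | dry), P(wet | wet),
# and the mean of an assumed exponential wet-day amount distribution.
p01, p11 = 0.3, 0.6
mean_amount = 5.0   # mm
n_days = 90         # one season
n_sims = 10_000

totals = np.empty(n_sims)
maxima = np.empty(n_sims)
for s in range(n_sims):
    wet = rng.random() < p01            # initial day, starting from "dry"
    amounts = np.zeros(n_days)
    for t in range(n_days):
        wet = rng.random() < (p11 if wet else p01)
        if wet:
            amounts[t] = rng.exponential(mean_amount)
    totals[s] = amounts.sum()
    maxima[s] = amounts.max()

print("P(total > 300 mm) ~", (totals > 300).mean())
print("P(max daily > 30 mm) ~", (maxima > 30).mean())
```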
Abstract
A compound Poisson process is proposed as a stochastic model for the total economic damage associated with hurricanes. This model consists of two components, one governing the occurrence of events and another specifying the damages associated with individual events. In this way, damage totals are represented as a “random sum,” with variations in total damage being decomposed into two sources, one attributable to variations in the frequency of events and another to variations in the damage from individual events. The model is applied to the economic damage, adjusted for societal vulnerability, caused by North Atlantic hurricanes making landfall in the continental United States. The total number of damaging storms per year is fitted reasonably well by a Poisson distribution, and the monetary damage for individual storms is fitted by the lognormal. The fraction of the variation in annual damage totals associated with fluctuations in the number of storms, although smaller than the corresponding fraction for individual storm damage, is nonnegligible. No evidence is found for a trend in the rate parameter of the Poisson process for the occurrence of storms, and only weak evidence for a trend in the mean of the log-transformed damage from individual storms. Stronger evidence exists for dependence of these parameters, both occurrence and storm damage, on the state of El Niño.
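The “random sum” structure is easy to state concretely. Below is a minimal simulation sketch, assuming a Poisson annual storm count and lognormal per-storm damages with illustrative (not fitted) parameters; it also checks the exact compound-Poisson variance decomposition Var(S) = E[N]·Var(X) + Var(N)·E[X]², whose second term is the part attributable to fluctuations in storm frequency.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not fitted values from the paper).
lam = 1.7             # mean number of damaging storms per year (Poisson)
mu, sigma = 2.0, 1.5  # parameters of the lognormal per-storm damage

n_years = 100_000
counts = rng.poisson(lam, n_years)
totals = np.array([rng.lognormal(mu, sigma, n).sum() for n in counts])

# Exact variance decomposition for a compound Poisson "random sum":
# Var(S) = E[N] Var(X) + Var(N) E[X]^2, with Var(N) = E[N] = lam.
EX = np.exp(mu + sigma**2 / 2)
VarX = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
var_from_counts = lam * EX**2   # fluctuations in storm frequency
var_from_damage = lam * VarX    # fluctuations in per-storm damage

print("simulated Var(S):  ", totals.var())
print("theoretical Var(S):", var_from_counts + var_from_damage)
print("fraction from counts:",
      var_from_counts / (var_from_counts + var_from_damage))
```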
Abstract
A statistical methodology is presented for making inferences about changes in mean daily precipitation from the results of general circulation model (GCM) climate experiments. A specialized approach is required because precipitation is inherently a discontinuous process. The proposed procedure is based upon a probabilistic model that simultaneously represents both occurrence and intensity components of the precipitation process, with the occurrence process allowed to be correlated in time and the intensities allowed to have a non-Gaussian distribution. In addition to establishing whether the difference between experiment and control daily means is statistically significant, the procedure provides confidence intervals for the ratio of experiment to control median daily precipitation intensities and for the difference between experiment and control probabilities of daily precipitation occurrence. The technique is applied to the comparison of winter and summer precipitation data generated in a control integration of the Oregon State University atmospheric GCM.
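A minimal sketch of one ingredient of such a procedure, assuming a first-order Markov chain for occurrence: the standard error of the wet-day relative frequency is inflated by the persistence factor (1+d)/(1−d), with d = p11 − p01, before forming a confidence interval for the difference of occurrence probabilities. The intensity comparison would proceed analogously on log-transformed wet-day amounts. The data and parameters below are placeholders.

```python
import numpy as np

def occurrence_stats(wet):
    """Relative frequency of wet days and its approximate standard error,
    inflated for first-order Markov persistence d = p11 - p01."""
    wet = np.asarray(wet, dtype=int)
    p = wet.mean()
    p11 = wet[1:][wet[:-1] == 1].mean()
    p01 = wet[1:][wet[:-1] == 0].mean()
    d = p11 - p01
    se = np.sqrt(p * (1 - p) * (1 + d) / ((1 - d) * len(wet)))
    return p, se

# Difference in occurrence probability, control vs. experiment,
# with an approximate 95% confidence interval.
rng = np.random.default_rng(2)
ctrl = rng.random(900) < 0.35   # placeholder series; real use: 0/1 wet-day data
expt = rng.random(900) < 0.40
(p1, se1), (p2, se2) = occurrence_stats(ctrl), occurrence_stats(expt)
diff, se = p2 - p1, np.hypot(se1, se2)
print(f"diff = {diff:.3f} +/- {1.96 * se:.3f}")
```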
Abstract
A procedure for making statistical inferences about differences between population means from the output of general circulation model (GCM) climate experiments is presented. A parametric time series modeling approach is taken, yielding a potentially more powerful technique for detecting climatic change than the simpler schemes used heretofore. The application of this procedure is demonstrated through the use of GCM control data to estimate the variance of winter and summer time averages of daily mean surface air temperature. The test application provides estimates of the magnitude of climatic change that the procedure should be able to detect. A related result of the analysis is that autoregressive processes of higher than first order are needed to adequately model the majority of the GCM time series considered.
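One step such a parametric approach enables is computing the variance of a time average directly from fitted AR coefficients: for large n, Var(x̄) ≈ σ²/(n(1 − Σφ)²), where σ² is the innovation variance. A minimal sketch with placeholder values:

```python
import numpy as np

def var_of_time_average(phi, sigma2, n):
    """Large-sample variance of the mean of n consecutive values of an
    AR(p) process with coefficients phi and innovation variance sigma2:
    Var(xbar) ~ sigma2 / (n * (1 - sum(phi))**2)."""
    return sigma2 / (n * (1.0 - np.sum(phi)) ** 2)

# Illustrative AR(2) fit for daily mean temperature (placeholder values,
# not estimates from the paper).
phi = np.array([0.65, 0.15])
sigma2 = 1.8   # innovation variance, (deg C)^2
n = 90         # days in a seasonal average
print("Var(seasonal mean):", var_of_time_average(phi, sigma2, n))
```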
Abstract
A dynamic decision-making problem is considered involving the use of information about the autocorrelation of a climate variable. Specifically, an infinite horizon, discounted version of the dynamic cost-loss ratio model is treated, in which only two states of weather (“adverse” or “not adverse”) are possible and only two actions are permitted (“protect” or “do not protect”). To account for the temporal dependence of the sequence of states of the occurrence (or nonoccurrence) of adverse weather, a Markov chain model is employed. It is shown that knowledge of this autocorrelation has potential economic value to a decision maker, even without any genuine forecasts being available. Numerical examples are presented to demonstrate that a decision maker who erroneously follows a suboptimal strategy based on the belief that the climate variable is temporally independent could incur unnecessary expense. This approach also provides a natural framework for extension to the situation in which forecasts are available for an autocorrelated climate variable.
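A minimal sketch of this two-state, two-action discounted model, with illustrative parameters: because the weather transition does not depend on the action, the optimal rule protects exactly when C < P(adverse tomorrow | today)·L, and the expected discounted cost of any stationary rule solves a 2×2 linear system. Comparing the optimal rule with a “naive” rule based on the stationary (climatological) probability exhibits the cost of ignoring autocorrelation.

```python
import numpy as np

# States: 0 = "not adverse", 1 = "adverse" (today's weather). Each day the
# decision maker pays C to protect, or pays L if tomorrow turns out adverse.
# Illustrative parameters, not values from the paper.
C, L, beta = 0.25, 1.0, 0.95
P = np.array([[0.8, 0.2],    # transition probabilities from "not adverse"
              [0.4, 0.6]])   # transition probabilities from "adverse"

def expected_cost(protect_rule):
    """Discounted expected cost V(s) of a stationary rule, protect_rule[s]
    in {0, 1}, solved from the linear Bellman equations (I - beta*P)V = c."""
    c = np.where(protect_rule, C, P[:, 1] * L)  # per-period expected cost
    return np.linalg.solve(np.eye(2) - beta * P, c)

# Optimal (myopic) rule: protect when C < P(adverse tomorrow | today) * L.
optimal = (C < P[:, 1] * L).astype(int)

# Suboptimal rule that ignores autocorrelation and uses the stationary
# probability of adverse weather in both states.
pi_adverse = P[0, 1] / (P[0, 1] + 1 - P[1, 1])
naive = np.full(2, int(C < pi_adverse * L))

print("optimal cost by state:", expected_cost(optimal))
print("naive cost by state:  ", expected_cost(naive))
```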
Abstract
A statistical procedure is described for making inferences about changes in climate variability. The fundamental question of how to define climate variability is first addressed, and a definition of intrinsic climate variability based on a “prewhitening” of the data is advocated. A test for changes in variability that is not sensitive to departures from the assumption of a Gaussian distribution for the data is outlined. In addition to establishing whether observed differences in variability are statistically significant, the procedure provides confidence intervals for the ratio of variability. The technique is applied to time series of daily mean surface air temperature generated by the Oregon State University atmospheric general circulation model. The test application provides estimates of the magnitude of change in variability that the procedure should be likely to detect.
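A minimal sketch of the two ingredients, under simplifying assumptions not taken from the paper: prewhitening via a fitted AR(1), and a kurtosis-adjusted (hence non-Gaussian-tolerant) confidence interval for the log variance ratio, using Var(log s²) ≈ (κ − 1)/n.

```python
import numpy as np

def prewhiten(x):
    """Remove the fitted AR(1) dependence; the residuals serve as a simple
    stand-in for the 'intrinsic' variability discussed in the abstract."""
    x = np.asarray(x, float) - np.mean(x)
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    return x[1:] - phi * x[:-1]

def log_var_ratio_ci(a, b, z=1.96):
    """Kurtosis-adjusted CI for log(var(a)/var(b)); does not assume that
    a and b are Gaussian."""
    terms = []
    for x in (a, b):
        n, v = len(x), np.var(x)
        kurt = np.mean((x - np.mean(x)) ** 4) / v**2
        terms.append((np.log(v), (kurt - 1) / n))
    diff = terms[0][0] - terms[1][0]
    se = np.sqrt(terms[0][1] + terms[1][1])
    return diff - z * se, diff + z * se

# Ratio of variability (as a variance ratio) with a 95% interval:
rng = np.random.default_rng(3)
ctrl = prewhiten(rng.standard_normal(1000))         # placeholder series
expt = prewhiten(1.2 * rng.standard_normal(1000))
lo, hi = log_var_ratio_ci(expt, ctrl)
print("variance ratio CI:", np.exp(lo), np.exp(hi))
```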
Abstract
Statistical problems that may be encountered in fitting autoregressive-moving average (ARMA) processes to meteorological time series are described. Techniques that lead to an increased likelihood of choosing the most appropriate ARMA process to model the data at hand are emphasized. One specific meteorological application of ARMA processes, the modeling of Palmer Drought Index time series for climatic divisions of the United States, is considered in detail. It is shown that low-order purely autoregressive processes adequately fit these data.
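A minimal sketch of one standard safeguard in this spirit (not necessarily the specific techniques emphasized in the paper): fit purely autoregressive AR(p) candidates by least squares and select the order by AIC.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns the coefficients and
    the residual (innovation) variance."""
    x = np.asarray(x, float) - np.mean(x)
    if p == 0:
        return np.array([]), np.var(x)
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    return phi, np.mean((y - X @ phi) ** 2)

def select_order(x, pmax=6):
    """Choose the AR order minimizing AIC = n*log(sigma2) + 2*(p+1)."""
    n = len(x)
    aics = [n * np.log(fit_ar(x, p)[1]) + 2 * (p + 1)
            for p in range(pmax + 1)]
    return int(np.argmin(aics))

# Usage on a simulated AR(1) series with coefficient 0.7:
rng = np.random.default_rng(4)
e = rng.standard_normal(500)
x = np.empty(500); x[0] = e[0]
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + e[t]
print("selected order:", select_order(x))
```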
Abstract
Many climatic applications, including detection of climate change, require temperature time series that are free from discontinuities introduced by nonclimatic events such as relocation of weather stations. Although much attention has been devoted to discontinuities in the mean, possible changes in the variance have not been considered. A method is proposed to test and possibly adjust for nonclimatic inhomogeneities in the variance of temperature time series. The method is somewhat analogous to that developed by Karl and Williams to adjust for nonclimatic inhomogeneities in the mean. It uses the nonparametric bootstrap technique to compute confidence intervals for the discontinuity in variance. The method is tested on 1901–88 summer and winter mean maximum temperature data from 21 weather stations in the midwestern United States. The reasonableness, reliability, and accuracy of the estimated changes in variance are evaluated.
The bootstrap technique is found to be a valuable tool for obtaining confidence limits on the proposed variance adjustment. Inhomogeneities in variance are found to be more frequent than would be expected by chance in the summer temperature data, indicating that variance inhomogeneity is indeed a problem. Precision of the estimates in the test data indicates that changes of about 25%–30% in standard deviation can be detected if sufficient data are available. However, estimates of the changes in the standard deviation may be unreliable when less than 10 years of data are available before or after a potential discontinuity. This statistical test can be a useful tool for screening out stations that have unacceptably large discontinuities in variance.
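A minimal sketch of the bootstrap ingredient, assuming the location of the candidate discontinuity is given: a nonparametric percentile confidence interval for the ratio of standard deviations after versus before the break.

```python
import numpy as np

def sd_ratio_bootstrap_ci(before, after, n_boot=10_000, alpha=0.05, seed=0):
    """Nonparametric bootstrap percentile CI for the ratio of standard
    deviations after/before a candidate discontinuity."""
    rng = np.random.default_rng(seed)
    before = np.asarray(before, float)
    after = np.asarray(after, float)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        rb = rng.choice(before, size=len(before), replace=True)
        ra = rng.choice(after, size=len(after), replace=True)
        ratios[b] = ra.std(ddof=1) / rb.std(ddof=1)
    return tuple(np.quantile(ratios, [alpha / 2, 1 - alpha / 2]))

# If the interval excludes 1, the change in variance is judged significant,
# and the earlier segment could be rescaled by the estimated ratio.
```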
Abstract
One particular index has been commonly used to monitor precipitation in drought-prone regions such as the West African Sahel and the Brazilian Northeast. The construction of this index involves standardizing the annual total rainfall for an individual station and then averaging these standardized rainfall deviations over all the stations within the region to obtain a single value. Some theoretical properties of this “Standardized Anomaly Index” are derived. By studying its behavior when applied to actual rainfall data in the Sahel, certain aspects of the practical utility of the index are also considered. For instance, the claim that the Sahel has recently experienced a long run of relatively dry years does not appear to be sensitive to the exact form of index that is employed. On the other hand, it is shown by means of principal components analysis that no single index can “explain” a large portion of the variation in Sahelian rainfall, implying that much potentially useful information is lost when one relies only on a single index. The implications of these results for assessments of the impact of drought on society in arid and semiarid regions are discussed.
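The index itself is simple to write down. Below is a minimal sketch: standardize each station's annual totals, then average across stations. The companion function computes the fraction of variance “explained” by the leading principal component, the quantity underlying the point that no single index captures most of the variation. The array layout is an assumption.

```python
import numpy as np

def standardized_anomaly_index(rain):
    """rain: array of shape (n_years, n_stations) of annual totals.
    Standardize each station's series, then average across stations."""
    z = (rain - rain.mean(axis=0)) / rain.std(axis=0, ddof=1)
    return z.mean(axis=1)

def leading_pc_fraction(rain):
    """Fraction of total variance 'explained' by the first principal
    component of the station correlation matrix."""
    z = (rain - rain.mean(axis=0)) / rain.std(axis=0, ddof=1)
    eig = np.linalg.eigvalsh(np.corrcoef(z, rowvar=False))
    return eig[-1] / eig.sum()
```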
Abstract
A relative measure of actual, rather than potential, predictability of a meteorological variable on the basis of its past history alone is proposed. This measure is predicated on the existence of a parametric time series model to represent the meteorological variable. Among other things, it provides an explicit representation of forecasting capability in terms of the individual parameters of such time series models.
As an application, the extent to which the Southern Oscillation (SO), a major component of climate, can be predicted on a monthly as well as a seasonal time scale on the basis of its past history alone is determined. In particular, on a monthly time scale up to about 44% of the variation in the SO can be predicted one month ahead (zero months lead time) and about 35% two months ahead (one month lead time), or on a seasonal time scale about 53% one season ahead (zero seasons lead time) and about 31% two seasons ahead (one season lead time). In general, the degree of predictability naturally decays as the lead time increases, with essentially no predictability on a monthly time scale beyond ten months (nine months lead time) or on a seasonal time scale beyond three seasons (two seasons lead time).
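A minimal sketch of such a measure for a fitted AR(p) model: the k-step-ahead prediction error variance is σ²Σ_{j<k}ψ_j² in the moving-average representation, so the predictable fraction is one minus its ratio to the process variance. The AR(1) below, with lag-one correlation 0.66 (a placeholder, not the paper's fitted SO model), gives about 44% one step ahead.

```python
import numpy as np

def predictability(phi, sigma2, max_lead=12):
    """Fraction of variance of an AR(p) process predictable k steps ahead:
    R2(k) = 1 - sigma2 * sum_{j<k} psi_j**2 / Var(X), with psi the weights
    of the moving-average representation. Element k-1 of the returned
    array corresponds to a k-step-ahead prediction."""
    p = len(phi)
    psi = [1.0]
    for j in range(1, max_lead):
        psi.append(sum(phi[i] * psi[j - 1 - i] for i in range(min(p, j))))
    psi = np.array(psi)
    var_x = sigma2 * np.sum(psi**2)    # truncated MA approximation
    err = sigma2 * np.cumsum(psi**2)   # k-step prediction error variance
    return 1.0 - err / var_x

print(np.round(predictability(np.array([0.66]), 1.0, 24), 2))
```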