Search Results
Showing 11–20 of 33 items for
- Author or Editor: Richard W. Katz
- Refine by Access: All Content
Abstract
One particular index has been commonly used to monitor precipitation in drought-prone regions such as the West African Sahel and the Brazilian Northeast. The construction of this index involves standardizing the annual total rainfall for an individual station and then averaging these standardized rainfall deviations over all the stations within the region to obtain a single value. Some theoretical properties of this “Standardized Anomaly Index” are derived. By studying its behavior when applied to actual rainfall data in the Sahel, certain aspects of the practical utility of the index are also considered. For instance, the claim that the Sahel has recently experienced a long run of relatively dry years does not appear to be sensitive to the exact form of index that is employed. On the other hand, it is shown by means of principal components analysis that no single index can “explain” a large portion of the variation in Sahelian rainfall, implying that much information, at least potentially useful, is lost when one relies only on a single index. The implications of these results for assessments of the impact of drought on society in arid and semiarid regions are discussed.
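The construction described above (per-station standardization followed by a regional average) can be sketched in a few lines; the station names and rainfall totals below are hypothetical, purely for illustration:

```python
from statistics import mean, stdev

def standardized_anomaly_index(rainfall):
    """Standardized Anomaly Index: standardize each station's annual totals
    (zero mean, unit variance), then average the standardized deviations
    across stations, year by year, to obtain a single regional value."""
    series_list = list(rainfall.values())
    n_years = len(series_list[0])
    standardized = []
    for series in series_list:
        m, s = mean(series), stdev(series)
        standardized.append([(x - m) / s for x in series])
    return [mean(z[t] for z in standardized) for t in range(n_years)]

# Hypothetical three-station region with four years of annual totals (mm).
rain = {"station_a": [500.0, 450.0, 400.0, 350.0],
        "station_b": [300.0, 280.0, 260.0, 240.0],
        "station_c": [700.0, 650.0, 640.0, 600.0]}
sai = standardized_anomaly_index(rain)  # one index value per year
```

Because each station's series is standardized before averaging, the index values sum to approximately zero over the record, and a run of negative values marks a relatively dry spell.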
Abstract
A relative measure of actual, rather than potential, predictability of a meteorological variable on the basis of its past history alone is proposed. This measure is predicated on the existence of a parametric time series model to represent the meteorological variable. Among other things, it provides an explicit representation of forecasting capability in terms of the individual parameters of such time series models.
As an application, the extent to which the Southern Oscillation (SO), a major component of climate, can be predicted on a monthly as well as a seasonal time scale on the basis of its past history alone is determined. In particular, on a monthly time scale up to about 44% of the variation in SO can be predicted one month ahead (zero months lead time) and about 35% two months ahead (one month lead time), or on a seasonal time scale about 53% one season ahead (zero seasons lead time) and about 31% two seasons ahead (one season lead time). In general, the degree of predictability naturally decays as the lead time increases, with essentially no predictability on a monthly time scale beyond ten months (nine months lead time) or on a seasonal time scale beyond three seasons (two seasons lead time).
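To see how such a predictability measure follows from the parameters of a time series model, consider the simplest case, an AR(1) process, for which the fraction of variance explained by the optimal forecast has a closed form. This is only a sketch of the idea; the paper's fitted SO models are richer than AR(1) (note an AR(1) tuned to 44% one month ahead would give about 19%, not 35%, two months ahead):

```python
def ar1_predictability(phi, lead):
    """Fraction of variance explained by the optimal forecast of an AR(1)
    process X_t = phi*X_{t-1} + e_t issued k = lead + 1 steps ahead; for
    AR(1) this measure reduces to phi**(2*k)."""
    k = lead + 1
    return phi ** (2 * k)

# Choose phi so one-month-ahead (zero lead) predictability is 44%, as above.
phi = 0.44 ** 0.5
two_months = ar1_predictability(phi, 1)  # AR(1) would give 0.44**2, about 19%
```

The measure decays geometrically with lead time for AR(1), mirroring the general decay of predictability described in the abstract.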
Abstract
An index consisting of the difference of normalized sea level pressure departures between Tahiti and Darwin is used to represent the Southern Oscillation (SO) fluctuations. Using a time-domain approach, autoregressive-moving average (ARMA) processes are applied to model and predict this Southern Oscillation Index (SOI) on a monthly and seasonal basis. The ARMA process which is chosen to fit the monthly SOI expresses the index for the current month as a function of both the SOI one month and seven (or nine) months ago, as well as the current and previous month's random error. A purely autoregressive (AR) process is identified as representative of the seasonal SO fluctuations, with the SOI for the current season being derived from the index for the immediate past three seasons and a single random disturbance term for the current season. To allow for the phase locking of the SOI with the annual cycle, ARMA processes with seasonally varying coefficients are also considered.
As one example of how these models could be used, seasonal SO variations have been forecast. When SOI observations from 1935 through the summer of 1983 are employed, the seasonal model indicates forecasts of positive SOI from fall 1983 through fall 1984. Forecasts based only on SOI observations from 1935 through spring 1982 show a low predictive skill for the SOI values from summer 1982 through winter 1984, whereas one-season-ahead forecasts starting with summer 1982 agree reasonably well with the actual SOI observations. These examples help illustrate the degree to which the future behavior of the SOI is predictable on the basis of its past history alone.
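The seasonal model's structure (current season as a function of the past three seasons plus a disturbance term) can be sketched as follows; the coefficients below are illustrative placeholders, not the paper's fitted values:

```python
import random

def simulate_ar3(coefs, n, sigma=1.0, seed=0):
    """Simulate X_t = a1*X_{t-1} + a2*X_{t-2} + a3*X_{t-3} + e_t, the form
    of the purely autoregressive seasonal model described above."""
    rng = random.Random(seed)
    x = [0.0, 0.0, 0.0]  # start-up values
    for _ in range(n):
        e = rng.gauss(0.0, sigma)
        x.append(coefs[0] * x[-1] + coefs[1] * x[-2] + coefs[2] * x[-3] + e)
    return x[3:]

def one_season_ahead(coefs, history):
    """Point forecast for the next season from the last three seasons."""
    return coefs[0] * history[-1] + coefs[1] * history[-2] + coefs[2] * history[-3]

coefs = (0.5, 0.2, 0.1)          # illustrative, not the fitted SOI coefficients
soi_like = simulate_ar3(coefs, 200)
fcst = one_season_ahead(coefs, soi_like)
```

Iterating `one_season_ahead` on its own output yields multi-season forecasts, whose skill decays with lead time as described in the abstract.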
Abstract
The Richardson model is a popular technique for stochastic simulation of daily weather variables, including precipitation amount, maximum and minimum temperature, and solar radiation. This model is extended to include two additional variables, daily mean wind speed and dewpoint, because these variables (or related quantities such as relative humidity) are required as inputs for certain ecological/vegetation response and agricultural management models. To allow for the positively skewed distribution of wind speed, a power transformation is applied. Solar radiation also is transformed to make the shape of its modeled distribution more realistic. A model identification criterion is used as an aid in determining whether the distributions of these two variables depend on precipitation occurrence. The approach can be viewed as an integration of what is known about the statistical properties of individual weather variables into a single multivariate model.
As an application, this extended model is fitted to weather data in the Pacific Northwest. To aid in understanding how such a stochastic weather generator works, considerable attention is devoted to its statistical properties. In particular, marginal and conditional distributions of wind speed and solar radiation are examined, with the model being capable of representing relationships between variables in which the variance is not constant, as well as certain forms of nonlinearity.
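The power transformation for wind speed mentioned above can be sketched simply: sample a normal variable on the transformed scale and back-transform, which induces the positive skew. The distribution parameters and the square-root power below are illustrative assumptions, not fitted values from the paper:

```python
import random
from statistics import mean, median

def simulate_wind(n, loc=1.2, scale=0.3, power=0.5, seed=1):
    """Draw wind speeds by sampling a normal on the transformed scale
    w**power (clipped at zero) and back-transforming; the result is a
    positively skewed distribution, as described above."""
    rng = random.Random(seed)
    return [max(rng.gauss(loc, scale), 0.0) ** (1.0 / power) for _ in range(n)]

winds = simulate_wind(2000)
# Positive skew: the sample mean exceeds the sample median.
skewed = mean(winds) > median(winds)
```

The same device (fit a normal on a transformed scale, back-transform simulated values) is what makes the multivariate normal machinery of a Richardson-type generator applicable to skewed variables.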
Abstract
Stochastic models fit to time series of daily precipitation amount generally ignore any year-to-year (i.e., low frequency) source of random variation, and such models are known to underestimate the interannual variance of monthly or seasonal total precipitation. To explicitly account for this “overdispersion” phenomenon, a mixture model is proposed. A hidden index, taking on one of two possible states, is assumed to exist (perhaps representing different modes of atmospheric circulation). To represent the intermittency of precipitation and the tendency of wet or dry spells to persist, a stochastic model known as a chain-dependent process is applied. The parameters of this stochastic model are permitted to vary conditionally on the hidden index.
Data for one location in California (whose previous study motivated the present approach), as well as for another location in New Zealand, are analyzed. To estimate the parameters of a mixture of two conditional chain-dependent processes by maximum likelihood, the “expectation-maximization algorithm” is employed. It is demonstrated that this approach can either eliminate or greatly reduce the extent of the overdispersion phenomenon. Moreover, an attempt is made to relate the hidden indexes to observed features of atmospheric circulation. This approach to dealing with overdispersion is contrasted with the more prevalent alternative of fitting more complex stochastic models for high-frequency variations to time series of daily precipitation.
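A minimal simulation sketch of the proposed structure, a hidden two-state index selecting between two chain-dependent processes, is given below. All parameter values are illustrative, and exponential wet-day amounts stand in for the distributions actually fitted in the paper:

```python
import random

def simulate_month(params, n_days, rng):
    """One month of a chain-dependent process: a first-order Markov chain
    for wet/dry occurrence plus exponential wet-day amounts (a simple
    stand-in for the amount distributions actually used)."""
    p_wet_after_dry, p_wet_after_wet, mean_amount = params
    wet, total = False, 0.0
    for _ in range(n_days):
        wet = rng.random() < (p_wet_after_wet if wet else p_wet_after_dry)
        if wet:
            total += rng.expovariate(1.0 / mean_amount)
    return total

def simulate_mixture(n_years=500, n_days=31, p_state0=0.5, seed=2):
    """Each year a hidden two-state index picks one of two parameter sets
    (values illustrative), mimicking regime-dependent precipitation."""
    rng = random.Random(seed)
    states = [(0.2, 0.6, 8.0),    # hypothetical "wet regime" parameters
              (0.05, 0.3, 4.0)]   # hypothetical "dry regime" parameters
    return [simulate_month(states[0] if rng.random() < p_state0 else states[1],
                           n_days, rng)
            for _ in range(n_years)]

totals = simulate_mixture()  # simulated monthly totals across years
```

Because the hidden state shifts all the parameters from year to year, the interannual variance of the simulated totals exceeds what a single chain-dependent process would produce, which is exactly the overdispersion mechanism the mixture model is designed to capture.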
Abstract
The theoretical spectra of certain parametric time series models with relevance to the Southern Oscillation (SO) are determined and compared with those based on a frequency-domain approach. Consistent spectral estimates are found for the two models selected in our earlier studies of the SO. All these results yield larger power at low frequencies and a dominant peak around 3–4 yr. Some reasons are offered for the slightly different behavior of the spectra as derived from the time-domain and frequency-domain approaches.
For the sake of comparison, the spectra of other simpler time series models are also calculated. While larger power is found at low frequencies, no spectral peak exists in these simpler models. Some implications of the quasi-periodic behavior found in the more complex models (i.e., an intermediate peak in the spectrum) are discussed in the context of the persistence and forecasting of the SO.
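The theoretical spectrum of a parametric time series model follows directly from its coefficients. The sketch below evaluates the standard AR(p) spectral density formula; the AR(2) coefficients are illustrative (chosen to have complex characteristic roots), not the SO models of the paper, but they reproduce the qualitative features noted above: large low-frequency power plus an interior spectral peak:

```python
import cmath
import math

def ar_spectrum(coefs, omega, sigma2=1.0):
    """Spectral density of an AR(p) process X_t = sum_k a_k X_{t-k} + e_t:
    f(w) = sigma2 / (2*pi * |1 - sum_k a_k exp(-i*k*w)|**2)."""
    poly = 1 - sum(a * cmath.exp(-1j * (k + 1) * omega)
                   for k, a in enumerate(coefs))
    return sigma2 / (2 * math.pi * abs(poly) ** 2)

coefs = (1.0, -0.5)  # illustrative AR(2) with complex roots -> interior peak
omegas = [w * math.pi / 1000 for w in range(1, 1000)]
peak = max(omegas, key=lambda w: ar_spectrum(coefs, w))
```

For these coefficients the peak sits at an intermediate frequency (analytically at arccos(0.75), about 0.72 rad), the signature of quasi-periodic behavior.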
Abstract
The statistical theory of extreme values is applied to daily minimum and maximum temperature time series in the U.S. Midwest and Southeast. If the spatial pattern in the frequency of extreme temperature events can be explained simply by shifts in location and scale parameters (e.g., the mean and standard deviation) of the underlying temperature distribution, then the area under consideration could be termed a “region.” A regional analysis of temperature extremes suggests that the Type I extreme value distribution is a satisfactory model for extreme high temperatures. On the other hand, the Type III extreme value distribution (possibly with common shape parameter) is often a better model for extreme low temperatures. Hence, our concept of a region is appropriate when considering maximum temperature extremes, and perhaps also for minimum temperature extremes.
Based on this regional analysis, if a temporal climate change were analogous to a spatial relocation, then it would be possible to anticipate how the frequency of extreme temperature events might change. Moreover, if the Type III extreme value distribution were assumed instead of the more common Type I, then the sensitivity of the frequency of extremes to changes in the location and scale parameters would be greater.
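A small sketch of fitting the Type I (Gumbel) distribution to block maxima and converting location and scale shifts into return levels; the method-of-moments fit and the hypothetical temperature maxima below are illustrative, not the paper's data or estimation method:

```python
import math
from statistics import mean, stdev

def gumbel_fit(maxima):
    """Method-of-moments fit of the Type I (Gumbel) extreme value
    distribution: scale = sd*sqrt(6)/pi, location = mean - gamma*scale,
    where gamma is the Euler-Mascheroni constant."""
    scale = stdev(maxima) * math.sqrt(6.0) / math.pi
    loc = mean(maxima) - 0.57721566 * scale
    return loc, scale

def return_level(loc, scale, t):
    """Level exceeded on average once per t blocks (e.g. the t-year event)."""
    return loc - scale * math.log(-math.log(1.0 - 1.0 / t))

# Hypothetical annual maximum temperatures (deg C) for one station.
maxima = [36.1, 38.4, 37.2, 40.0, 35.5, 39.1, 41.3, 38.0, 37.7, 36.9]
loc, scale = gumbel_fit(maxima)
```

Shifting `loc` or `scale` and recomputing the return level shows directly how the frequency of extreme events responds to changes in the location and scale parameters, the kind of sensitivity analysis the abstract describes.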
Abstract
A reanalysis of the same Phoenix daily minimum and maximum temperature data examined by Balling et al. has been performed. As evidenced by substantial increasing trends in both the mean minimum and maximum temperatures, this area has experienced a marked heat island effect in recent decades. Balling et al. found that a statistical model for climate change in which simply a trend in the mean is permitted is inadequate to explain the observed trend in occurrence of extreme maximum temperatures. The present reanalysis establishes that by allowing for the observed decrease in the standard deviation, the tendency to overestimate the frequency of extreme high-temperature events is reduced. Thus, the urban heat island provides a real-world application in which trends in variability need to be taken into account to anticipate changes in the frequency of extreme events.
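The point about variability can be illustrated with a toy calculation under a normal model (all numbers below are hypothetical, not the Phoenix estimates): a warming trend alone raises the exceedance probability of a fixed high threshold, but a simultaneous decrease in the standard deviation pulls it back down, so ignoring the variance trend overestimates the frequency of extremes:

```python
from statistics import NormalDist

def exceedance(mu, sigma, threshold):
    """Probability that a normal variable with the given mean and standard
    deviation exceeds the threshold."""
    return 1.0 - NormalDist(mu, sigma).cdf(threshold)

threshold = 45.0                                  # extreme-event threshold
base        = exceedance(40.0, 2.0, threshold)    # reference climate
mean_only   = exceedance(41.0, 2.0, threshold)    # warming, variance unchanged
mean_and_sd = exceedance(41.0, 1.7, threshold)    # warming plus reduced std dev
```

Here `mean_only` overstates the change in extreme-event frequency relative to `mean_and_sd`, mirroring the overestimation the reanalysis corrects.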
Abstract
Simple stochastic models fit to time series of daily precipitation amount have a marked tendency to underestimate the observed (or interannual) variance of monthly (or seasonal) total precipitation. By considering extensions of one particular class of stochastic model known as a chain-dependent process, the extent to which this “overdispersion” phenomenon is attributable to an inadequate model for high-frequency variation of precipitation is examined. For daily precipitation amount in January at Chico, California, fitting more complex stochastic models greatly reduces the underestimation of the variance of monthly total precipitation. One source of overdispersion, the number of wet days, can be completely eliminated through the use of a higher-order Markov chain for daily precipitation occurrence. Nevertheless, some of the observed variance remains unexplained and could possibly be attributed to low-frequency variation (sometimes termed “potential predictability”). Of special interest is the fact that these more complex stochastic models still underestimate the monthly variance, more so than does an alternative approach, in which the simplest form of chain-dependent process is conditioned on an index of large-scale atmospheric circulation.
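The higher-order Markov chain for daily precipitation occurrence mentioned above can be sketched as a lookup from the last k days' states to the probability of a wet day; the transition probabilities below are illustrative, not the Chico estimates:

```python
import random

def markov_occurrence(order, p_wet, n_days, rng):
    """Wet/dry occurrence from an order-k Markov chain: p_wet maps the tuple
    of the previous k days' states (True = wet) to the probability that the
    next day is wet."""
    state = (False,) * order
    out = []
    for _ in range(n_days):
        wet = rng.random() < p_wet[state]
        out.append(wet)
        state = state[1:] + (wet,)
    return out

# Illustrative second-order chain: wet spells persist after two wet days.
p_wet = {(False, False): 0.15, (False, True): 0.45,
         (True, False): 0.25, (True, True): 0.70}
january = markov_occurrence(2, p_wet, 31, random.Random(3))
```

Raising the order lets the chain match the observed variance in the number of wet days per month, which is the source of overdispersion the abstract says can be completely eliminated this way.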
Abstract
The economic value of ensemble-based weather or climate forecasts is generally assessed by taking the ensembles at “face value.” That is, the forecast probability is estimated as the relative frequency of occurrence of an event among a limited number of ensemble members. Although the economic value of probability forecasts rests on the concept of decision making under uncertainty, the decision maker is, in effect, assumed to ignore the uncertainty in estimating this probability. Nevertheless, many users are certainly aware of the uncertainty inherent in a limited ensemble size. Bayesian prediction is used instead in this paper, incorporating such additional forecast uncertainty into the decision process. The face-value forecast probability estimator would correspond to a Bayesian analysis with a prior distribution on the actual forecast probability that is appropriate only if the ensemble prediction system were believed to produce perfect forecasts. For the cost–loss decision-making model, the economic value of the face-value estimator can be negative for small ensemble sizes from a prediction system whose level of skill is not sufficiently high. Further, this economic value has the counterintuitive property of sometimes decreasing as the ensemble size increases. For a more plausible form of prior distribution on the actual forecast probability, which could be viewed as a “recalibration” of face-value forecasts, the Bayesian estimator does not exhibit this unexpected behavior. Moreover, it is established that the effects of ensemble size on the reliability, skill, and economic value have been exaggerated by using the face-value, instead of the Bayesian, estimator.
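The contrast between the two estimators can be sketched with a conjugate beta prior; the uniform Beta(1, 1) choice and the simple protect/don't-protect rule below are illustrative assumptions, not the specific prior or decision framework analyzed in the paper:

```python
def face_value_prob(k, n):
    """Raw ensemble relative frequency: k of n members forecast the event."""
    return k / n

def bayesian_prob(k, n, a=1.0, b=1.0):
    """Posterior-mean estimate of the event probability under a Beta(a, b)
    prior (a = b = 1 is uniform); shrinks small-ensemble estimates away
    from 0 and 1, toward the prior mean."""
    return (k + a) / (n + a + b)

def take_action(p, cost, loss):
    """Cost-loss model: protect whenever the forecast probability exceeds
    the cost/loss ratio."""
    return p > cost / loss

# With no members forecasting the event, face value says "impossible";
# the Bayesian estimate remains small but positive for a 10-member ensemble.
```

For small ensembles the two estimators can sit on opposite sides of the cost/loss threshold and thus trigger different decisions, which is how the choice of estimator feeds through to economic value.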