Search Results

Showing 1–10 of 34 items for Author or Editor: Barbara G. Brown
Barbara G. Brown and Richard W. Katz

Abstract

The statistical theory of extreme values is applied to daily minimum and maximum temperature time series in the U.S. Midwest and Southeast. If the spatial pattern in the frequency of extreme temperature events can be explained simply by shifts in location and scale parameters (e.g., the mean and standard deviation) of the underlying temperature distribution, then the area under consideration could be termed a “region.” A regional analysis of temperature extremes suggests that the Type I extreme value distribution is a satisfactory model for extreme high temperatures. On the other hand, the Type III extreme value distribution (possibly with a common shape parameter) is often a better model for extreme low temperatures. Hence, our concept of a region is appropriate when considering maximum temperature extremes, and perhaps also for minimum temperature extremes.

Based on this regional analysis, if a temporal climate change were analogous to a spatial relocation, then it would be possible to anticipate how the frequency of extreme temperature events might change. Moreover, if the Type III extreme value distribution were assumed instead of the more common Type I, then the sensitivity of the frequency of extremes to changes in the location and scale parameters would be greater.
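To make the Type I model concrete, the sketch below fits a Gumbel distribution to synthetic annual maximum temperatures by the method of moments and uses the fit to estimate an exceedance probability. The data, the estimator, and the 40°C threshold are illustrative assumptions, not details taken from the study.

```python
# Sketch: fit a Type I (Gumbel) extreme value distribution to annual
# maximum temperatures by the method of moments. Data are synthetic.
import math
import numpy as np

rng = np.random.default_rng(0)
annual_max = rng.gumbel(loc=35.0, scale=2.0, size=2000)  # hypothetical maxima (deg C)

# Method-of-moments estimates for the Gumbel location/scale parameters:
#   scale = sqrt(6) * s / pi,  loc = mean - EulerGamma * scale
scale = math.sqrt(6.0) * annual_max.std(ddof=1) / math.pi
loc = annual_max.mean() - 0.5772156649 * scale

# Gumbel CDF: F(x) = exp(-exp(-(x - loc) / scale))
def gumbel_cdf(x, loc, scale):
    return math.exp(-math.exp(-(x - loc) / scale))

# Probability that the annual maximum exceeds 40 deg C under the fit
p_exceed_40 = 1.0 - gumbel_cdf(40.0, loc, scale)
print(f"loc={loc:.2f}  scale={scale:.2f}  P(max > 40C)={p_exceed_40:.3f}")
```

Under this model, a shift in `loc` or `scale` translates directly into a change in the exceedance probability, which is the sense in which a "regional" shift lets one anticipate changes in the frequency of extremes.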

Full access
Allan H. Murphy and Barbara G. Brown

This paper reports some results of a study in which two groups of individuals—undergraduate students and professional meteorologists at Oregon State University—completed a short questionnaire concerning their interpretations of terminology commonly used in public weather forecasts. The questions related to terms and phrases associated with three elements: 1) cloudiness—fraction of sky cover; 2) precipitation—spatial and/or temporal variations; and 3) temperature—specification of intervals.

The students' responses indicate that cloudiness terms are subject to wide and overlapping ranges of interpretation, although the interpretations of these terms correspond quite well to National Weather Service definitions. Their responses to the precipitation and temperature questions reveal that some confusion exists concerning the meaning of spatial and temporal modifiers in precipitation forecasts and that some individuals interpret temperature ranges in terms of asymmetric intervals. When compared to the students' responses, the meteorologists' responses exhibit narrower ranges of interpretation of the cloudiness terms and less confusion about the meaning of spatial/temporal precipitation modifiers.

The study was not intended to be a definitive analysis of public understanding of forecast terminology. Instead, it should be viewed as a primitive form of the type of forecast-terminology study that must be undertaken in the future. Some implications of this investigation for future work in the area are discussed briefly.

Full access
Allan H. Murphy and Barbara G. Brown

Worded forecasts, which generally consist of both verbal and numerical expressions, play an important role in the communication of weather information to the general public. However, relatively few studies of the composition and interpretation of such forecasts have been conducted. Moreover, the studies that have been undertaken to date indicate that many expressions currently used in public forecasts are subject to wide ranges of interpretation (and to misinterpretation) and that the ability of individuals to recall the content of worded forecasts is quite limited. This paper focuses on forecast terminology and the understanding of such terminology in the context of short-range public weather forecasts.

The results of previous studies of forecast terminology (and related issues) are summarized with respect to six basic aspects or facets of worded forecasts. These facets include: 1) events (the values of the meteorological variables); 2) terminology (the words used to describe the events); 3) words versus numbers (the use of verbal and/or numerical expressions); 4) uncertainty (the mode of expression of uncertainty); 5) amount of information (the number of items of information); and 6) content and format (the selection of items of information and their placement). In addition, some related topics are treated briefly, including the impact of verification systems, the role of computer-worded forecasts, the implications of new modes of communication, and the use of weather forecasts.

Some conclusions and inferences that can be drawn from this review of previous work are discussed briefly, and a set of recommendations is presented regarding steps that should be taken to raise the level of understanding and enhance the usefulness of worded forecasts. These recommendations are organized under four headings: 1) studies of public understanding, interpretation, and use; 2) management practices; 3) forecaster training and education; and 4) public education.

Full access
Barbara G. Brown and Allan H. Murphy

Abstract

Fire-weather forecasts (FWFs) prepared by National Weather Service (NWS) forecasters on an operational basis are traditionally expressed in categorical terms. However, to make rational and optimal use of such forecasts, fire managers need quantitative information concerning the uncertainty inherent in the forecasts. This paper reports the results of two studies related to the quantification of uncertainty in operational and experimental FWFs.

Evaluation of samples of operational categorical FWFs reveals that these forecasts contain considerable uncertainty. The forecasts also exhibit modest but consistent biases, which suggest that the forecasters are influenced by the impacts of the relevant events on fire behavior. These results underscore the need for probabilistic FWFs.

The results of a probabilistic fire-weather forecasting experiment indicate that NWS forecasters are able to make quite reliable and reasonably precise credible interval temperature forecasts. However, the experimental relative humidity and wind speed forecasts exhibit considerable overforecasting and minimal skill. Although somewhat disappointing, these results are not too surprising in view of the fact that (a) the forecasters had little, if any, experience in probability forecasting; (b) no feedback was provided to the forecasters during the experimental period; and (c) the experiment was of quite limited duration. More extensive experimental and operational probability forecasting trials as well as user-oriented studies are required to enhance the quality of FWFs and to ensure that the forecasts are used in an optimal manner.
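The reliability of credible interval forecasts, in the sense used above, can be checked by comparing the nominal interval probability with the observed coverage. The sketch below does this for hypothetical 75% central credible interval temperature forecasts; all numbers are synthetic and are not the experiment's data.

```python
# Sketch: observed coverage of 75% credible interval temperature
# forecasts. A reliable forecaster's intervals should contain the
# verifying observation about 75% of the time. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 400
center = rng.normal(20.0, 5.0, size=n)        # forecast interval centers (deg C)
half_width = 3.0                              # hypothetical 75% half-width
obs = center + rng.normal(0.0, 2.6, size=n)   # verifying temperatures

inside = (obs >= center - half_width) & (obs <= center + half_width)
coverage = inside.mean()                      # near 0.75 if the intervals are reliable
print(f"nominal=0.75  observed coverage={coverage:.2f}")
```

Coverage well above the nominal probability would indicate overforecasting of uncertainty (intervals too wide); coverage well below it, overconfidence.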

Full access
Allan H. Murphy, Barbara G. Brown, and Yin-Sheng Chen

Abstract

A diagnostic approach to forecast verification is described and illustrated. This approach is based on a general framework for forecast verification. It is “diagnostic” in the sense that it focuses on the fundamental characteristics of the forecasts, the corresponding observations, and their relationship.

Three classes of diagnostic verification methods are identified: 1) the joint distribution of forecasts and observations and conditional and marginal distributions associated with factorizations of this joint distribution; 2) summary measures of these joint, conditional, and marginal distributions; and 3) performance measures and their decompositions. Linear regression models that can be used to describe the relationship between forecasts and observations are also presented. Graphical displays are advanced as a means of enhancing the utility of this body of diagnostic verification methodology.

A sample of National Weather Service maximum temperature forecasts (and observations) for Minneapolis, Minnesota, is analyzed to illustrate the use of this methodology. Graphical displays of the basic distributions and various summary measures are employed to obtain insights into distributional characteristics such as central tendency, variability, and asymmetry. The displays also facilitate the comparison of these characteristics among distributions: for example, between distributions involving forecasts and observations, among distributions involving different types of forecasts, and among distributions involving forecasts for different seasons or lead times. Performance measures and their decompositions are shown to provide quantitative information regarding basic dimensions of forecast quality such as bias, accuracy, calibration (or reliability), discrimination, and skill. Information regarding both distributional and performance characteristics is needed by modelers and forecasters concerned with improving forecast quality. Some implications of these diagnostic methods for verification procedures and practices are discussed.
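A few of the diagnostic quantities described above can be sketched in code: an unconditional bias, an accuracy measure (MSE) with its bias/variance decomposition, and a crude calibration check via conditional means of the observations given the forecast. The forecast/observation pairs below are synthetic, not the Minneapolis sample.

```python
# Sketch of diagnostic verification quantities for forecast/observation
# pairs. Data are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(10.0, 5.0, size=500)        # observed maximum temperatures
fcst = obs + rng.normal(1.0, 2.0, size=500)  # forecasts with a warm bias

bias = fcst.mean() - obs.mean()              # systematic (unconditional) bias
mse = np.mean((fcst - obs) ** 2)             # accuracy
# Exact decomposition: MSE = (mean error)^2 + variance of the error
mse_decomp = bias ** 2 + np.var(fcst - obs)

# Calibration-refinement view: conditional mean of the observations
# given the forecast, summarized here in coarse forecast quartile bins
bins = np.digitize(fcst, np.quantile(fcst, [0.25, 0.5, 0.75]))
cond_means = [obs[bins == k].mean() for k in range(4)]

print(f"bias={bias:.2f}  mse={mse:.2f}")
```

The point of the diagnostic approach is that these conditional quantities reveal structure (for example, conditional biases at high forecast values) that a single overall score would hide.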

Full access
Christopher A. Davis, Barbara G. Brown, Randy Bullock, and John Halley-Gotway

Abstract

The authors use a procedure called the method for object-based diagnostic evaluation, commonly referred to as MODE, to compare forecasts made from two models representing separate cores of the Weather Research and Forecasting (WRF) model during the 2005 National Severe Storms Laboratory and Storm Prediction Center Spring Program. Both models, the Advanced Research WRF (ARW) and the Nonhydrostatic Mesoscale Model (NMM), were run without a traditional cumulus parameterization scheme on horizontal grid lengths of 4 km (ARW) and 4.5 km (NMM). MODE was used to evaluate 1-h rainfall accumulation from 24-h forecasts valid at 0000 UTC on 32 days between 24 April and 4 June 2005. The primary variable used for evaluation was a “total interest” derived from a fuzzy-logic algorithm that compared several attributes of forecast and observed rain features such as separation distance and spatial orientation. The maximum value of the total interest obtained by comparing an object in one field with all objects in the comparison field was retained as the quality of matching for that object. The median of the distribution of all such maximum-interest values was selected as a metric of the overall forecast quality.
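The "total interest" idea can be illustrated with a toy fuzzy-logic combination: each attribute comparing a forecast object with an observed object is mapped to an interest value in [0, 1], and the weighted average is the total interest. The interest functions, cutoffs, and weights below are invented placeholders, not MODE's operational settings.

```python
# Sketch of a fuzzy-logic "total interest" in the spirit of MODE.
# Interest functions and weights are illustrative assumptions only.

def interest_distance(sep_km, zero_at=200.0):
    """Interest decays linearly from 1 at zero separation to 0 at zero_at km."""
    return max(0.0, 1.0 - sep_km / zero_at)

def interest_orientation(angle_diff_deg):
    """Interest decays linearly as the orientation difference grows to 90 deg."""
    return max(0.0, 1.0 - angle_diff_deg / 90.0)

def total_interest(sep_km, angle_diff_deg, weights=(2.0, 1.0)):
    """Weighted average of the attribute interests."""
    vals = (interest_distance(sep_km), interest_orientation(angle_diff_deg))
    return sum(w * v for w, v in zip(weights, vals)) / sum(weights)

# A forecast object 50 km from an observed object, rotated 30 degrees:
ti = total_interest(50.0, 30.0)

# As in the matching step described above, each object keeps the maximum
# total interest over all candidate objects in the comparison field:
best = max(total_interest(d, a) for d, a in [(50.0, 30.0), (150.0, 80.0)])
print(round(ti, 3))
```

The median of such maximum-interest values over all objects then serves as a single summary of overall forecast quality, as described above.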

Results from the 32 cases suggest that, overall, the configuration of the ARW model used during the 2005 Spring Program performed slightly better than the configuration of the NMM model. The primary manifestation of the differing levels of performance was fewer false alarms, forecast rain areas with no observed counterpart, in the ARW. However, it was noted that the performance varied considerably from day to day, with most days featuring indistinguishable performance. Thus, a small number of poor NMM forecasts produced the overall difference between the two models.

Full access
Barbara G. Brown, Richard W. Katz, and Allan H. Murphy

Abstract

A general approach for modeling wind speed and wind power is described. Because wind power is a function of wind speed, the methodology is based on the development of a model of wind speed. Values of wind power are estimated by applying the appropriate transformations to values of wind speed. The wind speed modeling approach takes into account several basic features of wind speed data, including autocorrelation, non-Gaussian distribution, and diurnal nonstationarity. The positive correlation between consecutive wind speed observations is taken into account by fitting an autoregressive process to wind speed data transformed to make their distribution approximately Gaussian and standardized to remove diurnal nonstationarity.
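The modeling chain described above can be sketched end to end: transform the speeds toward a Gaussian distribution, standardize by hour of day to remove the diurnal cycle, and fit an autoregressive process to the residual series. The square-root transform, the AR(1) order, and the synthetic data below are illustrative assumptions, not the paper's specific choices.

```python
# Sketch of the wind speed modeling chain: transform, standardize by
# hour of day, fit AR(1). Data are synthetic (diurnal cycle + AR noise).
import numpy as np

rng = np.random.default_rng(2)
n = 24 * 200                                   # ~200 days of hourly data
hours = np.arange(n)
diurnal = 6.0 + 2.0 * np.sin(2.0 * np.pi * hours / 24.0)

# Autocorrelated noise so consecutive hours are positively correlated
noise = np.zeros(n)
eps = rng.normal(0.0, 1.0, size=n)
for t in range(1, n):
    noise[t] = 0.7 * noise[t - 1] + eps[t]
speed = np.maximum(0.0, diurnal + noise)       # synthetic wind speed (m/s)

y = np.sqrt(speed)                             # transform toward Gaussian
hod = hours % 24
means = np.array([y[hod == h].mean() for h in range(24)])
stds = np.array([y[hod == h].std(ddof=1) for h in range(24)])
z = (y - means[hod]) / stds[hod]               # remove diurnal nonstationarity

phi = np.corrcoef(z[:-1], z[1:])[0, 1]         # AR(1) coefficient estimate
power = speed ** 3                             # wind power scales with speed cubed
print(f"estimated AR(1) coefficient: {phi:.2f}")
```

Simulated values of the standardized series can then be run back through the standardization and transformation to produce wind speed, and hence wind power, scenarios.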

As an example, the modeling approach is applied to a small set of hourly wind speed data from the Pacific Northwest. Use of the methodology for simulating and forecasting wind speed and wind power is discussed and an illustration of each of these types of applications is presented. To take into account the uncertainty of wind speed and wind power forecasts, techniques are presented for expressing the forecasts either in terms of confidence intervals or in terms of probabilities.

Full access
Barbara G. Brown, Richard W. Katz, and Allan H. Murphy

Abstract

The use of a concept called a precipitation “event” to obtain information regarding certain statistical properties of precipitation time series at a particular location and for a specific application (e.g., for modeling erosion) is described. Exploratory data analysis is used to examine several characteristics of more than 31 years of primitive precipitation events based on hourly precipitation data at Salem, Oregon. A primitive precipitation event is defined as one or more consecutive hours with at least 0.01 inches (0.25 mm) of precipitation. The characteristics of the events that are considered include the duration, magnitude, average intensity, and maximum intensity of the event, and the number of hours separating consecutive events.
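The event definition above translates directly into code: a primitive event is a maximal run of consecutive wet hours. The sketch below extracts such events from an hourly series and computes the per-event characteristics named above; the input series is made up for illustration.

```python
# Sketch: extract "primitive precipitation events" (runs of consecutive
# hours with >= 0.25 mm) from an hourly series and compute per-event
# characteristics. The input series is invented for illustration.
def primitive_events(hourly_mm, wet_threshold=0.25):
    events, current = [], []
    for amount in hourly_mm:
        if amount >= wet_threshold:
            current.append(amount)          # extend the current wet run
        elif current:
            events.append(current)          # a dry hour ends the event
            current = []
    if current:
        events.append(current)              # series ended during an event
    return [
        {
            "duration_h": len(ev),               # hours in the event
            "magnitude_mm": sum(ev),             # total precipitation
            "avg_intensity": sum(ev) / len(ev),  # mm per hour
            "max_intensity": max(ev),            # wettest hour
        }
        for ev in events
    ]

series = [0.0, 0.3, 1.2, 0.5, 0.0, 0.0, 2.0, 0.0]
events = primitive_events(series)
print(events)
```

With events in this form, the marginal and conditional distributions of the characteristics (e.g., average intensity given duration) can be examined directly.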

By means of exploratory analysis of the characteristics of the precipitation events, it is demonstrated that the marginal (i.e., unconditional) distributions of the characteristics are positively skewed. Examination of the conditional distributions of some pairs of characteristics indicates the existence of some relationships among the characteristics. For example, it is found that average intensity and maximum intensity are quite dependent on the event duration. The existence and forms of these relationships indicate that the assumption, commonly made in stochastic models of hourly precipitation time series, that the intensities (i.e., hourly amounts within an event) are independent and identically distributed is violated. Again using exploratory data analysis, it is shown that the hourly intensities at Salem are, in fact, stochastically increasing and positively associated within a precipitation event.

Full access
Barbara G. Brown, Richard W. Katz, and Allan H. Murphy

The so-called fallowing/planting problem is an example of a decision-making situation that is potentially sensitive to meteorological information. In this problem, wheat farmers in the drier, western portions of the northern Great Plains must decide each spring whether to plant a crop or to let their land lie fallow. Information that could be used to make this decision includes the soil moisture at planting time and a forecast of growing-season precipitation. A dynamic decision-making model is employed to investigate the economic value of such forecasts in the fallowing/planting situation.

Current seasonal-precipitation forecasts issued by the National Weather Service are found to have minimal economic value in this decision-making problem. However, relatively modest improvements in the quality of the forecasts would lead to quite large increases in value, and perfect information would possess considerable value. In addition, forecast value is found to be sensitive to changes in crop price and precipitation climatology. In particular, the shape of the curve relating forecast value to forecast quality is quite dependent on the amount of growing-season precipitation.
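The fallowing/planting decision can be caricatured as a two-action expected-value problem: choose the action whose expected return is higher given the forecast probability of adequate growing-season precipitation. The payoffs and probabilities below are invented, and the study's actual model is dynamic (multiyear), not this one-shot sketch.

```python
# One-shot caricature of the fallowing/planting decision. All payoff
# numbers are invented; the study uses a dynamic decision-making model.
def best_action(p_wet, payoff):
    """payoff[action][outcome] -> net return; pick the action with the
    higher expected return given P(adequate precipitation) = p_wet."""
    expected = {
        action: p_wet * outcomes["wet"] + (1.0 - p_wet) * outcomes["dry"]
        for action, outcomes in payoff.items()
    }
    return max(expected, key=expected.get), expected

payoff = {
    "plant":  {"wet": 60.0, "dry": -25.0},  # crop succeeds or fails
    "fallow": {"wet": 5.0,  "dry": 5.0},    # store soil moisture either way
}

action, expected = best_action(0.5, payoff)
print(action)
```

The economic value of a forecast in such a model comes from the cases where it flips the optimal action, which is why value is sensitive to forecast quality and to the precipitation climatology.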

Full access
Gregory Thompson, Roelof T. Bruintjes, Barbara G. Brown, and Frank Hage

Abstract

The purpose of the Federal Aviation Administration’s Icing Forecasting Improvement Program is to conduct research on icing conditions both in flight and on the ground. This paper describes a portion of the in-flight aircraft icing prediction effort through a comprehensive icing prediction and evaluation project conducted by the Research Applications Program (RAP) at the National Center for Atmospheric Research. During this project, in-flight icing potential was forecast using algorithms developed by RAP, the National Weather Service’s National Aviation Weather Advisory Unit, and the Air Force Global Weather Center in conjunction with numerical model data from the Eta, MAPS, and MM5 models. Furthermore, explicit predictions of cloud liquid water were available from the Eta and MM5 models and were also used to forecast icing potential.

To compare subjectively the different algorithms, predicted icing regions and observed pilot reports were viewed simultaneously on an interactive, real-time display. To measure objectively the skill of icing predictions, a rigorous statistical evaluation was performed in order to compare the different algorithms (details and results are provided in Part II). Both the subjective and objective comparisons are presented here for a particular case study, whereas results from the entire project are found in Part II. Statistical analysis of two months' worth of data suggests that further advances in temperature- and relative-humidity-based algorithms are unlikely. Explicit cloud liquid water predictions, however, show promising results, although such predictions are still relatively new in operational numerical models.

Full access