Search Results

You are looking at 1 - 10 of 44 items for

  • Author or Editor: Barbara Brown
Barbara G. Brown
and
Richard W. Katz

Abstract

The statistical theory of extreme values is applied to daily minimum and maximum temperature time series in the U.S. Midwest and Southeast. If the spatial pattern in the frequency of extreme temperature events can be explained simply by shifts in location and scale parameters (e.g., the mean and standard deviation) of the underlying temperature distribution, then the area under consideration could be termed a “region.” A regional analysis of temperature extremes suggests that the Type I extreme value distribution is a satisfactory model for extreme high temperatures. On the other hand, the Type III extreme value distribution (possibly with common shape parameter) is often a better model for extreme low temperatures. Hence, our concept of a region is appropriate when considering maximum temperature extremes, and perhaps also for minimum temperature extremes.

Based on this regional analysis, if a temporal climate change were analogous to a spatial relocation, then it would be possible to anticipate how the frequency of extreme temperature events might change. Moreover, if the Type III extreme value distribution were assumed instead of the more common Type I, then the sensitivity of the frequency of extremes to changes in the location and scale parameters would be greater.
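
A minimal sketch of the distinction the abstract draws between the Type I (Gumbel) and Type III (bounded-tail) extreme value families, and of how a shift in the location parameter changes the frequency of exceeding a fixed threshold. The 50-year synthetic series and the one-degree shift are illustrative assumptions, not data or results from the study; SciPy's `gumbel_r` and `weibull_max` stand in for the Type I and Type III families.

```python
# Minimal sketch: fitting Type I (Gumbel) and Type III extreme value models
# to a series of annual maximum temperatures. Data are synthetic, for
# illustration only -- not from the study summarized above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 50-year record of annual maximum daily temperatures (deg C).
annual_max = 35 + 2.5 * rng.gumbel(size=50)

# Type I (Gumbel): location and scale parameters only.
loc1, scale1 = stats.gumbel_r.fit(annual_max)

# Type III (bounded tail): adds a shape parameter; weibull_max is the
# reversed-Weibull family often used for bounded maxima.
shape3, loc3, scale3 = stats.weibull_max.fit(annual_max)

# A shift in the location parameter changes the frequency of exceeding a
# fixed threshold -- the sense in which a spatial relocation can serve as an
# analogue for a temporal climate change.
threshold = 40.0
p_now = stats.gumbel_r.sf(threshold, loc1, scale1)
p_shifted = stats.gumbel_r.sf(threshold, loc1 + 1.0, scale1)  # +1 deg C shift
print(f"P(annual max > {threshold} C): now {p_now:.3f}, shifted {p_shifted:.3f}")
```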

Full access
Allan H. Murphy
and
Barbara G. Brown

This paper reports some results of a study in which two groups of individuals—undergraduate students and professional meteorologists at Oregon State University—completed a short questionnaire concerning their interpretations of terminology commonly used in public weather forecasts. The questions related to terms and phrases associated with three elements: 1) cloudiness—fraction of sky cover; 2) precipitation—spatial and/or temporal variations; and 3) temperature—specification of intervals.

The students' responses indicate that cloudiness terms are subject to wide and overlapping ranges of interpretation, although the interpretations of these terms correspond quite well to National Weather Service definitions. Their responses to the precipitation and temperature questions reveal that some confusion exists concerning the meaning of spatial and temporal modifiers in precipitation forecasts and that some individuals interpret temperature ranges in terms of asymmetric intervals. When compared to the students' responses, the meteorologists' responses exhibit narrower ranges of interpretation of the cloudiness terms and less confusion about the meaning of spatial/temporal precipitation modifiers.

The study was not intended to be a definitive analysis of public understanding of forecast terminology. Instead, it should be viewed as a primitive form of the type of forecast-terminology study that must be undertaken in the future. Some implications of this investigation for future work in the area are discussed briefly.

Full access
Allan H. Murphy
and
Barbara G. Brown

Worded forecasts, which generally consist of both verbal and numerical expressions, play an important role in the communication of weather information to the general public. However, relatively few studies of the composition and interpretation of such forecasts have been conducted. Moreover, the studies that have been undertaken to date indicate that many expressions currently used in public forecasts are subject to wide ranges of interpretation (and to misinterpretation) and that the ability of individuals to recall the content of worded forecasts is quite limited. This paper focuses on forecast terminology and the understanding of such terminology in the context of short-range public weather forecasts.

The results of previous studies of forecast terminology (and related issues) are summarized with respect to six basic aspects or facets of worded forecasts. These facets include: 1) events (the values of the meteorological variables); 2) terminology (the words used to describe the events); 3) words versus numbers (the use of verbal and/or numerical expressions); 4) uncertainty (the mode of expression of uncertainty); 5) amount of information (the number of items of information); and 6) content and format (the selection of items of information and their placement). In addition, some related topics are treated briefly, including the impact of verification systems, the role of computer-worded forecasts, the implications of new modes of communication, and the use of weather forecasts.

Some conclusions and inferences that can be drawn from this review of previous work are discussed briefly, and a set of recommendations is presented regarding steps that should be taken to raise the level of understanding and enhance the usefulness of worded forecasts. These recommendations are organized under four headings: 1) studies of public understanding, interpretation, and use; 2) management practices; 3) forecaster training and education; and 4) public education.

Full access
Christopher Davis
,
Barbara Brown
, and
Randy Bullock

Abstract

A recently developed method of defining rain areas for the purpose of verifying precipitation produced by numerical weather prediction models is described. Precipitation objects are defined in both forecasts and observations based on a convolution (smoothing) and thresholding procedure. In an application of the new verification approach, the forecasts produced by the Weather Research and Forecasting (WRF) model are evaluated on a 22-km grid covering the continental United States during July–August 2001. Observed rainfall is derived from the stage-IV product from NCEP on a 4-km grid (averaged to a 22-km grid). It is found that the WRF produces too many large rain areas, and the spatial and temporal distribution of the rain areas reveals regional underestimates of the diurnal cycle in rain-area occurrence frequency. Objects in the two datasets are then matched according to the separation distance of their centroids. Overall, WRF rain errors exhibit no large biases in location, but do suffer from a positive size bias that maximizes during the late afternoon. This coincides with an excessive narrowing of the rainfall intensity range, consistent with the dominance of parameterized convection. Finally, matching ability depends strongly on object size; this is interpreted as the influence of relatively predictable synoptic-scale systems on the larger areas.
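
The convolution-thresholding definition of rain objects and the centroid-distance matching described above can be sketched in a few lines. The smoothing radius, rain threshold, matching distance, and the synthetic Gaussian "rain blobs" below are illustrative assumptions, not the settings or data of the WRF evaluation, and this is not the verification code used in the study.

```python
# Minimal sketch of the convolution-and-thresholding idea used to define
# precipitation objects, and of centroid-distance matching between a
# forecast and an observed field. All values are illustrative assumptions.
import numpy as np
from scipy import ndimage

def define_objects(rain, radius=3, threshold=5.0):
    """Smooth the rain field, threshold it, and label contiguous objects."""
    smoothed = ndimage.uniform_filter(rain, size=2 * radius + 1)
    mask = smoothed >= threshold
    labels, n = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    return labels, centroids

def match_by_centroid(fcst_centroids, obs_centroids, max_dist=10.0):
    """Pair forecast and observed objects whose centroids are close."""
    pairs = []
    for i, fc in enumerate(fcst_centroids):
        for j, oc in enumerate(obs_centroids):
            if np.hypot(fc[0] - oc[0], fc[1] - oc[1]) <= max_dist:
                pairs.append((i, j))
    return pairs

# Two synthetic rain fields: single Gaussian "rain blobs", slightly offset.
y, x = np.mgrid[0:100, 0:100]
fcst = 20.0 * np.exp(-((x - 30) ** 2 + (y - 40) ** 2) / 200.0)
obs = 20.0 * np.exp(-((x - 36) ** 2 + (y - 44) ** 2) / 250.0)

_, fcst_c = define_objects(fcst)
_, obs_c = define_objects(obs)
print(len(fcst_c), "forecast object(s),", len(obs_c), "observed object(s),",
      len(match_by_centroid(fcst_c, obs_c)), "centroid match(es)")
```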

Full access
Christopher Davis
,
Barbara Brown
, and
Randy Bullock

Abstract

The authors develop and apply an algorithm to define coherent areas of precipitation, emphasizing mesoscale convection, and compare properties of these areas with observations obtained from NCEP stage-IV precipitation analyses (gauge and radar combined). In Part II, fully explicit 12–36-h forecasts of rainfall from the Weather Research and Forecasting model (WRF) are evaluated. These forecasts are integrated on a 4-km mesh without a cumulus parameterization. Rain areas are defined similarly to Part I, but emphasize more intense, smaller areas. Furthermore, a time-matching algorithm is devised to group spatially and temporally coherent areas into rain systems that approximate mesoscale convective systems. In general, the WRF model produces too many rain areas with length scales of 80 km or greater. Rain systems typically last too long, and are forecast to occur 1–2 h later than observed. The intensity distribution among rain systems in the 4-km forecasts is generally too broad, especially in the late afternoon, in sharp contrast to the intensity distribution obtained on a coarser grid with parameterized convection in Part I. The model exhibits the largest positive size and intensity bias associated with systems over the Midwest and Mississippi Valley regions, but little size bias over the High Plains, Ohio Valley, and the southeast United States. For rain systems lasting 6 h or more, the critical success index for matching forecast and observed rain systems agrees closely with that obtained in a related study using manually determined rain systems.
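
A toy illustration of the time-matching idea: consecutive hourly rain areas are linked into a single "rain system" for as long as they overlap in space, and the system's lifetime is the number of linked hours. The overlap rule and the drifting masks below are assumptions made for illustration, not the algorithm used in the study.

```python
# Minimal sketch of linking rain areas across consecutive times into a
# "rain system" by spatial overlap, then measuring system lifetime.
import numpy as np

def link_in_time(masks):
    """masks: list of boolean grids (one per hour) marking one rain area.
    Count how many consecutive hours the area overlaps its predecessor."""
    lifetime = 1
    for prev, curr in zip(masks, masks[1:]):
        if np.logical_and(prev, curr).any():   # any shared grid points
            lifetime += 1
        else:
            break
    return lifetime

# Three hourly masks of a small rain area drifting eastward on a 10x10 grid.
masks = []
for shift in range(3):
    m = np.zeros((10, 10), dtype=bool)
    m[4:7, 2 + shift:5 + shift] = True
    masks.append(m)

print("system lifetime (h):", link_in_time(masks))   # overlapping areas -> 3
```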

Full access
Barbara G. Brown
and
Allan H. Murphy

Abstract

Fire-weather forecasts (FWFs) prepared by National Weather Service (NWS) forecasters on an operational basis are traditionally expressed in categorical terms. However, to make rational and optimal use of such forecasts, fire managers need quantitative information concerning the uncertainty inherent in the forecasts. This paper reports the results of two studies related to the quantification of uncertainty in operational and experimental FWFs.

Evaluation of samples of operational categorical FWFs reveals that these forecasts contain considerable uncertainty. The forecasts also exhibit modest but consistent biases which suggest that the forecasters are influenced by the impacts of the relevant events on fire behavior. These results underscore the need for probabilistic FWFs.

The results of a probabilistic fire-weather forecasting experiment indicate that NWS forecasters are able to make quite reliable and reasonably precise credible interval temperature forecasts. However, the experimental relative humidity and wind speed forecasts exhibit considerable overforecasting and minimal skill. Although somewhat disappointing, these results are not too surprising in view of the fact that (a) the forecasters had little, if any, experience in probability forecasting; (b) no feedback was provided to the forecasters during the experimental period; and (c) the experiment was of quite limited duration. More extensive experimental and operational probability forecasting trials as well as user-oriented studies are required to enhance the quality of FWFs and to ensure that the forecasts are used in an optimal manner.
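
The reliability of credible interval forecasts can be checked by comparing nominal coverage with the fraction of observations that actually fall inside the stated intervals, as in the sketch below. The 75% level and the synthetic forecast-observation pairs are assumptions made for illustration; they are not the experimental fire-weather forecast data.

```python
# Minimal sketch: assessing the reliability of credible interval forecasts by
# comparing nominal coverage with observed coverage. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 200
centers = 20 + 5 * rng.standard_normal(n)          # forecast central values (deg C)
half_widths = np.full(n, 3.0)                      # stated 75% credible half-widths
observed = centers + 2.6 * rng.standard_normal(n)  # verifying temperatures

inside = np.abs(observed - centers) <= half_widths
print(f"nominal coverage: 0.75, observed coverage: {inside.mean():.2f}")
# Observed coverage well below nominal would indicate overforecasting of
# confidence (intervals too narrow), the kind of problem reported above for
# the experimental relative humidity and wind speed forecasts.
```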

Full access
Gregor Skok
,
Joe Tribbia
,
Jože Rakovec
, and
Barbara Brown

Abstract

The Method for Object-based Diagnostic Evaluation (MODE) developed by Davis et al. is implemented and extended to characterize the temporal behavior of objects and to perform a diagnostic analysis on the spatial distribution and properties of precipitation systems over the equatorial Pacific Ocean. The analysis is performed on two satellite-derived datasets [Tropical Rainfall Measuring Mission (TRMM) 3B42 and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN)]. A sensitivity analysis showed that temporal convolution produces an unwanted “spillover” effect and that a large spatial convolution radius produces too much smoothing, which results in unrealistically large objects. The analysis showed that the largest and most long-lived precipitation systems in the tropical Pacific are typically located in the western part. A good ability to track precipitation systems in the tropical Pacific was demonstrated: movement of precipitation systems in the ITCZ is both westward and eastward, although westward movement is more frequent, and in the eastern part of the Pacific ITCZ westward movement is dominant. Movement of systems in the midlatitudes was predominantly eastward. These findings were common to both satellite products, despite the fact that the average rainfall accumulation can differ by 20%–30% and the occurrence of systems with long life spans can differ by 20%.
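
Classifying the zonal movement of a tracked system reduces to comparing the centroid longitudes along its track, as in the sketch below. The track values are invented for illustration and are not output from the extended MODE analysis or from either satellite product.

```python
# Minimal sketch: classifying the zonal movement of a tracked precipitation
# object from its centroid longitudes at consecutive times. The track is
# hypothetical, for illustration only.
import numpy as np

def zonal_direction(lons):
    """Return 'westward', 'eastward', or 'stationary' from a centroid track."""
    net = lons[-1] - lons[0]
    if net < 0:
        return "westward"
    if net > 0:
        return "eastward"
    return "stationary"

track_lons = np.array([175.0, 174.2, 173.1, 172.5])  # hypothetical 3-hourly centroids
print(zonal_direction(track_lons))  # -> westward, the more common ITCZ case
```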

Full access
Scott Sandgathe
,
Barbara Brown
,
Brian Etherton
, and
Edward Tollerud
Full access
Eric Gilleland
,
David Ahijevych
,
Barbara G. Brown
,
Barbara Casati
, and
Elizabeth E. Ebert

Abstract

Advancements in weather forecast models and their enhanced resolution have led to substantially improved and more realistic-appearing forecasts for some variables. However, traditional verification scores often indicate poor performance because of the increased small-scale variability, so that the true quality of the forecasts is not always characterized well. As a result, numerous new methods for verifying these forecasts have been proposed. These new methods can mostly be classified into two overall categories: filtering methods and displacement methods. The filtering methods can be further delineated into neighborhood and scale separation, and the displacement methods can be divided into features-based and field deformation. Each method gives considerably more information than the traditional scores, but it is not clear which method(s) should be used for which purpose.

A verification methods intercomparison project has been established in order to glean a better understanding of the proposed methods in terms of their various characteristics and to determine what verification questions each method addresses. The study is ongoing, and preliminary qualitative results for the different approaches applied to different situations are described here. In particular, the various methods and their basic characteristics, similarities, and differences are described. In addition, several questions are addressed regarding the application of the methods and the information that they provide. These questions include (i) how the method(s) inform performance at different scales; (ii) how the methods provide information on location errors; (iii) whether the methods provide information on intensity errors and distributions; (iv) whether the methods provide information on structure errors; (v) whether the approaches have the ability to provide information about hits, misses, and false alarms; (vi) whether the methods do anything that is counterintuitive; (vii) whether the methods have selectable parameters and how sensitive the results are to parameter selection; (viii) whether the results can be easily aggregated across multiple cases; (ix) whether the methods can identify timing errors; and (x) whether confidence intervals and hypothesis tests can be readily computed.
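
As one concrete example of the filtering (neighborhood) category, the sketch below computes a fractions skill score: both fields are converted to threshold-exceedance fractions within a square neighborhood and compared, so the same displacement error is penalized less as the neighborhood grows. The threshold, neighborhood sizes, and synthetic fields are illustrative assumptions, not settings or data from the intercomparison project.

```python
# Minimal sketch of one neighborhood (filtering) method, the fractions skill
# score. Fields, threshold, and neighborhood sizes are illustrative only.
import numpy as np
from scipy import ndimage

def fractions_skill_score(fcst, obs, threshold=1.0, neighborhood=5):
    """Compare neighborhood exceedance fractions of two gridded fields."""
    p_f = ndimage.uniform_filter((fcst >= threshold).astype(float), neighborhood)
    p_o = ndimage.uniform_filter((obs >= threshold).astype(float), neighborhood)
    mse = np.mean((p_f - p_o) ** 2)
    ref = np.mean(p_f ** 2) + np.mean(p_o ** 2)
    return 1.0 - mse / ref if ref > 0 else np.nan

rng = np.random.default_rng(3)
obs = rng.gamma(0.5, 2.0, size=(64, 64))
fcst = np.roll(obs, shift=6, axis=1)           # same field, displaced eastward
print(f"FSS at  5-gridpoint scale: {fractions_skill_score(fcst, obs):.2f}")
print(f"FSS at 21-gridpoint scale: {fractions_skill_score(fcst, obs, neighborhood=21):.2f}")
# A displacement error penalized heavily at small scales is partly forgiven as
# the neighborhood grows -- one way such methods "inform performance at
# different scales."
```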

Full access
Allan H. Murphy
,
Barbara G. Brown
, and
Yin-Sheng Chen

Abstract

A diagnostic approach to forecast verification is described and illustrated. This approach is based on a general framework for forecast verification. It is “diagnostic” in the sense that it focuses on the fundamental characteristics of the forecasts, the corresponding observations, and their relationship.

Three classes of diagnostic verification methods are identified: 1) the joint distribution of forecasts and observations and conditional and marginal distributions associated with factorizations of this joint distribution; 2) summary measures of these joint, conditional, and marginal distributions; and 3) performance measures and their decompositions. Linear regression models that can be used to describe the relationship between forecasts and observations are also presented. Graphical displays are advanced as a means of enhancing the utility of this body of diagnostic verification methodology.

A sample of National Weather Service maximum temperature forecasts (and observations) for Minneapolis, Minnesota, is analyzed to illustrate the use of this methodology. Graphical displays of the basic distributions and various summary measures are employed to obtain insights into distributional characteristics such as central tendency, variability, and asymmetry. The displays also facilitate the comparison of these characteristics among distributions: for example, between distributions involving forecasts and observations, among distributions involving different types of forecasts, and among distributions involving forecasts for different seasons or lead times. Performance measures and their decompositions are shown to provide quantitative information regarding basic dimensions of forecast quality such as bias, accuracy, calibration (or reliability), discrimination, and skill. Information regarding both distributional and performance characteristics is needed by modelers and forecasters concerned with improving forecast quality. Some implications of these diagnostic methods for verification procedures and practices are discussed.
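
A minimal sketch of the diagnostic starting point: tabulate the joint distribution of (forecast, observation) pairs, take a marginal distribution from it, and compute a couple of simple performance measures. The synthetic temperatures stand in for a sample such as the Minneapolis maximum temperature forecasts; they are not that dataset, and the 5-degree binning is an assumption.

```python
# Minimal sketch of the diagnostic approach: the joint distribution of
# forecasts and observations, a marginal derived from it, and two basic
# performance measures. Data are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(4)
obs = rng.normal(10.0, 8.0, size=500)            # observed max temperatures (deg C)
fcst = obs + rng.normal(1.0, 3.0, size=500)      # forecasts with a slight warm bias

# Joint distribution p(f, x), here as a 2D histogram over 5-degree bins.
bins = np.arange(-20, 41, 5)
joint, _, _ = np.histogram2d(fcst, obs, bins=[bins, bins])
joint = joint / joint.sum()

# Marginal (refinement) distribution of the forecasts, p(f) = sum over x of p(f, x).
p_f = joint.sum(axis=1)

# Two basic performance measures computed directly from the pairs.
bias = np.mean(fcst - obs)                       # systematic error
mae = np.mean(np.abs(fcst - obs))                # accuracy
print(f"bias: {bias:.2f} C, MAE: {mae:.2f} C")
print("forecast marginal distribution:", np.round(p_f, 3))
```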

Full access