Search Results

You are looking at 1–10 of 30 items for:

  • Author or Editor: Barbara G. Brown
  • Refine by Access: All Content
Barbara G. Brown and Richard W. Katz

Abstract

The statistical theory of extreme values is applied to daily minimum and maximum temperature time series in the U.S. Midwest and Southeast. If the spatial pattern in the frequency of extreme temperature events can be explained simply by shifts in location and scale parameters (e.g., the mean and standard deviation) of the underlying temperature distribution, then the area under consideration could be termed a “region.” A regional analysis of temperature extremes suggests that the Type I extreme value distribution is a satisfactory model for extreme high temperatures. On the other hand, the Type III extreme value distribution (possibly with common shape parameter) is often a better model for extreme low temperatures. Hence, our concept of a region is appropriate when considering maximum temperature extremes, and perhaps also for minimum temperature extremes.

Based on this regional analysis, if a temporal climate change were analogous to a spatial relocation, then it would be possible to anticipate how the frequency of extreme temperature events might change. Moreover, if the Type III extreme value distribution were assumed instead of the more common Type I, then the sensitivity of the frequency of extremes to changes in the location and scale parameters would be greater.
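
For reference, the extreme value families mentioned above can be written in the conventional generalized extreme value (GEV) form with location \mu, scale \sigma, and shape \xi; this is standard notation rather than text from the paper.

```latex
% GEV distribution function for block maxima (conventional notation, not quoted from the paper)
F(x;\mu,\sigma,\xi) = \exp\left\{-\left[1 + \xi\,\frac{x-\mu}{\sigma}\right]^{-1/\xi}\right\},
\qquad 1 + \xi\,\frac{x-\mu}{\sigma} > 0 .
% The Type I (Gumbel) distribution is the \xi \to 0 limit:
F(x;\mu,\sigma,0) = \exp\left\{-\exp\left[-\frac{x-\mu}{\sigma}\right]\right\} ,
% while \xi < 0 gives the Type III family, whose upper tail is bounded at x = \mu - \sigma/\xi.
% For extreme minima, the same forms are conventionally applied to the negated series.
```

One way to see the greater sensitivity noted above is that the Type III family has a finite upper bound, so exceedance probabilities for fixed thresholds near that bound respond strongly to shifts in the location and scale parameters.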

Full access
Barbara G. Brown and Allan H. Murphy

Abstract

Fire-weather forecasts (FWFs) prepared by National Weather Service (NWS) forecasters on an operational basis are traditionally expressed in categorical terms. However, to make rational and optimal use of such forecasts, fire managers need quantitative information concerning the uncertainty inherent in the forecasts. This paper reports the results of two studies related to the quantification of uncertainty in operational and experimental FWFs.

Evaluation of samples of operational categorical FWFs reveals that these forecasts contain considerable uncertainty. The forecasts also exhibit modest but consistent biases which suggest that the forecasters are influenced by the impacts of the relevant events on fire behavior. These results underscore the need for probabilistic FWFs.

The results of a probabilistic fire-weather forecasting experiment indicate that NWS forecasters are able to make quite reliable and reasonably precise credible interval temperature forecasts. However, the experimental relative humidity and wind speed forecasts exhibit considerable overforecasting and minimal skill. Although somewhat disappointing, these results are not too surprising in view of the fact that (a) the forecasters had little, if any, experience in probability forecasting; (b) no feedback was provided to the forecasters during the experimental period; and (c) the experiment was of quite limited duration. More extensive experimental and operational probability forecasting trials as well as user-oriented studies are required to enhance the quality of FWFs and to ensure that the forecasts are used in an optimal manner.
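
The reliability of credible interval forecasts of the kind evaluated above can be checked by comparing empirical coverage with the nominal probability. The sketch below is illustrative only: the synthetic data, variable names, and the 75% nominal level are assumptions, not details from the experiment.

```python
import numpy as np

# Illustrative synthetic data only; names and the 75% nominal level are assumptions,
# not details taken from the forecasting experiment described above.
rng = np.random.default_rng(0)
n = 200
obs = rng.normal(20.0, 5.0, n)          # verifying maximum temperatures (deg C)
fcst = obs + rng.normal(0.0, 3.0, n)    # forecast central values with some error
half_width = 4.0                        # half-width of the stated 75% credible interval
lower, upper = fcst - half_width, fcst + half_width

# Reliability: a 75% credible interval should contain the observation about 75% of the time.
coverage = np.mean((obs >= lower) & (obs <= upper))
print(f"empirical coverage: {coverage:.2f} vs nominal 0.75; mean width: {2*half_width:.1f} deg C")
# Coverage well below nominal indicates overconfidence (intervals too narrow);
# coverage well above nominal indicates underconfidence (intervals too wide).
```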

Full access
Allan H. Murphy, Barbara G. Brown, and Yin-Sheng Chen

Abstract

A diagnostic approach to forecast verification is described and illustrated. This approach is based on a general framework for forecast verification. It is “diagnostic” in the sense that it focuses on the fundamental characteristics of the forecasts, the corresponding observations, and their relationship.

Three classes of diagnostic verification methods are identified: 1) the joint distribution of forecasts and observations and the conditional and marginal distributions associated with factorizations of this joint distribution; 2) summary measures of these joint, conditional, and marginal distributions; and 3) performance measures and their decompositions. Linear regression models that can be used to describe the relationship between forecasts and observations are also presented. Graphical displays are advanced as a means of enhancing the utility of this body of diagnostic verification methodology.

A sample of National Weather Service maximum temperature forecasts (and observations) for Minneapolis, Minnesota, is analyzed to illustrate the use of this methodology. Graphical displays of the basic distributions and various summary measures are employed to obtain insights into distributional characteristics such as central tendency, variability, and asymmetry. The displays also facilitate the comparison of these characteristics among distributions: for example, between distributions involving forecasts and observations, among distributions involving different types of forecasts, and among distributions involving forecasts for different seasons or lead times. Performance measures and their decompositions are shown to provide quantitative information regarding basic dimensions of forecast quality such as bias, accuracy, calibration (or reliability), discrimination, and skill. Information regarding both distributional and performance characteristics is needed by modelers and forecasters concerned with improving forecast quality. Some implications of these diagnostic methods for verification procedures and practices are discussed.
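
For readers unfamiliar with this framework, the factorizations of the joint distribution referred to above are conventionally written as follows, with f denoting a forecast and x an observation (standard notation, not text quoted from the paper):

```latex
% Factorizations of the joint distribution of forecasts f and observations x
p(f,x) = p(x \mid f)\,p(f)   \quad\text{(calibration-refinement factorization)}
p(f,x) = p(f \mid x)\,p(x)   \quad\text{(likelihood-base rate factorization)}
```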

Full access
Christopher A. Davis, Barbara G. Brown, Randy Bullock, and John Halley-Gotway

Abstract

The authors use a procedure called the method for object-based diagnostic evaluation, commonly referred to as MODE, to compare forecasts made from two models representing separate cores of the Weather Research and Forecasting (WRF) model during the 2005 National Severe Storms Laboratory and Storm Prediction Center Spring Program. Both models, the Advanced Research WRF (ARW) and the Nonhydrostatic Mesoscale Model (NMM), were run without a traditional cumulus parameterization scheme on horizontal grid lengths of 4 km (ARW) and 4.5 km (NMM). MODE was used to evaluate 1-h rainfall accumulation from 24-h forecasts valid at 0000 UTC on 32 days between 24 April and 4 June 2005. The primary variable used for evaluation was a “total interest” derived from a fuzzy-logic algorithm that compared several attributes of forecast and observed rain features such as separation distance and spatial orientation. The maximum value of the total interest obtained by comparing an object in one field with all objects in the comparison field was retained as the quality of matching for that object. The median of the distribution of all such maximum-interest values was selected as a metric of the overall forecast quality.

Results from the 32 cases suggest that, overall, the configuration of the ARW model used during the 2005 Spring Program performed slightly better than the configuration of the NMM model. The primary manifestation of the differing levels of performance was fewer false alarms (forecast rain areas with no observed counterpart) in the ARW. However, the performance varied considerably from day to day, with the two models performing indistinguishably on most days. Thus, a small number of poor NMM forecasts produced the overall difference between the two models.
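
A minimal sketch of the fuzzy-logic total-interest idea described above, reduced to two attributes (centroid separation and orientation difference). The interest functions, weights, and attribute values are invented for illustration and are not MODE's actual configuration.

```python
import numpy as np

def interest_distance(d_km, zero_at=200.0):
    # Interest decays linearly from 1 at zero separation to 0 at `zero_at` km (illustrative choice).
    return np.maximum(0.0, 1.0 - np.asarray(d_km) / zero_at)

def interest_orientation(dtheta_deg, zero_at=90.0):
    # Interest decays linearly from 1 at 0 deg to 0 at `zero_at` deg difference (illustrative choice).
    return np.maximum(0.0, 1.0 - np.abs(dtheta_deg) / zero_at)

def total_interest(d_km, dtheta_deg, w_dist=0.7, w_orient=0.3):
    # Weighted combination of attribute interests; the weights here are arbitrary.
    return w_dist * interest_distance(d_km) + w_orient * interest_orientation(dtheta_deg)

# Hypothetical attributes for each (forecast object, observed object) pair:
# rows index forecast objects, columns index observed objects.
centroid_dist_km = np.array([[ 30.0, 180.0],
                             [150.0,  60.0]])
orientation_diff = np.array([[10.0, 70.0],
                             [80.0, 20.0]])

interest = total_interest(centroid_dist_km, orientation_diff)
best_per_fcst = interest.max(axis=1)   # best total interest found for each forecast object
overall = np.median(best_per_fcst)     # median of maximum interests, as described above
print(interest)
print("per-object best interest:", best_per_fcst, "overall quality:", overall)
```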

Full access
Barbara G. Brown, Richard W. Katz, and Allan H. Murphy

Abstract

A general approach for modeling wind speed and wind power is described. Because wind power is a function of wind speed, the methodology is based on the development of a model of wind speed. Values of wind power are estimated by applying the appropriate transformations to values of wind speed. The wind speed modeling approach takes into account several basic features of wind speed data, including autocorrelation, non-Gaussian distribution, and diurnal nonstationarity. The positive correlation between consecutive wind speed observations is taken into account by fitting an autoregressive process to wind speed data transformed to make their distribution approximately Gaussian and standardized to remove diurnal nonstationarity.

As an example, the modeling approach is applied to a small set of hourly wind speed data from the Pacific Northwest. Use of the methodology for simulating and forecasting wind speed and wind power is discussed and an illustration of each of these types of applications is presented. To take into account the uncertainty of wind speed and wind power forecasts, techniques are presented for expressing the forecasts either in terms of confidence intervals or in terms of probabilities.
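
A rough sketch of the kind of pipeline described above. Every specific choice here (the synthetic data, the square-root transform, hour-of-day standardization, an AR(1) fit, and a cubic power law with an arbitrary constant) is an assumption for illustration rather than the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a year of hourly wind speeds (m/s); real observations would be read in here.
hours = np.arange(24 * 365)
speed = 4.0 + 2.0 * np.sin(2 * np.pi * hours / 24) + rng.gamma(2.0, 1.0, hours.size)

# 1) Transform toward a Gaussian shape (a square-root transform is one simple choice).
y = np.sqrt(speed)

# 2) Remove diurnal nonstationarity by standardizing with hour-of-day means and standard deviations.
hod = hours % 24
mu = np.array([y[hod == h].mean() for h in range(24)])
sd = np.array([y[hod == h].std() for h in range(24)])
z = (y - mu[hod]) / sd[hod]

# 3) Fit an AR(1) process to the standardized series via the lag-1 autocorrelation.
phi = np.corrcoef(z[:-1], z[1:])[0, 1]
innov_sd = z.std() * np.sqrt(max(1e-9, 1.0 - phi ** 2))

# 4) One-step-ahead forecast with an approximate 95% interval, back-transformed to wind speed.
h_next = (hod[-1] + 1) % 24

def to_speed(zz, h):
    # Invert the standardization and the square-root transform (clipped at zero before squaring).
    return max(0.0, zz * sd[h] + mu[h]) ** 2

z_hat = phi * z[-1]
lo, hi = z_hat - 1.96 * innov_sd, z_hat + 1.96 * innov_sd
print("forecast speed (m/s):", round(to_speed(z_hat, h_next), 2),
      "approx. 95% interval:", (round(to_speed(lo, h_next), 2), round(to_speed(hi, h_next), 2)))

# 5) Wind power scales roughly with the cube of wind speed; the constant here is arbitrary.
print("forecast power (arbitrary units):", round(0.5 * to_speed(z_hat, h_next) ** 3, 1))
```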

Full access
Barbara G. Brown, Richard W. Katz, and Allan H. Murphy

Abstract

The use of a concept called a precipitation “event” to obtain information regarding certain statistical properties of precipitation time series at a particular location and for a specific application (e.g., for modeling erosion) is described. Exploratory data analysis is used to examine several characteristics of more than 31 years of primitive precipitation events based on hourly precipitation data at Salem, Oregon. A primitive precipitation event is defined as one or more consecutive hours with at least 0.01 inches (0.25 mm) of precipitation. The characteristics considered include the duration, magnitude, average intensity, and maximum intensity of each event, as well as the number of hours separating consecutive events.

By means of exploratory analysis of the characteristics of the precipitation events, it is demonstrated that the marginal (i.e., unconditional) distributions of the characteristics are positively skewed. Examination of the conditional distributions of some pairs of characteristics indicates the existence of some relationships among the characteristics. For example, it is found that average intensity and maximum intensity are quite dependent on the event duration. The existence and forms of these relationships indicate that the assumption commonly made in stochastic models of hourly precipitation time series that the intensities (i.e., hourly amounts within an event) are independent and identically distributed must be violated. Again using exploratory data analysis, it is shown that the hourly intensities at Salem are, in fact, stochastically increasing and positively associated within a precipitation event.
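
A small sketch of how primitive events and their characteristics could be extracted from an hourly series, using the 0.01 in. (0.25 mm) threshold defined above; the function name and the toy series are illustrative.

```python
import numpy as np

THRESH = 0.25  # mm, i.e. 0.01 in.; an hour is "wet" if precipitation >= THRESH

def extract_events(hourly_mm):
    """Split an hourly precipitation series into primitive events
    (runs of one or more consecutive wet hours) and summarize each."""
    wet = np.asarray(hourly_mm, dtype=float) >= THRESH
    events, start = [], None
    for i, w in enumerate(wet):
        if w and start is None:
            start = i
        elif not w and start is not None:
            events.append((start, i))        # event occupies hours [start, i)
            start = None
    if start is not None:
        events.append((start, len(wet)))

    summaries = []
    for k, (s, e) in enumerate(events):
        amounts = np.asarray(hourly_mm[s:e], dtype=float)
        summaries.append({
            "duration_h": e - s,                       # event duration (hours)
            "magnitude_mm": amounts.sum(),             # total depth
            "avg_intensity_mm_per_h": amounts.mean(),  # magnitude / duration
            "max_intensity_mm_per_h": amounts.max(),   # wettest hour
            # hours separating this event from the previous one (None for the first event)
            "separation_h": s - events[k - 1][1] if k > 0 else None,
        })
    return summaries

# Toy series: two dry hours, a 3-h event, five dry hours, then a 1-h event.
series = [0, 0, 1.2, 0.5, 2.0, 0, 0, 0, 0, 0, 0.3, 0]
for event in extract_events(series):
    print(event)
```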

Full access
Gregory Thompson, Roelof T. Bruintjes, Barbara G. Brown, and Frank Hage

Abstract

The purpose of the Federal Aviation Administration’s Icing Forecasting Improvement Program is to conduct research on icing conditions both in flight and on the ground. This paper describes a portion of the in-flight aircraft icing prediction effort through a comprehensive icing prediction and evaluation project conducted by the Research Applications Program (RAP) at the National Center for Atmospheric Research. During this project, in-flight icing potential was forecast using algorithms developed by RAP, the National Weather Service’s National Aviation Weather Advisory Unit, and the Air Force Global Weather Center in conjunction with numerical model data from the Eta, MAPS, and MM5 models. Furthermore, explicit predictions of cloud liquid water were available from the Eta and MM5 models and were also used to forecast icing potential.

To compare subjectively the different algorithms, predicted icing regions and observed pilot reports were viewed simultaneously on an interactive, real-time display. To measure objectively the skill of icing predictions, a rigorous statistical evaluation was performed in order to compare the different algorithms (details and results are provided in Part II). Both the subjective and objective comparisons are presented here for a particular case study, whereas results from the entire project are found in Part II. Statistical analysis of 2 months’ worth of data suggests that further advances in temperature- and relative-humidity-based algorithms are unlikely. Explicit cloud liquid water predictions, however, show promising results, although such predictions are still relatively new in operational numerical models.
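
The temperature and relative-humidity-based approach mentioned above can be illustrated, very loosely, with a simple threshold flag. The thresholds below are invented for the example and are not the RAP, NAWAU, or AFGWC algorithms.

```python
import numpy as np

def icing_potential(temp_c, rh_pct):
    """Generic illustration of a temperature/relative-humidity icing flag.
    The thresholds below are illustrative only and are NOT the operational
    algorithms referenced in the abstract."""
    temp_c = np.asarray(temp_c, dtype=float)
    rh_pct = np.asarray(rh_pct, dtype=float)
    # Supercooled-cloud conditions are broadly associated with subfreezing
    # temperatures that are not too cold, together with high relative humidity.
    return (temp_c <= 0.0) & (temp_c >= -20.0) & (rh_pct >= 70.0)

# Hypothetical model values: temperature (deg C) and RH (%) at a few levels.
temps = np.array([ 2.0, -5.0, -12.0, -25.0])
rhs   = np.array([85.0, 90.0,  60.0,  95.0])
print(icing_potential(temps, rhs))   # -> [False  True False False]
```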

Full access
David Ahijevych, Eric Gilleland, Barbara G. Brown, and Elizabeth E. Ebert

Abstract

Several spatial forecast verification methods have been developed that are suited for high-resolution precipitation forecasts. They can account for the spatial coherence of precipitation and give credit to a forecast that does not necessarily match the observation at any particular grid point. The methods were grouped into four broad categories (neighborhood, scale separation, features-based, and field deformation) for the Spatial Forecast Verification Methods Intercomparison Project (ICP). Participants were asked to apply their new methods to a set of artificial geometric and perturbed forecasts with prescribed errors, and a set of real forecasts of convective precipitation on a 4-km grid. This paper describes the intercomparison test cases, summarizes results from the geometric cases, and presents subjective scores and traditional scores from the real cases.

All the new methods could detect bias error, and the features-based and field deformation methods were also able to diagnose displacement errors of precipitation features. The best approach for capturing errors in aspect ratio was field deformation. When comparing model forecasts with real cases, the traditional verification scores did not agree with the subjective assessment of the forecasts.
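
As a concrete instance of the “neighborhood” category referred to above, the sketch below computes a fractions skill score (FSS), one common neighborhood method, on toy fields; the fields, rain threshold, and neighborhood sizes are made up for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(fcst, obs, threshold, n):
    """Fractions skill score for an n x n neighborhood (one common neighborhood method)."""
    fb = (np.asarray(fcst) >= threshold).astype(float)   # binary exceedance fields
    ob = (np.asarray(obs) >= threshold).astype(float)
    pf = uniform_filter(fb, size=n, mode="constant")     # neighborhood fractions
    po = uniform_filter(ob, size=n, mode="constant")
    mse = np.mean((pf - po) ** 2)
    ref = np.mean(pf ** 2) + np.mean(po ** 2)            # reference (worst-case) MSE
    return 1.0 - mse / ref if ref > 0 else np.nan

# Toy fields: the "forecast" rain area is the observed area displaced by a few grid points.
obs = np.zeros((50, 50)); obs[20:30, 20:30] = 5.0
fcst = np.zeros((50, 50)); fcst[20:30, 26:36] = 5.0
for n in (1, 5, 11, 21):
    print(f"FSS at neighborhood size {n}: {fss(fcst, obs, threshold=1.0, n=n):.2f}")
```

The score increases with neighborhood size for a displaced but otherwise correct feature, which is how neighborhood methods give credit that gridpoint scores do not.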

Full access
Eric Gilleland, David Ahijevych, Barbara G. Brown, Barbara Casati, and Elizabeth E. Ebert

Abstract

Advancements in weather forecast models and their enhanced resolution have led to substantially improved and more realistic-appearing forecasts for some variables. However, traditional verification scores often indicate poor performance because of the increased small-scale variability, so that the true quality of the forecasts is not always characterized well. As a result, numerous new methods for verifying these forecasts have been proposed. These new methods can mostly be classified into two overall categories: filtering methods and displacement methods. The filtering methods can be further delineated into neighborhood and scale separation, and the displacement methods can be divided into features-based and field deformation. Each method gives considerably more information than the traditional scores, but it is not clear which method(s) should be used for which purpose.

A verification methods intercomparison project has been established in order to glean a better understanding of the proposed methods in terms of their various characteristics and to determine what verification questions each method addresses. The study is ongoing, and preliminary qualitative results for the different approaches applied to different situations are described here. In particular, the various methods and their basic characteristics, similarities, and differences are described. In addition, several questions are addressed regarding the application of the methods and the information that they provide. These questions include (i) how the method(s) inform performance at different scales; (ii) how the methods provide information on location errors; (iii) whether the methods provide information on intensity errors and distributions; (iv) whether the methods provide information on structure errors; (v) whether the approaches have the ability to provide information about hits, misses, and false alarms; (vi) whether the methods do anything that is counterintuitive; (vii) whether the methods have selectable parameters and how sensitive the results are to parameter selection; (viii) whether the results can be easily aggregated across multiple cases; (ix) whether the methods can identify timing errors; and (x) whether confidence intervals and hypothesis tests can be readily computed.

Full access
Eric Gilleland, Thomas C. M. Lee, John Halley Gotway, R. G. Bullock, and Barbara G. Brown

Abstract

An important focus of research in the forecast verification community is the development of alternative verification approaches for quantitative precipitation forecasts, as well as for other spatial forecasts. The need for information that is meaningful in an operational context and the importance of capturing the specific sources of forecast error at varying spatial scales are two primary motivating factors. In this paper, features of precipitation as identified by a convolution threshold technique are merged within fields and matched across fields in an automatic and computationally efficient manner using Baddeley’s metric for binary images.

The method is carried out on 100 test cases, and 4 representative cases are shown in detail. Results of merging and matching objects are generally positive in that they are consistent with how a subjective observer might merge and match features. The results further suggest that the Baddeley metric may be useful as a computationally efficient summary metric giving information about location, shape, and size differences of individual features, which could be employed for other spatial forecast verification methods.
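
A compact sketch of Baddeley’s delta metric for binary images, the quantity used above for merging and matching features. The cutoff, the exponent p = 2, and the toy fields are illustrative choices rather than the settings used in the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def baddeley_delta(a, b, cutoff=20.0, p=2.0):
    """Baddeley's delta metric between two binary images, using the common
    w(t) = min(t, cutoff) transform of the distance maps (illustrative settings)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    # Distance from every grid point to the nearest "on" pixel of each image.
    da = distance_transform_edt(~a)
    db = distance_transform_edt(~b)
    wa, wb = np.minimum(da, cutoff), np.minimum(db, cutoff)
    return float(np.mean(np.abs(wa - wb) ** p) ** (1.0 / p))

# Toy features: identical squares offset by six grid points; a smaller offset gives a smaller delta.
A = np.zeros((60, 60), dtype=bool); A[20:30, 20:30] = True
B = np.zeros((60, 60), dtype=bool); B[20:30, 26:36] = True
print(f"Baddeley delta: {baddeley_delta(A, B):.2f}")
```

Because the metric is small only when two features are close in location, size, and shape, thresholding it is one computationally cheap way to decide which features to merge or match, in the spirit described above.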

Full access