Search Results: 1–10 of 33 items for Author or Editor: Barbara G. Brown
Abstract
Worded forecasts, which generally consist of both verbal and numerical expressions, play an important role in the communication of weather information to the general public. However, relatively few studies of the composition and interpretation of such forecasts have been conducted. Moreover, the studies that have been undertaken to date indicate that many expressions currently used in public forecasts are subject to wide ranges of interpretation (and to misinterpretation) and that the ability of individuals to recall the content of worded forecasts is quite limited. This paper focuses on forecast terminology and the understanding of such terminology in the context of short-range public weather forecasts.
The results of previous studies of forecast terminology (and related issues) are summarized with respect to six basic aspects or facets of worded forecasts. These facets include: 1) events (the values of the meteorological variables); 2) terminology (the words used to describe the events); 3) words versus numbers (the use of verbal and/or numerical expressions); 4) uncertainty (the mode of expression of uncertainty); 5) amount of information (the number of items of information); and 6) content and format (the selection of items of information and their placement). In addition, some related topics are treated briefly, including the impact of verification systems, the role of computer-worded forecasts, the implications of new modes of communication, and the use of weather forecasts.
Some conclusions and inferences that can be drawn from this review of previous work are discussed briefly, and a set of recommendations are presented regarding steps that should be taken to raise the level of understanding and enhance the usefulness of worded forecasts. These recommendations are organized under four headings: 1) studies of public understanding, interpretation, and use; 2) management practices; 3) forecaster training and education; and 4) public education.
Abstract
This paper reports some results of a study in which two groups of individuals—undergraduate students and professional meteorologists at Oregon State University—completed a short questionnaire concerning their interpretations of terminology commonly used in public weather forecasts. The questions related to terms and phrases associated with three elements: 1) cloudiness—fraction of sky cover; 2) precipitation—spatial and/or temporal variations; and 3) temperature—specification of intervals.
The students' responses indicate that cloudiness terms are subject to wide and overlapping ranges of interpretation, although the interpretations of these terms correspond quite well to National Weather Service definitions. Their responses to the precipitation and temperature questions reveal that some confusion exists concerning the meaning of spatial and temporal modifiers in precipitation forecasts and that some individuals interpret temperature ranges in terms of asymmetric intervals. When compared to the students' responses, the meteorologists' responses exhibit narrower ranges of interpretation of the cloudiness terms and less confusion about the meaning of spatial/temporal precipitation modifiers.
The study was not intended to be a definitive analysis of public understanding of forecast terminology. Instead, it should be viewed as a primitive form of the type of forecast-terminology study that must be undertaken in the future. Some implications of this investigation for future work in the area are discussed briefly.
Abstract
Fire-weather forecasts (FWFs) prepared by National Weather Service (NWS) forecasters on an operational basis are traditionally expressed in categorical terms. However, to make rational and optimal use of such forecasts, fire managers need quantitative information concerning the uncertainty inherent in the forecasts. This paper reports the results of two studies related to the quantification of uncertainty in operational and experimental FWFs.
Evaluation of samples of operational categorical FWFs reveals that these forecasts contain considerable uncertainty. The forecasts also exhibit modest but consistent biases which suggest that the forecasters are influenced by the impacts of the relevant events on fire behavior. These results underscore the need for probabilistic FWFs.
The results of a probabilistic fire-weather forecasting experiment indicate that NWS forecasters are able to make quite reliable and reasonably precise credible interval temperature forecasts. However, the experimental relative humidity and wind speed forecasts exhibit considerable overforecasting and minimal skill. Although somewhat disappointing, these results are not too surprising in view of the fact that (a) the forecasters had little, if any, experience in probability forecasting; (b) no feedback was provided to the forecasters during the experimental period; and (c) the experiment was of quite limited duration. More extensive experimental and operational probability forecasting trials as well as user-oriented studies are required to enhance the quality of FWFs and to ensure that the forecasts are used in an optimal manner.
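The reliability of credible-interval forecasts like those in the experiment can be checked empirically: a well-calibrated 75% credible interval should contain the observation about 75% of the time. A minimal sketch with hypothetical forecast data (not taken from the study):

```python
def interval_coverage(intervals, observations):
    """Fraction of observations falling inside their forecast [lo, hi] credible intervals."""
    hits = sum(lo <= obs <= hi for (lo, hi), obs in zip(intervals, observations))
    return hits / len(observations)

# Hypothetical 75% credible-interval temperature forecasts (deg F) and verifying obs
intervals = [(60, 70), (55, 65), (62, 72), (58, 68)]
observed = [65, 54, 70, 60]
coverage = interval_coverage(intervals, observed)  # 3 of 4 intervals verify -> 0.75
```

A coverage rate well below the nominal probability would indicate the overforecasting (intervals too narrow or mislocated) reported for the humidity and wind speed forecasts.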
Abstract
The statistical theory of extreme values is applied to daily minimum and maximum temperature time series in the U.S. Midwest and Southeast. If the spatial pattern in the frequency of extreme temperature events can be explained simply by shifts in location and scale parameters (e.g., the mean and standard deviation) of the underlying temperature distribution, then the area under consideration could be termed a “region.” A regional analysis of temperature extremes suggests that the Type I extreme value distribution is a satisfactory model for extreme high temperatures. On the other hand, the Type III extreme value distribution (possibly with common shape parameter) is often a better model for extreme low temperatures. Hence, our concept of a region is appropriate when considering maximum temperature extremes, and perhaps also for minimum temperature extremes.
Based on this regional analysis, if a temporal climate change were analogous to a spatial relocation, then it would be possible to anticipate how the frequency of extreme temperature events might change. Moreover, if the Type III extreme value distribution were assumed instead of the more common Type I, then the sensitivity of the frequency of extremes to changes in the location and scale parameters would be greater.
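The Type I (Gumbel) fit discussed above can be sketched with a simple method-of-moments estimate, after which exceedance probabilities for extreme events follow directly. The annual maxima below are illustrative, not the Midwest/Southeast series analyzed in the study:

```python
import math

def fit_gumbel_moments(maxima):
    """Method-of-moments fit of the Type I (Gumbel) distribution:
    scale = s * sqrt(6) / pi, location = mean - gamma * scale
    (gamma is the Euler-Mascheroni constant, about 0.5772)."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    scale = math.sqrt(6.0 * var) / math.pi
    loc = mean - 0.5772156649 * scale
    return loc, scale

def gumbel_exceedance(x, loc, scale):
    """P(annual maximum > x) under the fitted Gumbel distribution."""
    return 1.0 - math.exp(-math.exp(-(x - loc) / scale))

# Illustrative annual maximum temperatures (deg C)
maxima = [36.1, 38.4, 35.0, 39.2, 37.5, 36.8, 40.1, 35.9, 38.0, 37.2]
loc, scale = fit_gumbel_moments(maxima)
p40 = gumbel_exceedance(40.0, loc, scale)  # annual probability of exceeding 40 deg C
```

Under a Type III (bounded-tail) model, the same exceedance probabilities respond more sharply to shifts in the location and scale parameters, which is the sensitivity point made in the abstract.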
Abstract
A general approach for modeling wind speed and wind power is described. Because wind power is a function of wind speed, the methodology is based on the development of a model of wind speed. Values of wind power are estimated by applying the appropriate transformations to values of wind speed. The wind speed modeling approach takes into account several basic features of wind speed data, including autocorrelation, non-Gaussian distribution, and diurnal nonstationarity. The positive correlation between consecutive wind speed observations is taken into account by fitting an autoregressive process to wind speed data transformed to make their distribution approximately Gaussian and standardized to remove diurnal nonstationarity.
As an example, the modeling approach is applied to a small set of hourly wind speed data from the Pacific Northwest. Use of the methodology for simulating and forecasting wind speed and wind power is discussed and an illustration of each of these types of applications is presented. To take into account the uncertainty of wind speed and wind power forecasts, techniques are presented for expressing the forecasts either in terms of confidence intervals or in terms of probabilities.
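The three modeling steps described above (transform toward Gaussian, standardize, fit an autoregressive process) can be sketched in a few lines. The square-root transform, the single-mean standardization, and the data are simplifying assumptions for illustration; the paper's treatment standardizes by hour of day to remove diurnal nonstationarity:

```python
import math

def fit_ar1(series):
    """Lag-1 autocorrelation as the AR(1) coefficient (Yule-Walker equations for p = 1)."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    num = sum(dev[t] * dev[t - 1] for t in range(1, n))
    den = sum(d * d for d in dev)
    return num / den

# Illustrative hourly wind speeds (m/s); the study used Pacific Northwest data
speeds = [4.1, 4.8, 5.5, 6.0, 5.2, 4.6, 3.9, 4.4, 5.1, 5.8, 6.3, 5.7]

# 1) Transform toward a Gaussian distribution (square root is one common choice;
#    the paper's exact transformation may differ).
transformed = [math.sqrt(v) for v in speeds]

# 2) Standardize; a full treatment would use hour-of-day means and standard
#    deviations to remove diurnal nonstationarity.
m = sum(transformed) / len(transformed)
s = math.sqrt(sum((x - m) ** 2 for x in transformed) / (len(transformed) - 1))
standardized = [(x - m) / s for x in transformed]

# 3) Fit the AR(1) coefficient to the standardized series; positive phi reflects
#    the persistence of consecutive wind speed observations.
phi = fit_ar1(standardized)
```

Simulation then runs the fitted AR process forward and inverts the standardization and transformation to recover wind speeds, from which wind power follows by the power-curve transformation.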
Abstract
The use of a concept called a precipitation “event” to obtain information regarding certain statistical properties of precipitation time series at a particular location and for a specific application (e.g., for modeling erosion) is described. Exploratory data analysis is used to examine several characteristics of more than 31 years of primitive precipitation events based on hourly precipitation data at Salem, Oregon. A primitive precipitation event is defined as one or more consecutive hours with at least 0.01 inches (0.25 mm) of precipitation. The characteristics of the events that are considered include the duration, magnitude, average intensity and maximum intensity of the event and the number of hours separating consecutive events.
By means of exploratory analysis of the characteristics of the precipitation events, it is demonstrated that the marginal (i.e., unconditional) distributions of the characteristics are positively skewed. Examination of the conditional distributions of some pairs of characteristics indicates the existence of relationships among the characteristics. For example, it is found that average intensity and maximum intensity depend strongly on event duration. The existence and forms of these relationships indicate that an assumption commonly made in stochastic models of hourly precipitation time series, namely that the intensities (i.e., hourly amounts within an event) are independent and identically distributed, is violated. Again using exploratory data analysis, it is shown that the hourly intensities at Salem are, in fact, stochastically increasing and positively associated within a precipitation event.
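The event definition above (one or more consecutive hours with at least 0.25 mm) translates directly into code. A minimal sketch with hypothetical hourly amounts:

```python
def extract_events(hourly, threshold=0.25):
    """Split an hourly precipitation series (mm) into primitive events:
    maximal runs of consecutive hours with at least `threshold` mm."""
    events, current = [], []
    for amount in hourly:
        if amount >= threshold:
            current.append(amount)
        elif current:
            events.append(current)
            current = []
    if current:
        events.append(current)
    return events

def event_characteristics(event):
    """Duration (h), magnitude (total mm), and average/maximum hourly intensity."""
    return {
        "duration": len(event),
        "magnitude": sum(event),
        "avg_intensity": sum(event) / len(event),
        "max_intensity": max(event),
    }

# Hypothetical hourly amounts (mm): two events separated by three dry hours
hourly = [0.0, 0.5, 1.2, 0.8, 0.0, 0.0, 0.0, 2.0, 0.3, 0.0]
events = extract_events(hourly)  # [[0.5, 1.2, 0.8], [2.0, 0.3]]
stats = [event_characteristics(e) for e in events]
```

Plotting, say, average intensity against duration across many such events is the kind of conditional-distribution examination that revealed the dependence reported above.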
Abstract
The purpose of the Federal Aviation Administration’s Icing Forecasting Improvement Program is to conduct research on icing conditions both in flight and on the ground. This paper describes a portion of the in-flight aircraft icing prediction effort through a comprehensive icing prediction and evaluation project conducted by the Research Applications Program (RAP) at the National Center for Atmospheric Research. During this project, in-flight icing potential was forecast using algorithms developed by RAP, the National Weather Service’s National Aviation Weather Advisory Unit, and the Air Force Global Weather Center in conjunction with numerical model data from the Eta, MAPS, and MM5 models. Furthermore, explicit predictions of cloud liquid water were available from the Eta and MM5 models and were also used to forecast icing potential.
To compare the different algorithms subjectively, predicted icing regions and observed pilot reports were viewed simultaneously on an interactive, real-time display. To measure the skill of the icing predictions objectively, a rigorous statistical evaluation was performed to compare the different algorithms (details and results are provided in Part II). Both the subjective and objective comparisons are presented here for a particular case study, whereas results from the entire project are found in Part II. Statistical analysis of 2 months' worth of data suggests that further advances in temperature- and relative-humidity-based algorithms are unlikely. Explicit cloud liquid water predictions, however, show promising results, although such predictions are still relatively new in operational numerical models.
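The temperature- and relative-humidity-based class of algorithms mentioned above can be caricatured as a threshold check at each model level. The thresholds below are purely illustrative and are not those of any operational algorithm:

```python
def simple_icing_potential(temp_c, rh_pct):
    """Crude temperature/RH icing flag: supercooled cloud is plausible when the
    air is below freezing (but not extremely cold) and near saturation.
    The thresholds are hypothetical, chosen only to illustrate the approach."""
    return -20.0 <= temp_c <= 0.0 and rh_pct >= 80.0

# Hypothetical sounding of (temperature in deg C, relative humidity in %) levels
sounding = [(5.0, 70.0), (-3.0, 85.0), (-12.0, 90.0), (-25.0, 95.0)]
flags = [simple_icing_potential(t, rh) for t, rh in sounding]  # [False, True, True, False]
```

The abstract's point is that such proxy-based flags appear to be near their skill ceiling, whereas explicit model cloud liquid water addresses the quantity of interest directly.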
Abstract
A diagnostic approach to forecast verification is described and illustrated. This approach is based on a general framework for forecast verification. It is “diagnostic” in the sense that it focuses on the fundamental characteristics of the forecasts, the corresponding observations, and their relationship.
Three classes of diagnostic verification methods are identified: 1) the joint distribution of forecasts and observations and conditional and marginal distributions associated with factorizations of this joint distribution; 2) summary measures of these joint, conditional, and marginal distributions; and 3) performance measures and their decompositions. Linear regression models that can be used to describe the relationship between forecasts and observations are also presented. Graphical displays are advanced as a means of enhancing the utility of this body of diagnostic verification methodology.
A sample of National Weather Service maximum temperature forecasts (and observations) for Minneapolis, Minnesota, is analyzed to illustrate the use of this methodology. Graphical displays of the basic distributions and various summary measures are employed to obtain insights into distributional characteristics such as central tendency, variability, and asymmetry. The displays also facilitate the comparison of these characteristics among distributions–for example, between distributions involving forecasts and observations, among distributions involving different types of forecasts, and among distributions involving forecasts for different seasons or lead times. Performance measures and their decompositions are shown to provide quantitative information regarding basic dimensions of forecast quality such as bias, accuracy, calibration (or reliability), discrimination, and skill. Information regarding both distributional and performance characteristics is needed by modelers and forecasters concerned with improving forecast quality. Some implications of these diagnostic methods for verification procedures and practices are discussed.
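The summary-measure class of diagnostic methods can be illustrated with a few statistics of matched forecast/observation pairs. The maximum-temperature values below are hypothetical, not the Minneapolis sample analyzed in the paper:

```python
def verification_summary(forecasts, observations):
    """Summary measures of the joint distribution of forecasts and observations:
    mean error (bias), mean absolute error (accuracy), and correlation (association)."""
    n = len(forecasts)
    bias = sum(f - o for f, o in zip(forecasts, observations)) / n
    mae = sum(abs(f - o) for f, o in zip(forecasts, observations)) / n
    fm = sum(forecasts) / n
    om = sum(observations) / n
    cov = sum((f - fm) * (o - om) for f, o in zip(forecasts, observations)) / n
    fvar = sum((f - fm) ** 2 for f in forecasts) / n
    ovar = sum((o - om) ** 2 for o in observations) / n
    corr = cov / (fvar ** 0.5 * ovar ** 0.5)
    return {"bias": bias, "mae": mae, "corr": corr}

# Hypothetical maximum-temperature forecasts and verifying observations (deg F)
fcst = [72, 68, 75, 80, 65]
obs = [70, 69, 73, 82, 66]
summary = verification_summary(fcst, obs)  # bias near zero: no systematic over/underforecasting
```

A full diagnostic analysis would go further, as the abstract describes, by examining the conditional distributions of observations given each forecast value (calibration) and of forecasts given each observation (discrimination).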
Abstract
Several spatial forecast verification methods have been developed that are suited for high-resolution precipitation forecasts. They can account for the spatial coherence of precipitation and give credit to a forecast that does not necessarily match the observation at any particular grid point. The methods were grouped into four broad categories (neighborhood, scale separation, features based, and field deformation) for the Spatial Forecast Verification Methods Intercomparison Project (ICP). Participants were asked to apply their new methods to a set of artificial geometric and perturbed forecasts with prescribed errors, and a set of real forecasts of convective precipitation on a 4-km grid. This paper describes the intercomparison test cases, summarizes results from the geometric cases, and presents subjective scores and traditional scores from the real cases.
All the new methods could detect bias error, and the features-based and field deformation methods were also able to diagnose displacement errors of precipitation features. The best approach for capturing errors in aspect ratio was field deformation. When comparing model forecasts with real cases, the traditional verification scores did not agree with the subjective assessment of the forecasts.
Abstract
Advancements in weather forecast models and their enhanced resolution have led to substantially improved and more realistic-appearing forecasts for some variables. However, traditional verification scores often indicate poor performance because of the increased small-scale variability so that the true quality of the forecasts is not always characterized well. As a result, numerous new methods for verifying these forecasts have been proposed. These new methods can mostly be classified into two overall categories: filtering methods and displacement methods. The filtering methods can be further delineated into neighborhood and scale separation, and the displacement methods can be divided into features based and field deformation. Each method gives considerably more information than the traditional scores, but it is not clear which method(s) should be used for which purpose.
A verification methods intercomparison project has been established in order to glean a better understanding of the proposed methods in terms of their various characteristics and to determine what verification questions each method addresses. The study is ongoing, and preliminary qualitative results for the different approaches applied to different situations are described here. In particular, the various methods and their basic characteristics, similarities, and differences are described. In addition, several questions are addressed regarding the application of the methods and the information that they provide. These questions include (i) how the method(s) inform performance at different scales; (ii) how the methods provide information on location errors; (iii) whether the methods provide information on intensity errors and distributions; (iv) whether the methods provide information on structure errors; (v) whether the approaches have the ability to provide information about hits, misses, and false alarms; (vi) whether the methods do anything that is counterintuitive; (vii) whether the methods have selectable parameters and how sensitive the results are to parameter selection; (viii) whether the results can be easily aggregated across multiple cases; (ix) whether the methods can identify timing errors; and (x) whether confidence intervals and hypothesis tests can be readily computed.
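The neighborhood class of filtering methods can be sketched with the fractions skill score (FSS), which compares event fractions within a window rather than demanding exact grid-point matches. A minimal one-dimensional sketch with made-up grids (the real methods operate on 2-D precipitation fields):

```python
def fractions(binary, half_width):
    """Fraction of event occurrences in a sliding neighborhood around each point."""
    n = len(binary)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        window = binary[lo:hi]
        out.append(sum(window) / len(window))
    return out

def fss(fcst, obs, half_width):
    """Fractions skill score: 1 is perfect, values near 0 indicate no skill at this scale."""
    pf = fractions(fcst, half_width)
    po = fractions(obs, half_width)
    mse = sum((f - o) ** 2 for f, o in zip(pf, po)) / len(pf)
    ref = sum(f * f + o * o for f, o in zip(pf, po)) / len(pf)
    return 1.0 - mse / ref if ref > 0 else 0.0

# A forecast rain area displaced one grid point from the observed area
obs_grid = [0, 0, 1, 1, 1, 0, 0, 0]
fcst_grid = [0, 0, 0, 1, 1, 1, 0, 0]
point_score = fss(fcst_grid, obs_grid, half_width=0)  # exact grid-point matching
broad_score = fss(fcst_grid, obs_grid, half_width=2)  # 5-point neighborhood gives credit
```

Scanning the score across neighborhood sizes answers question (i) above: the scale at which the score becomes acceptable indicates the scale at which the forecast has useful skill.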