Search Results

You are looking at 1–3 of 3 items for Author or Editor: Harald Daan.
Harald Daan

Abstract

In the practice of forecast verification, the results of applying scoring rules appear to depend on the way the predictand is classified. This paper contains an examination of the sensitivity of six scoring rules to the classification. The approach is purely theoretical, in the sense that a Gaussian model for both forecasts and observations is designed. Scoring results for this model are calculated for different scoring rules and different classifications.
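
A purely illustrative sketch of such a model (the correlation, sample size, and class boundaries below are assumptions, not values from the paper): if the forecast signal F and the observation X are bivariate standard normal with correlation rho, then X given F = f is N(rho*f, 1 - rho^2), and cutting the real line at class boundaries turns the predictand into ordered categories with probability forecasts taken from the conditional normal.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    rho = 0.8                                  # assumed forecast-observation correlation
    n = 10_000
    signal = rng.standard_normal(n)            # forecast signal F
    obs = rho * signal + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # X given F

    boundaries = np.array([-0.5, 0.5])         # an assumed 3-class classification
    obs_class = np.digitize(obs, boundaries)   # observed class index: 0, 1, or 2

    # Probability forecast for each class from the conditional normal X | F = f:
    edges = np.concatenate(([-np.inf], boundaries, [np.inf]))
    cond_sd = np.sqrt(1 - rho**2)
    probs = np.diff(norm.cdf((edges[None, :] - rho * signal[:, None]) / cond_sd), axis=1)
    # Each row of probs sums to 1; changing `boundaries` changes the classification.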

The results appear to favor the Ranked Probability Score (RPS), which is almost insensitive to the classification. Further, the categorical scoring rules perform better in this respect than the probabilistic scoring rules, with the exception of the RPS. The other three scoring rules for probability forecasts are therefore not recommended for the verification of forecasts of ordered predictands, that is, when the classification involves more than two classes.
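
Since the RPS is central to this result, a minimal sketch of its computation may help; the three-class forecast below is an illustrative example, not a case from the paper.

    import numpy as np

    def rps(forecast_probs, observed_class):
        """Ranked Probability Score over K ordered classes: squared differences
        between the cumulative forecast distribution and the cumulative (step)
        distribution of the observation, normalized by K - 1 so that 0 is a
        perfect score."""
        p = np.asarray(forecast_probs, dtype=float)
        obs = np.zeros(p.size)
        obs[observed_class] = 1.0            # the observation falls in this class
        return np.sum((np.cumsum(p) - np.cumsum(obs)) ** 2) / (p.size - 1)

    print(rps([0.2, 0.5, 0.3], observed_class=1))   # 0.065: most mass on the
                                                    # observed class gives low RPS

Because the score compares cumulative distributions, probability mass placed in a class adjacent to the observed one costs less than mass placed far away, which is how the RPS respects the ordering of the predictand.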

Allan H. Murphy and Harald Daan

Abstract

Subjective probability forecasts of wind speed, visibility and precipitation events for six-hour periods have been prepared on an experimental basis by forecasters at Zierikzee in The Netherlands since October 1980. Results from the first year of the experiment were encouraging, but they revealed a substantial amount of overforecasting (i.e., a strong tendency for forecast probabilities to exceed observed relative frequencies) for all events, periods and forecasters. Moreover, this overforecasting was reflected in a rapid deterioration in the skill of the forecasts as a function of lead time. In October 1981 the forecasters were given extensive feedback concerning their individual and collective performance during the first year of the experimental program. The purpose of this paper is to compare the results of the first and second years of the experiment.

Evaluation of the forecasts formulated in the first and second years of the Zierikzee experiment reveals marked improvements in reliability (i.e., reductions in overforecasting) from year 1 to year 2, both overall and for most stratifications of the results by event, period or forecaster. For example, the reliability of the forecasts increased for all events and periods and for three of the four forecasters. The improvements in reliability are reflected in substantial increases in the skill of the forecasts from year 1 to year 2, with overall skill scores for the second (first) year for the wind speed, visibility and precipitation forecasts of 25.4% (13.9%), 22.4% (12.4%) and 0.5% (−24.7%), respectively. These improvements in performance are attributed to the feedback provided to the forecasters at the beginning of the second year of the experiment and to the experience in probability forecasting gained by the forecasters during the first year of the program.
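
The two diagnostics at work here, reliability and skill, can be made concrete with a short sketch. The binning scheme and the Brier-score-relative-to-climatology form of the skill score are assumptions about the paper's method; they are standard choices for evaluating probability forecasts of binary events, and the toy numbers are invented for illustration.

    import numpy as np

    def reliability_table(p, o, n_bins=10):
        """Bin the forecast probabilities and compare, per bin, the mean
        forecast probability with the observed relative frequency of the
        event; a mean forecast above the frequency is overforecasting."""
        p, o = np.asarray(p, float), np.asarray(o, float)
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        idx = np.clip(np.digitize(p, bins) - 1, 0, n_bins - 1)
        rows = []
        for b in range(n_bins):
            mask = idx == b
            if mask.any():
                rows.append((p[mask].mean(), o[mask].mean(), int(mask.sum())))
        return rows   # (mean forecast, observed frequency, count) per occupied bin

    def brier_skill_pct(p, o):
        """Skill (%) of the Brier score relative to sample climatology:
        100 * (1 - BS / BS_climatology); negative when worse than climatology."""
        p, o = np.asarray(p, float), np.asarray(o, float)
        return 100.0 * (1.0 - np.mean((p - o) ** 2) / np.mean((o.mean() - o) ** 2))

    # Toy demo of overforecasting: forecasts systematically exceed the frequency.
    p = np.array([0.8, 0.8, 0.6, 0.6, 0.4, 0.4])
    o = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0])
    for mean_p, freq, count in reliability_table(p, o, n_bins=5):
        print(f"mean forecast {mean_p:.2f}  observed frequency {freq:.2f}  n={count}")
    print(f"skill vs. climatology: {brier_skill_pct(p, o):.1f}%")   # negative here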

The paper concludes with a brief discussion of the results and their implications for probability forecasting in meteorology.

Thomas Peterson, Harald Daan, and Philip Jones

To monitor the world's climate adequately, scientists need data from the “best” climate stations exchanged internationally on a real-time basis. To make this vision a reality, a global surface reference climatological station network is being established through the Global Climate Observing System (GCOS). For the initial selection of stations to be considered for inclusion in this GCOS Surface Network, a methodology was developed to rank and compare land surface weather observing stations from around the world from a climate perspective, and then to select the best stations in each region so as to create an evenly distributed network. This initial selection process laid the groundwork for, and facilitates, the subsequent review by World Meteorological Organization member countries, which will be an important step in establishing the GCOS Surface Network.
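
The abstract does not spell out the selection algorithm, so the following is only a hypothetical sketch of the general idea it describes: rank stations by a climate-quality score, then accept them in rank order only if they keep a minimum separation from stations already accepted, which yields an evenly distributed network. All scores, coordinates, names, and the separation threshold below are invented for illustration.

    import math

    def great_circle_km(p1, p2):
        """Haversine great-circle distance in km between (lat, lon) pairs in degrees."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2.0 * 6371.0 * math.asin(math.sqrt(h))

    def select_network(stations, min_sep_km):
        """Greedy selection: walk stations from best to worst score and keep one
        only if it lies at least min_sep_km from every station kept so far,
        trading a little ranking quality for an even spatial spread."""
        kept = []
        for score, pos, name in sorted(stations, reverse=True):
            if all(great_circle_km(pos, kpos) >= min_sep_km for _, kpos, _ in kept):
                kept.append((score, pos, name))
        return [name for _, _, name in kept]

    # Hypothetical candidates: (climate-quality score, (lat, lon), name).
    stations = [
        (0.95, (52.1, 5.2), "De Bilt"),
        (0.90, (52.5, 13.4), "Berlin"),
        (0.88, (48.9, 2.3), "Paris"),
        (0.80, (59.9, 10.8), "Oslo"),
    ]
    print(select_network(stations, min_sep_km=500.0))   # Paris is dropped: too
                                                        # close to De Bilt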
