Abstract
This paper compares several approaches to verifying probabilistic weather forecasts. Skill scores from the linear error in probability space (LEPS) and relative operating characteristic (ROC) approaches are compared with results from an alternative approach that first transforms probabilistic forecasts into yes/no form and then assesses the model's forecasting skill. This approach requires that the category probability produced by the forecast model depart from its random expectation by a specified amount. The classical contingency table is revised to account for the resulting “nonapplicable” forecasts in the skill assessment.
The authors present a verification of hindcasts from an Australian seasonal rainfall forecast model for the winter and summer seasons over the period 1900–95. Overall skill scores from the different approaches show similar features; however, each approach has its own advantages and disadvantages. Using more than one skill assessment scheme is therefore necessary, and it is also of practical value in evaluating the model forecasts and their applications.
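As a rough illustration of the conversion to yes/no form summarized above (not the authors' revised contingency table; the function name, the Heidke skill score, and the `min_departure` threshold are assumptions introduced here), the following Python sketch issues a categorical forecast only when the event probability departs sufficiently from its random expectation, counts the remaining cases as "nonapplicable," and scores the applicable forecasts with a standard 2 × 2 contingency table.

```python
import numpy as np

def categorical_skill(probs, observed, p_clim=0.5, min_departure=0.1):
    """Illustrative sketch: verify probabilistic forecasts after converting
    them to yes/no form.

    probs         : forecast probabilities of the event (1-D array)
    observed      : observed occurrence of the event (1-D array of 0/1)
    p_clim        : random (climatological) expectation of the event
    min_departure : minimum |prob - p_clim| required before a categorical
                    forecast is issued; smaller departures are treated as
                    "nonapplicable" and excluded from the table
    """
    probs = np.asarray(probs, dtype=float)
    observed = np.asarray(observed, dtype=int)

    applicable = np.abs(probs - p_clim) >= min_departure
    yes_forecast = probs >= p_clim

    # 2 x 2 contingency table built from the applicable forecasts only
    hits = np.sum(applicable & yes_forecast & (observed == 1))
    false_alarms = np.sum(applicable & yes_forecast & (observed == 0))
    misses = np.sum(applicable & ~yes_forecast & (observed == 1))
    correct_neg = np.sum(applicable & ~yes_forecast & (observed == 0))

    n = hits + false_alarms + misses + correct_neg
    if n == 0:
        raise ValueError("no applicable forecasts to verify")

    # Heidke skill score: correct forecasts relative to those expected by chance
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_neg + misses) * (correct_neg + false_alarms)) / n
    hss = (hits + correct_neg - expected) / (n - expected)

    return {"table": (int(hits), int(misses), int(false_alarms), int(correct_neg)),
            "nonapplicable": int(np.sum(~applicable)),
            "heidke_skill_score": float(hss)}
```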
Corresponding author address: Dr. H. Zhang, Bureau of Meteorology Research Centre, GPO Box 1289K, VIC 3001, Australia.
Email: h.zhang@bom.gov.au