The Performance of Four Global Models over the Australian Region

L. M. Leslie, G. D. Hess, and E. E. Habjan
Bureau of Meteorology Research Centre, Melbourne, Victoria, Australia

Abstract

National weather services now receive global model forecasts from a number of centers around the world. The existence of these forecasts raises the general question of how the operational forecaster can best use the information that the ensemble of predictions provides. The Australian Bureau of Meteorology receives four global model forecasts in real time, but at present their performance is evaluated almost entirely in a subjective manner.

In this study, in addition to the standard objective measures (for example, bias and rms error), several alternative objective measures of model performance are calculated (such as the temporal forecast consistency of a given model and divergence between different models), in an attempt to provide the forecasters with more effective tools for model assessment. Both kinds of measures are applied to a two-year dataset (October 1989 to September 1991) of daily sea level pressure predictions from the four models.
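The paper does not give explicit formulas in the abstract, but the measures it names can be sketched in a few lines. The following is an illustrative implementation only, assuming each forecast is a 2-D gridpoint array of sea level pressure; the function names and the multi-model-mean form of the divergence measure are assumptions, not taken from the paper.

```python
import numpy as np

def bias(forecast, analysis):
    """Mean (forecast - analysis) error over the grid; positive = overprediction."""
    return np.mean(forecast - analysis)

def rmse(forecast, analysis):
    """Root-mean-square error of a forecast against the verifying analysis."""
    return np.sqrt(np.mean((forecast - analysis) ** 2))

def consistency(forecast_today, forecast_yesterday):
    """Temporal consistency of one model: rms difference between successive
    forecasts valid at the same time (smaller = more consistent)."""
    return np.sqrt(np.mean((forecast_today - forecast_yesterday) ** 2))

def divergence(forecasts):
    """Inter-model divergence: rms spread of the models about their
    multi-model mean, pooled over models and gridpoints."""
    stack = np.stack(forecasts)       # shape (n_models, ny, nx)
    mean = stack.mean(axis=0)         # multi-model mean field
    return np.sqrt(np.mean((stack - mean) ** 2))
```

Computed gridpoint by gridpoint rather than pooled, the same quantities yield the regional maps of bias, consistency, and divergence discussed below.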

There are two main outcomes of this study. First, the current subjective system of ranking the various models has been augmented with more objectively based performance measures. Second, these performance statistics provide guidance to the operational forecasters in a number of ways: geographical regions with large systematic errors can be identified for each model; case studies are presented that illustrate the utility of the regional maps of bias, consistency, and divergence computed in this study; and, finally, there are regions of uncertainty where no model is consistently superior, so forecasts over these regions should be treated with caution.
