Panel Discussion on Forecast Verification

(Held by the District of Columbia Branch on December 12, 1951)

Mr. Roger A. Allen, Chief, Short Range Forecast Development Section, U. S. Weather Bureau, Washington, D. C.

Mr. Glenn W. Brier, Chief, Meteorological Statistics Section, U. S. Weather Bureau, Washington, D. C.

Mr. Irving I. Gringorten, Project Scientist, Atmospheric Analysis Laboratory, Geophysics Research Division, Air Force Cambridge Research Center, Cambridge, Mass.

Captain J. C. S. McKillip, Officer-in-Charge, Navy Weather Central, Washington, D. C. (Now Staff, Cdr. in Chief, U. S. Naval Forces, N. E. Atlantic and Mediterranean)

Mr. Conrad P. Mook, Research Forecaster, U. S. Weather Bureau, Washington National Airport, D. C.

Dr. George P. Wadsworth, Professor of Statistics, M.I.T., Cambridge, Mass.

Mr. Walter G. Leight, Extended Forecast Section, U. S. Weather Bureau, Washington, D. C.

* This symposium and discussion was provided with a few key questions that were distributed to the Panel Members and Moderator in advance of the meeting and to the audience at the meeting. Since these questions to some extent set the course of the argument, it may be of interest to reproduce them herewith:

    What Are The Specific Issues?

  • “1. What are the purposes of forecast verification? How can a statement of objective be used in devising a verification system, that is, how does one proceed after selecting a purpose to specify the verification system which will accomplish that purpose?

  • “2. What are the major principles that should be followed in devising a verification system? What are the characteristics of a good scoring system? What major pitfalls exist which might invalidate the result of verification unless special care is used to avoid them?

  • “3. What are the objectives of verifying prognostic charts? Does the verification of prognostic charts present problems which are basically different from those encountered in verifying weather elements directly, and if so, what are the problems? What steps can be taken to verify prognostic charts in terms of the parameters which forecasters use?

  • “4. What is meant by the ‘skill’ of a forecaster? What forecasting skills need to be measured, both for weather and prognostic charts, for scientific purposes such as providing information for use in forecast improvement? For administrative purposes such as promotion action? Other?

  • “5. What are ‘blind’ forecasts? What are some ways of obtaining ‘blind’ forecasts against which to compare and assess the value of real forecasts? Are there ways of making ‘blind’ forecasts which are suitable for general application to different verification problems? How does one select a ‘zero point’ or ‘no skill’ point for use in verification comparisons?

  • “6. Must all verification be quantitative and completely objective? Are useful results likely to be obtained by having forecasts rated by boards of experts? For what types of forecasts and for what purposes would this be desirable? Can qualitative worded forecasts be satisfactorily verified?

  • “7. Is verification as a means of ‘quality-control’ desirable? How can forecasts be verified and the results used for this purpose? Is competition among forecasters good for the science of meteorology? What dangers exist and what safeguards are needed when competitive forecasts are being made?

  • “8. How can forecasts be verified so as to measure long-term changes in forecast ability, that is over periods of 5 years up to 50 years or more? Is this a desirable goal? Can it be done with forecast records now available?

  • “9. Can allowance be made, in practice, for variable factors which are not uniform for all participating forecasters, such as the varying difficulty of the weather situation, different amounts of time available for making the forecast, and the different emphasis required of the forecaster on some days than on others? How? Can valid systems be devised for comparing forecasters in different regions? How?

  • “10. Should a forecaster always know that his forecasts are being verified? Should he know, before he makes the forecasts, how they are to be verified? What the verification is to be used for?

  • “11. What are the relative advantages and disadvantages of verification of routine operational forecasts as against experimental forecasts? Should a forecaster ever be asked to make two forecasts of the same element at the same time, for two different purposes, such as for operational utility and for verification?

  • “12. Why does so much disagreement exist concerning verification? Why are there so many pet schemes, so much denouncing of existing systems, so little agreement on how forecasts should be verified?”
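Questions 2 and 5 above ask what a good scoring system looks like and how to fix a ‘no skill’ reference point for comparison. One concrete, well-known answer is the probability score proposed by panelist Glenn W. Brier (1950), with climatology as the ‘blind’ baseline. A minimal sketch follows; the forecast numbers are invented purely for illustration:

```python
def brier_score(forecast_probs, outcomes):
    """Mean squared difference between forecast probabilities and
    observed outcomes (1 = event occurred, 0 = it did not).
    0 is a perfect score; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecast_probs, outcomes)) / len(outcomes)

def skill_score(score, reference_score):
    """Skill relative to a no-skill reference forecast:
    1 = perfect, 0 = no better than the reference, negative = worse."""
    return 1.0 - score / reference_score

# Four probability-of-rain forecasts versus what happened:
probs    = [0.9, 0.8, 0.2, 0.6]
observed = [1,   1,   0,   1]
bs = brier_score(probs, observed)          # (0.01 + 0.04 + 0.04 + 0.16) / 4 = 0.0625

# 'Blind' reference: always forecast the observed climatological frequency (3/4).
climo_bs = brier_score([0.75] * 4, observed)   # 0.1875
skill = skill_score(bs, climo_bs)              # 1 - 0.0625/0.1875 ≈ 0.667
```

Here the zero point of the skill measure is the score of a forecaster who mechanically issues the climatological frequency, one of the ‘blind’ baselines the panel debates below.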
