
The Benefits and Challenges of Predictive Interval Forecasts and Verification Graphics for End Users

  • 1 University of Washington, Seattle, Washington
  • 2 United States Military Academy, West Point, New York
  • 3 University of Washington, Seattle, Washington

Abstract

Two behavioral experiments tested the use of predictive interval forecasts and verification graphics by nonexpert end users. Most participants were able to use a simple key to understand a predictive interval graphic, showing a bracket to indicate the upper and lower boundary values of the 80% predictive interval for temperature. In the context of a freeze warning task, the predictive interval forecast narrowed user expectations and alerted participants to the possibility of colder temperatures. As a result, participants using predictive intervals took precautionary action more often than did a control group using deterministic forecasts. Moreover, with simple keys, participants easily understood both deterministic and predictive interval verification graphics, employing them to correctly identify better-performing forecast periods. Importantly, participants with the predictive interval were more likely than those with the deterministic forecast to say they would use that forecast type in the future, demonstrating increased trust. Verification graphics also increased trust in both predictive interval and deterministic forecasts when the effects were isolated from familiarity in the second study. These results suggest that forecasts that include an uncertainty estimate might maintain user trust even when the single-value forecast fails to verify, an effect that may be enhanced by explicit verification data.

Corresponding author address: Susan Joslyn, Department of Psychology, University of Washington, P.O. Box 351525, Seattle, WA 98195. E-mail: susanj@uw.edu
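To make the 80% predictive interval concrete: such an interval is commonly derived by trimming equal tails from a calibrated forecast ensemble, so that the central 80% of members fall between the lower and upper bounds. The sketch below is illustrative only, assuming a nearest-rank, equal-tail convention; the function name, trimming rule, and ensemble values are hypothetical and are not taken from the study.

```python
def predictive_interval(ensemble, coverage=0.80):
    """Return (lower, upper) bounds spanning the central `coverage`
    fraction of a forecast ensemble, trimming equal tails from each end."""
    xs = sorted(ensemble)
    n = len(xs)
    # members trimmed per tail: e.g. 1 of 10 on each side for an 80% interval
    k = round(((1.0 - coverage) / 2.0) * n)
    k = min(k, (n - 1) // 2)  # always keep at least one member in the middle
    return xs[k], xs[n - 1 - k]

# Illustrative ensemble of ten overnight-low temperature forecasts (deg F)
members = [28, 30, 31, 31, 32, 33, 33, 34, 35, 38]
lower, upper = predictive_interval(members)
print(lower, upper)  # → 30 35
```

Under this convention, the bracket in the forecast graphic would span 30–35°F, with a 10% chance of the verifying temperature falling below the bracket and a 10% chance of it falling above.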
