Results are presented from a probability-based weather forecast contest. Rather than evaluating the absolute errors of nonprobabilistic temperature and precipitation forecasts, as is common in other contests, this contest evaluated the skill of specifying probabilities for precipitation amounts and temperature intervals. To forecast optimally for the contest, both accurate forecasts and an accurate determination of one's uncertainty about the outcome were necessary. The contest results indicated that forecasters across a range of education levels produced skillful forecasts of temperature and precipitation relative to persistence and climatology. However, forecasters in this contest were not successful in assessing the day-to-day uncertainty of their maximum and minimum temperature forecasts, as measured by the correlation of interval width with absolute error. Though previous experiments have shown more optimistic results, the seasonal variation of forecast uncertainty can account for much of the observed correlation, suggesting that day-to-day assessment of forecast uncertainty may be more difficult than previously believed. It is argued that objective methodologies should be developed to quantify uncertainty in forecasts.
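The uncertainty-assessment measure described above can be sketched as follows. This is an illustrative example with hypothetical data, not the contest's actual scoring code: it treats each forecast as a temperature interval, takes the interval width as the forecaster's stated uncertainty, and computes the Pearson correlation between width and the absolute error of the interval midpoint.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def uncertainty_skill(intervals, observations):
    """Correlation of forecast interval width with absolute error.

    intervals: list of (low, high) temperature bounds, one per day.
    observations: verifying temperatures for the same days.
    A value near +1 means wide intervals coincided with large errors,
    i.e., the forecaster assessed day-to-day uncertainty well.
    """
    widths = [hi - lo for lo, hi in intervals]
    midpoints = [(lo + hi) / 2 for lo, hi in intervals]
    abs_errors = [abs(obs - mid) for obs, mid in zip(observations, midpoints)]
    return pearson(widths, abs_errors)

# Hypothetical week of maximum-temperature forecasts (degrees F):
# if uncertainty is assessed well, wider intervals accompany larger errors.
intervals = [(60, 64), (55, 65), (58, 62), (50, 64), (61, 63), (57, 67), (59, 63)]
obs = [63, 51, 60, 70, 62, 49, 60]
print(round(uncertainty_skill(intervals, obs), 2))
```

Note that a seasonal cycle alone can inflate this correlation (e.g., intervals and errors are both systematically wider in winter), which is the confounding effect the abstract points to; isolating day-to-day skill would require removing that seasonal signal first.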