The most commonly used measures for verifying forecasts or simulations of continuous variables are root-mean-squared error (rmse) and anomaly correlation. Some disadvantages of these measures are demonstrated. Existing assessment systems for categorical forecasts are discussed briefly. An alternative, unbiased verification measure, known as the linear error in probability space (LEPS) score, is developed. The LEPS score may be used to assess forecasts of both continuous and categorical variables and has some advantages over rmse and anomaly correlation. The properties of the version of LEPS discussed here are reviewed and compared with those of an earlier form of LEPS. A skill-score version of LEPS may be used to obtain an overall measure of the skill of a number of forecasts. This skill score is biased, but the bias is negligible if the number of effectively independent forecasts or simulations is large. Some examples are given in which the LEPS skill score is compared with rmse and anomaly correlation.
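The abstract does not spell out the measures themselves, so the following is only an illustrative sketch. It assumes the standard definitions of rmse and anomaly correlation, and one published form of the revised LEPS score, 3(1 − |Pf − Po| + Pf² − Pf + Po² − Po) − 1, where Pf and Po are the cumulative probabilities of the forecast and observed values under the climatological distribution; the function names and the empirical-CDF estimate are illustrative choices, not taken from the paper.

```python
import math

def rmse(forecasts, observations):
    # Root-mean-squared error over paired forecasts and observations.
    n = len(forecasts)
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecasts, observations)) / n)

def anomaly_correlation(forecasts, observations, clim_mean):
    # Pearson correlation of anomalies taken relative to a climatological mean.
    fa = [f - clim_mean for f in forecasts]
    oa = [o - clim_mean for o in observations]
    num = sum(x * y for x, y in zip(fa, oa))
    den = math.sqrt(sum(x * x for x in fa) * sum(y * y for y in oa))
    return num / den

def cdf_position(value, climatology_sample):
    # Empirical cumulative probability of `value` within a climatological sample
    # (a simple estimate of the "probability space" position used by LEPS).
    return sum(1 for c in climatology_sample if c <= value) / len(climatology_sample)

def leps(pf, po):
    # One published form of the revised LEPS score for a single forecast,
    # with pf, po the cumulative probabilities of forecast and observation.
    # Assumed here for illustration; see the paper for the definitive form.
    return 3 * (1 - abs(pf - po) + pf ** 2 - pf + po ** 2 - po) - 1
```

As a quick check of the behaviour described in the abstract: a perfect forecast (pf equal to po) earns a positive score, with larger rewards for correctly forecasting extremes than the median, while forecasting the opposite tail of the distribution is penalised.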