Search Results

You are looking at 1–10 of 16,362 items for:

  • Forecast verification
  • All content
Simon J. Mason and Andreas P. Weigel

idea of forecasts being “correct” or “incorrect” is considered a badly formulated question to start with. The forecaster is therefore tempted to present an array of apologies before presenting any verification statistic. However, the danger then arises of leaving the impression that forecast verification practitioners purposely obfuscate the whole problem to hide the fact that weather and climate forecasts are truly as bad as popularly believed. Unfortunately, as forecasters, we cannot simply

Full access
Jonathan R. Moskaitis

1. Introduction It is common practice in the atmospheric sciences to evaluate a set of forecasts by using scalar summary measures to quantify specific attributes of the quality of the relationship between the forecasts and the corresponding observations (i.e., forecast quality). This approach is known as “measures oriented” verification (Murphy 1997). Familiar examples of summary measures include mean absolute error and mean error, which are used to evaluate accuracy and unconditional bias

Full access
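
As a minimal sketch of the “measures oriented” approach described in the Moskaitis excerpt above, the snippet below computes the two summary measures it names, mean absolute error (accuracy) and mean error (unconditional bias); the array names and values are illustrative only, not from the paper.

```python
import numpy as np

# Paired forecasts and observations (illustrative values, not from the paper).
forecasts = np.array([12.1, 15.4, 9.8, 20.3, 17.6])
observations = np.array([11.0, 16.2, 10.5, 18.9, 19.1])

errors = forecasts - observations

# Mean absolute error: a scalar summary of accuracy.
mae = np.mean(np.abs(errors))

# Mean error: a scalar summary of unconditional bias.
mean_error = np.mean(errors)

print(f"MAE        = {mae:.2f}")
print(f"Mean error = {mean_error:+.2f}")
```
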
Eric Gilleland, Thomas C. M. Lee, John Halley Gotway, R. G. Bullock, and Barbara G. Brown

1. Introduction A growing interest in quantitative precipitation forecasts (QPF) from industry, agriculture, government, and other sectors has created a demand for more detailed rainfall predictions. Rainfall is one of the most difficult weather elements to predict correctly (Ebert et al. 2003). Traditional verification scores can give misleading or noninformative results because of their inability to distinguish sources of error and their high sensitivity to errors caused by even minor

Full access
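
The “traditional verification scores” referred to in the Gilleland et al. excerpt are typically threshold-based categorical scores computed from a contingency table of hits, misses, and false alarms. A minimal sketch with invented field values and an arbitrary 1-mm threshold; the critical success index shown here is one common example, not necessarily the score the paper has in mind.

```python
import numpy as np

# Hypothetical gridded rain accumulations (mm); values are illustrative only.
forecast = np.array([[0.0, 2.5, 4.0],
                     [0.2, 3.1, 0.0],
                     [0.0, 0.0, 1.8]])
observed = np.array([[1.2, 0.0, 3.5],
                     [0.0, 2.9, 0.4],
                     [0.0, 1.1, 0.0]])

threshold = 1.0  # arbitrary event threshold (mm)
f_event = forecast >= threshold
o_event = observed >= threshold

hits         = np.sum(f_event & o_event)
false_alarms = np.sum(f_event & ~o_event)
misses       = np.sum(~f_event & o_event)

# Critical success index (threat score): it cannot say *why* the forecast
# failed -- a small displacement and a complete bust are scored alike.
csi = hits / (hits + misses + false_alarms)
print(f"hits={hits}, misses={misses}, false alarms={false_alarms}, CSI={csi:.2f}")
```
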
Marion Mittermaier and Nigel Roberts

1. Introduction What is a good precipitation forecast? Murphy (1993) describes the general characteristics of a good forecast in terms of consistency, quality (or goodness), and value. In addition, the forecast skill assessed by some measure should be consistent with forecaster judgment, and a mismatch may be an indication that the verification score is not performing as it should. We know that higher spatial resolution precipitation forecasts look more realistic (Mass et al. 2002; Done et

Full access
Sim D. Aberson

poststorm best-track intensities if the system is classified in the best track as a tropical or subtropical cyclone at the initial and verifying times. Some intensity forecasts, those in which either the true or forecast cyclone dissipated but the other did not, are therefore not verified. 1 Figure 1 shows intensity forecasts from two such cases: Tropical Storm Stan at 0000 UTC 4 October 2005 was forecast by the various intensity models and OFCL to reach an intensity of between 80 and 90 kt by 48 h

Full access
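
The Aberson excerpt describes a verification rule: an intensity forecast is verified only when the best track classifies the system as a tropical or subtropical cyclone at both the initial and the verifying times. A minimal sketch of that filter using a hypothetical best-track lookup; the storm identifier, time strings, and data structure are assumptions for illustration, not the operational format.

```python
# Hypothetical best-track status codes by (storm_id, time); the identifiers
# and entries below are invented for illustration.
TROPICAL_OR_SUBTROPICAL = {"TD", "TS", "HU", "SD", "SS"}

best_track_status = {
    ("STORM_X", "2005-10-04 00:00"): "TS",   # tropical storm at initial time
    ("STORM_X", "2005-10-06 00:00"): "LO",   # remnant low at verifying time
}

def verifiable(storm_id, initial_time, verifying_time):
    """Verify an intensity forecast only if the best track classifies the
    system as a tropical or subtropical cyclone at both times."""
    start = best_track_status.get((storm_id, initial_time))
    end = best_track_status.get((storm_id, verifying_time))
    return start in TROPICAL_OR_SUBTROPICAL and end in TROPICAL_OR_SUBTROPICAL

# This 48-h forecast would be excluded from verification:
print(verifiable("STORM_X", "2005-10-04 00:00", "2005-10-06 00:00"))  # False
```
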
Hong Guan and Yuejian Zhu

each ensemble member as a percentile of the climatological distribution. Based on these products, users could build various ANFs, such as greater than 1-, 2-, and 3-sigma standard deviation ANFs for various meteorological elements. Furthermore, by comparing the forecast PDF with the climatological PDF, the users could easily identify an extreme weather event. In this paper, we develop a verification methodology for comparing and evaluating the extreme weather forecast products from the ANF and EFI

Full access
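
The Guan and Zhu excerpt mentions expressing each ensemble member as a percentile of the climatological distribution and building anomaly forecast (ANF) products such as exceedances of 1-, 2-, and 3-sigma thresholds. A rough sketch of those two steps on synthetic stand-in data; none of the numbers or variable names come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic climatology and a synthetic five-member forecast for one element
# at one location; all values are stand-ins, not products from the paper.
climatology = rng.normal(loc=15.0, scale=4.0, size=10_000)
ensemble = np.array([19.2, 22.5, 24.1, 18.7, 25.3])

clim_mean = climatology.mean()
clim_std = climatology.std()

# Each ensemble member expressed as a percentile of the climatological distribution.
percentiles = np.searchsorted(np.sort(climatology), ensemble) / climatology.size * 100
print("member percentiles of climatology:", np.round(percentiles, 1))

# Member fraction whose anomaly exceeds the 1-, 2-, and 3-sigma thresholds.
anomalies = ensemble - clim_mean
for k in (1, 2, 3):
    prob = np.mean(anomalies > k * clim_std)
    print(f"P(anomaly > {k} sigma) = {prob:.2f}")
```
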
Riccardo Benedetti

1. Introduction The importance of assessing the quality of forecasts is widely recognized by both the numerical weather prediction community and synoptic-empirical meteorologists, who base their predictions on experienced analysis of large-scale atmospheric motions. As a matter of fact, in the absence of a reliable verification scheme, the comparison between different forecast methods as well as the real effectiveness of corrections applied to a given procedure become

Full access
Andreas P. Weigel, Daniel Baggenstos, Mark A. Liniger, Frédéric Vitart, and Christof Appenzeller

skill. It is therefore one of the objectives of this study to define and set up a fully probabilistic verification context that corrects for the problem of intrinsic unreliability without ignoring true (i.e., model-induced) reliability deficits, and that allows small hindcast ensembles and large forecast ensembles to be considered jointly in the verification. Building upon the work of Vitart (2004), we then apply this verification approach to systematically assess the skill of probabilistic MOFC near

Full access
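
The Weigel et al. excerpt concerns a verification context that corrects for the intrinsic unreliability of small ensembles without masking genuine reliability deficits. One published way to do this for a binary event is the debiased Brier skill score, in which the climatological reference score is inflated by the expected finite-ensemble penalty p(1 − p)/M; the sketch below illustrates that idea on synthetic data and is not claimed to be the exact methodology of this particular study.

```python
import numpy as np

rng = np.random.default_rng(1)

M = 5          # small (hindcast-sized) ensemble
N = 2000       # number of forecast cases (synthetic)
p_clim = 0.3   # climatological probability of the binary event

# Synthetic setup: observations and ensemble members are all drawn from
# climatology, i.e. a no-skill but intrinsically well-behaved forecast system.
obs = rng.random(N) < p_clim
members = rng.random((N, M)) < p_clim
forecast_prob = members.mean(axis=1)

bs = np.mean((forecast_prob - obs) ** 2)   # Brier score of the small ensemble
bs_ref = np.mean((p_clim - obs) ** 2)      # climatological reference score

bss = 1.0 - bs / bs_ref                    # conventional BSS: negative (~ -1/M)
d = p_clim * (1.0 - p_clim) / M            # expected finite-ensemble penalty
bss_d = 1.0 - bs / (bs_ref + d)            # debiased BSS: close to zero

print(f"BSS   = {bss:+.3f}")
print(f"BSS_D = {bss_d:+.3f}")
```
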
Jingzhuo Wang, Jing Chen, Jun Du, Yutao Zhang, Yu Xia, and Guo Deng

error. They further demonstrated that including a model-error representation (stochastic physics) in an EPS can reduce model bias. Rodwell et al. (2016) even tried to understand how systematic and random errors contribute to forecast reliability. However, all these past studies have focused on improving forecast performance, especially reliability, rather than verifying an EPS. The purpose of this study is to focus on how to correctly verify an EPS. Since an EPS is designed to deal with a random

Open access
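
The Wang et al. excerpt turns on the premise that an EPS is designed to represent random error. One common diagnostic of whether it does so, offered here only as a general illustration and not as the method of this study, is to compare the ensemble spread with the error of the ensemble mean.

```python
import numpy as np

rng = np.random.default_rng(2)

n_cases, n_members = 1000, 20
sigma = 1.0  # standard deviation of the unpredictable (random) error component

# Synthetic, perfectly calibrated setup: the truth and every ensemble member
# are independent draws about the same predictable signal.
signal = rng.normal(size=n_cases)
truth = signal + rng.normal(scale=sigma, size=n_cases)
ensemble = signal[:, None] + rng.normal(scale=sigma, size=(n_cases, n_members))

ens_mean = ensemble.mean(axis=1)
rmse = np.sqrt(np.mean((ens_mean - truth) ** 2))

# Average ensemble spread (standard deviation about the ensemble mean).
spread = np.sqrt(np.mean(ensemble.var(axis=1, ddof=1)))

# For a statistically consistent EPS the two numbers should match,
# up to a small finite-ensemble factor of sqrt(1 + 1/n_members).
print(f"RMSE of ensemble mean = {rmse:.2f}")
print(f"Mean ensemble spread  = {spread:.2f}")
```
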
Eric Gilleland, David Ahijevych, Barbara G. Brown, Barbara Casati, and Elizabeth E. Ebert

1. Introduction Small-scale variability in high-resolution weather forecasts presents a challenging problem for verifying forecast performance. Traditional verification scores provide incomplete information about the quality of a forecast because they only make comparisons on a point-to-point basis with no regard to spatial information [Baldwin and Kain (2006); Casati et al. (2008); see Wilks (2005) and Jolliffe and Stephenson (2003) for more on traditional verification scores]. For

Full access
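
The excerpt above notes that traditional scores compare forecast and observation only point by point. A small worked illustration of the consequence, using invented one-dimensional fields: a forecast that places a rain feature one grid box away from where it was observed incurs both a miss and a false alarm (the so-called double penalty), so it scores worse by pointwise RMSE than a forecast of no rain at all.

```python
import numpy as np

# Toy one-dimensional "fields" (values invented): an observed rain feature,
# the same feature forecast one grid box to the right, and no rain at all.
observed  = np.array([0.0, 0.0, 5.0, 0.0, 0.0, 0.0])
displaced = np.array([0.0, 0.0, 0.0, 5.0, 0.0, 0.0])
no_rain   = np.zeros_like(observed)

def rmse(forecast, obs):
    return np.sqrt(np.mean((forecast - obs) ** 2))

# The displaced feature is penalized twice (a miss where rain was observed and
# a false alarm where it was placed), so the useless no-rain forecast wins.
print(f"RMSE, feature displaced by one box = {rmse(displaced, observed):.2f}")  # 2.89
print(f"RMSE, no rain forecast at all      = {rmse(no_rain, observed):.2f}")    # 2.04
```
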