Search Results

You are looking at 1-6 of 6 items for:

  • Regional effects
  • Spatial Forecast Verification Methods Inter-Comparison Project (ICP)
  • All content
Heini Wernli, Christiane Hofmann, and Matthias Zimmer

objects in the observational data and model forecast (which should be available on the identical grid), and finally the calculation of the three components of SAL according to the equations given in section 2 of WPHF (see also section 2a below). So far, SAL has been applied to synthetic precipitation fields and a large set of operational QPFs from the nonhydrostatic regional model from the Consortium for Small-Scale Modeling (COSMO) with a horizontal resolution of 7 km for the Elbe catchment

Full access
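
The SAL components are straightforward to illustrate for the amplitude (A) and the first part of the location (L) component, which depend only on domain means and centers of mass. The sketch below (Python with NumPy; function and argument names are illustrative, not taken from WPHF) follows those definitions for two fields on an identical grid; the structure (S) component additionally requires identifying precipitation objects and is not shown.

    import numpy as np

    def sal_a_and_l1(r_mod, r_obs, dx_km=7.0):
        """Sketch of the SAL amplitude (A) and first location (L1) components.

        r_mod, r_obs : 2D precipitation fields on the identical grid (mm)
        dx_km        : grid spacing, e.g. 7 km for the COSMO setup mentioned above
        """
        # Amplitude: normalized difference of the domain-mean precipitation,
        # bounded by [-2, 2], with A = 0 for a perfect amplitude forecast.
        d_mod, d_obs = r_mod.mean(), r_obs.mean()
        a = (d_mod - d_obs) / (0.5 * (d_mod + d_obs))

        # L1: distance between the precipitation centers of mass of the two
        # fields, scaled by the largest distance across the domain (its diagonal).
        def center_of_mass(field):
            yy, xx = np.indices(field.shape)
            return np.array([(yy * field).sum(), (xx * field).sum()]) / field.sum()

        ny, nx = r_obs.shape
        d_max = np.hypot(ny - 1, nx - 1) * dx_km
        l1 = np.linalg.norm((center_of_mass(r_mod) - center_of_mass(r_obs)) * dx_km) / d_max

        return a, l1
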
Christopher A. Davis, Barbara G. Brown, Randy Bullock, and John Halley-Gotway

indicated by A09, the traditional GSS values were typically around 0.1. One reason for the difference in the two sets of values is that spatial overlap of forecast and observed objects is not required to achieve a positive score in the object-based GSS, but such overlap is essential in the traditional application of the GSS. Regional performance of both models was also examined using the object-based GSS. We divided the forecast domain into nine regions (Fig. 9) based on tertiles of the

Full access
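
For reference, the GSS discussed here is the Gilbert skill score (equivalent to the equitable threat score). In the traditional gridpoint form a hit requires the thresholded forecast and observation to coincide at the same grid point, which is exactly the requirement the object-based version relaxes. A minimal sketch of the traditional gridpoint calculation (Python with NumPy; names are illustrative):

    import numpy as np

    def gilbert_skill_score(fcst, obs, threshold=1.0):
        """Traditional gridpoint GSS/ETS from two precipitation fields (sketch)."""
        f = fcst >= threshold          # forecast "yes" grid points
        o = obs >= threshold           # observed "yes" grid points

        hits = np.sum(f & o)           # spatial overlap is required for a hit here
        misses = np.sum(~f & o)
        false_alarms = np.sum(f & ~o)
        n = f.size

        # Hits expected by chance, given the marginal event frequencies.
        hits_random = (hits + misses) * (hits + false_alarms) / n

        return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

In the object-based variant described in the excerpt, the same formula is applied to counts of matched and unmatched objects rather than grid points, which is why positive scores are possible without any spatial overlap.
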
Jason E. Nachamkin

the convective cores. The case-to-case error growth tended to slow once the diagonal translations exceeded 12 grid points or 48 km, suggesting a limit to the error sensitivity to the CBD. At larger box sizes the CBD traces diverged as the translated precipitation forecasts corrected the composite bias at various distances. Again the CBD was reduced overall at large box sizes due to the effects of correct no-precipitation forecasts. Case VI exhibited the worst performance at small box sizes due to

Full access
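
The sensitivity experiment described here translates the precipitation forecasts diagonally by a chosen number of grid points (48 km for 12 points implies a 4 km grid) and re-examines the bias within evaluation boxes of increasing size. The fragment below sketches only that mechanical step, not Nachamkin's composite method or the CBD itself; the shift helper and the box-mean bias are illustrative stand-ins.

    import numpy as np

    def diagonal_shift(field, n_points):
        """Translate a 2D field diagonally by n_points grid cells (wrapping at the
        edges purely for simplicity; a real experiment would treat the boundary)."""
        return np.roll(np.roll(field, n_points, axis=0), n_points, axis=1)

    def box_mean_bias(fcst, obs, box):
        """Mean forecast-minus-observed difference inside a (2*box + 1)^2 window
        centered on the domain midpoint: an illustrative stand-in for the
        box-size-dependent bias traces discussed above."""
        cy, cx = obs.shape[0] // 2, obs.shape[1] // 2
        window = np.s_[cy - box:cy + box + 1, cx - box:cx + box + 1]
        return float((fcst[window] - obs[window]).mean())

    # e.g., trace the bias in a fixed box as the forecast is translated diagonally:
    # [box_mean_bias(diagonal_shift(fcst, s), obs, box=25) for s in range(0, 25, 4)]
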
Valliappa Lakshmanan and John S. Kain

results in this paper, unless explicitly stated otherwise, all use γ = 1. The need for, and the effects of, this intensity correction can be illustrated by using the artificial dataset shown in Fig. 2. Without intensity correction (see Fig. 2b), the GMM fit simply tries to get all the nonzero pixel locations correct, and the resulting GMM fit is simply a symmetric ellipse. With low values of γ (see Fig. 2c), because there are many more low-intensity pixels than high

Full access
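
The intensity correction referred to here weights each pixel by its intensity raised to the power γ before the Gaussian mixture model is fit, so that small γ lets the many low-intensity pixels dominate while larger γ pulls the fitted Gaussians toward the high-intensity cores. One simple way to approximate such weighting is sketched below (Python with NumPy and scikit-learn; the sample-replication trick is an illustrative workaround, not the authors' implementation).

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_intensity_weighted_gmm(field, gamma=1.0, n_components=3, scale=10):
        """Fit a GMM to pixel coordinates, weighting each pixel by intensity**gamma.

        GaussianMixture has no sample-weight argument, so each nonzero pixel's
        (row, col) coordinate is replicated roughly in proportion to its weight.
        """
        rows, cols = np.nonzero(field)
        weights = field[rows, cols].astype(float) ** gamma
        counts = np.maximum(1, np.round(scale * weights / weights.max()).astype(int))

        samples = np.repeat(np.column_stack([rows, cols]), counts, axis=0)
        return GaussianMixture(n_components=n_components).fit(samples)
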
Caren Marzban, Scott Sandgathe, Hilary Lyons, and Nicholas Lederer

short, no particularly simple pattern can be inferred from these nine dates, apart from the possibility that wrf4ncep is generally more variable than wrf2caps or wrf4ncar. A comparison of the reflectivity forecasts from these three models, across 30 days, has been performed by Marzban et al. (2008). 5. VGM results The effects of a global shift on a variogram can be seen in the top two panels of Fig. 6. Consider the case of a 50 gridpoint shift, first (geom001). Evidently, the only region where

Full access
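
A variogram summarizes spatial texture through the mean squared difference between values separated by a given lag, so a purely global shift of a field leaves it essentially unchanged apart from boundary effects, which is the behavior examined in the panels referred to above. A minimal sketch of an empirical variogram along one grid axis (Python with NumPy; illustrative only):

    import numpy as np

    def empirical_variogram(field, max_lag=50):
        """Empirical variogram along the x axis: gamma(h) = 0.5 * mean[(Z(s+h) - Z(s))^2]."""
        gammas = []
        for h in range(1, max_lag + 1):
            diffs = field[:, h:] - field[:, :-h]
            gammas.append(0.5 * np.mean(diffs ** 2))
        return np.array(gammas)
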
Eric Gilleland, David Ahijevych, Barbara G. Brown, Barbara Casati, and Elizabeth E. Ebert

developing new verification methods is to obtain diagnostic information about location errors. For example, is the forecast basically correct, but missing the spatial target by x kilometers? Are there systematic errors in the forecast locations of storms? The bias-corrected ETS (BCETS) answers these questions indirectly by accounting for the effects of bias on the GSS (i.e., ETS) so that the only remaining influence is the placement of the forecast. The method does not provide specific information on

Full access
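
The underlying issue is that a forecast's frequency bias (the ratio of forecast to observed event frequency) moves the GSS/ETS on its own, so a bias-corrected score is needed before the remainder can be attributed to placement. The toy contingency-table numbers below (Python; values are invented for illustration) simply show the two quantities side by side and do not implement the BCETS adjustment itself.

    def ets_and_bias(hits, misses, false_alarms, n):
        """Equitable threat score (GSS) and frequency bias from a 2x2 contingency table."""
        hits_random = (hits + misses) * (hits + false_alarms) / n
        ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
        bias = (hits + false_alarms) / (hits + misses)   # > 1 means the event is over-forecast
        return ets, bias

    # Same number of hits, but a larger forecast event area (bias 1.0 -> 1.6)
    # lowers the ETS even though nothing about the placement of the hits changed.
    print(ets_and_bias(hits=40, misses=60, false_alarms=60, n=10000))    # ~ (0.245, 1.0)
    print(ets_and_bias(hits=40, misses=60, false_alarms=120, n=10000))   # ~ (0.176, 1.6)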