Search Results

Showing 1–10 of 10 items for:

  • Spatial Forecast Verification Methods Inter-Comparison Project (ICP)
  • All content
Christian Keil and George C. Craig

and observations and compare their properties. Object-oriented techniques are quite intuitive and effective when the features are well defined and can be associated between the forecast and observations. Examples are the techniques of Ebert and McBride (2000) and Davis et al. (2006). (iv) Field verification techniques use optical flow algorithms to compare fields without decomposing them into separate elements or scales. The term optical flow stems from the image-processing community where

Christopher A. Davis, Barbara G. Brown, Randy Bullock, and John Halley-Gotway

this sense, MODE can be considered a rudimentary algorithm for image processing and image matching, but developed for meteorological applications. The degree of similarity between forecast and observed objects provides a measure of forecast quality. The philosophy behind MODE has been to develop a procedure that mimics what a human expert would do to find features and decide whether a given feature in a forecast represents an analogous feature in the observations. The decision

Eric Gilleland, Johan Lindström, and Finn Lindgren

section 2c. Finally, a ranking algorithm is proposed in section 2d. a. Stochastic model We denote the (undeformed) forecast and verification fields as 𝗙 and 𝗢, respectively. The stochastic model is then given by 𝗢(s) = 𝗙̃(s) + ε(s), where D is the entire domain of the verification and (undeformed) forecast fields; W takes coordinates, s = (x, y), from the deformed image field, denoted here by 𝗙̃(s) = 𝗙[W(s)], and maps them to coordinates in the undeformed field; and ε(s) are random errors
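The warping model in this excerpt can be illustrated with a minimal sketch (not the authors' code): a warp W maps coordinates in the deformed field back to the undeformed forecast, and the residuals play the role of the random errors ε(s). The translation warp and the toy fields below are invented for illustration.

```python
def warp_translate(s, dx, dy):
    """W(s): map deformed-field coordinates back by a fixed shift."""
    x, y = s
    return (x - dx, y - dy)

def deformed_forecast(F, W):
    """F_tilde(s) = F[W(s)] with nearest-neighbour lookup; 0 outside the grid."""
    ny, nx = len(F), len(F[0])
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            wx, wy = W((x, y))
            i, j = round(wy), round(wx)
            if 0 <= i < ny and 0 <= j < nx:
                out[y][x] = F[i][j]
    return out

def residuals(O, F_tilde):
    """eps(s) = O(s) - F_tilde(s) over the whole domain D."""
    return [[o - f for o, f in zip(orow, frow)]
            for orow, frow in zip(O, F_tilde)]

# Toy example: a forecast rain "blob" displaced one grid point from the observations.
F = [[0, 1, 0, 0],
     [0, 1, 0, 0]]
O = [[0, 0, 1, 0],
     [0, 0, 1, 0]]
Ft = deformed_forecast(F, lambda s: warp_translate(s, 1, 0))
print(Ft)                 # the warped forecast now overlays the observations
print(residuals(O, Ft))   # residuals vanish once the deformation is right
```

With the correct one-point shift, the warped forecast matches the observations exactly and the error field is zero everywhere.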

Elizabeth E. Ebert and William A. Gallus Jr.

judgments about what constitutes a feature and what constitutes a good match. Incorrect or inappropriate matches made by the automated algorithm will lead to wrong conclusions about forecast quality. A goal of this investigation is therefore to assess the quality of the matches. When a good match is achieved, what can be learned about the forecast error? When the match is judged imperfect, how does this affect the interpretation of the errors? Section 2 gives an overview of the CRA

Caren Marzban, Scott Sandgathe, Hilary Lyons, and Nicholas Lederer

NC, one obtains a “CSI curve,” which effectively summarizes the forecast quality as a function of scale. As an illustration of the technique, the top panel in Fig. 1 shows an example of partitioning a joint observed–forecast field into 100 clusters. The clustering algorithm used to generate the clusters in Fig. 1 is the aforementioned k-means algorithm; in its simplest form, it assumes that the clusters are elliptical in shape. The algorithm used in the CA method begins with the result of the
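The k-means step mentioned in this excerpt can be sketched in a few lines (this is generic Lloyd's-algorithm k-means, not the authors' CA code; the point set and deterministic initialization are invented for illustration):

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm: alternate nearest-center assignment and centroid update."""
    centers = points[:k]  # crude deterministic init: first k points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            groups[d.index(min(d))].append(p)
        centers = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Two well-separated "rain areas" on a grid:
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
centers, groups = kmeans(pts, k=2)
print(centers)
```

Because each point is assigned to its nearest center under squared Euclidean distance, the implied clusters are convex (roughly elliptical), consistent with the assumption noted in the excerpt.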

Valliappa Lakshmanan and John S. Kain

perturbed cases drawn from Ahijevych et al. (2009) and Kain et al. (2008) and make suggestions for further work in section 3. 2. Fitting a GMM Fitting a GMM to an image for the purposes of forecast verification consists of the following steps: (i) initialize the GMM (section 2c), (ii) carry out the expectation–maximization (EM) algorithm to iteratively “tune” the GMM (section 2b), (iii) store the parameters of each Gaussian component of the GMM (section 2d), and (iv) compute the
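Steps (i)–(iii) above can be sketched for the simplest case, a two-component one-dimensional mixture (the paper fits 2-D Gaussians to images; this hedged sketch only illustrates the EM loop, and the data and initialization are invented):

```python
import math

def gauss(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_gmm(data, iters=50):
    # (i) initialize: crude starting parameters at the data extremes
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # (ii) E-step: responsibility of each component for each point
        r = []
        for x in data:
            p = [w[k] * gauss(x, mu[k], var[k]) for k in range(2)]
            tot = sum(p)
            r.append([pk / tot for pk in p])
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(rn[k] for rn in r)
            w[k] = nk / len(data)
            mu[k] = sum(rn[k] * x for rn, x in zip(r, data)) / nk
            var[k] = max(1e-6, sum(rn[k] * (x - mu[k]) ** 2
                                   for rn, x in zip(r, data)) / nk)
    # (iii) store the parameters of each component
    return list(zip(w, mu, var))

data = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
params = fit_gmm(data)
print(params)
```

The two components converge to the two obvious modes near 0 and 5, each with weight about one half.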

Eric Gilleland, David Ahijevych, Barbara G. Brown, Barbara Casati, and Elizabeth E. Ebert

then thresholded. Once contiguous nonzero pixels (i.e., features) are identified, they are merged and matched by an algorithm utilizing information about various attributes (e.g., centroid position, total area, area overlap, intensity distribution, orientation angle, and boundary separation). Gilleland et al. (2008) propose an alternative method for merging and matching features for MODE based solely on a binary image distance measure, known as Baddeley’s Δ metric. Another simple option is the
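The attribute-based merging and matching described above can be illustrated with a toy sketch (this is not MODE or the Gilleland et al. (2008) method; it greedily pairs features using just two of the listed attributes, centroid distance and area, with invented thresholds and features):

```python
def centroid(feature):
    """Feature = set of (x, y) grid points; return its centroid."""
    xs = [p[0] for p in feature]
    ys = [p[1] for p in feature]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def match(fcst_feats, obs_feats, max_dist=5.0, min_area_ratio=0.5):
    """Greedily pair features whose centroids are close and areas comparable."""
    pairs = []
    used = set()
    for i, f in enumerate(fcst_feats):
        fc = centroid(f)
        best = None
        for j, o in enumerate(obs_feats):
            if j in used:
                continue
            oc = centroid(o)
            dist = ((fc[0] - oc[0]) ** 2 + (fc[1] - oc[1]) ** 2) ** 0.5
            ratio = min(len(f), len(o)) / max(len(f), len(o))
            if dist <= max_dist and ratio >= min_area_ratio:
                if best is None or dist < best[1]:
                    best = (j, dist)
        if best is not None:
            pairs.append((i, best[0]))
            used.add(best[0])
    return pairs

fcst = [{(0, 0), (1, 0)}, {(10, 10)}]
obs = [{(1, 1), (2, 1)}, {(30, 30)}]
print(match(fcst, obs))  # only the nearby, similarly sized pair matches
```

Real implementations combine many more attributes (overlap, intensity, orientation, boundary separation) into a fuzzy-logic interest value rather than hard thresholds.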

Keith F. Brill and Fedor Mesinger

degree of difficulty involved in achieving such bias-corrected forecasts. This assertion is speculative, requiring that the dHdA method approximate the effects of the systematic error removal algorithm used to perform the bias correction. Figure 3a shows the ETS (histogram bars), ETS CPR (lines), and bias (symbols) for the National Centers for Environmental Prediction (NCEP) North American Mesoscale (NAM) model along with the same for the NCEP Global Forecast System (GFS) for 24-h QPFs at the 24-h
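The ETS and bias named in this excerpt have standard definitions from the 2×2 contingency table of hits (a), false alarms (b), misses (c), and correct negatives (d); the counts below are invented for illustration:

```python
def ets_and_bias(a, b, c, d):
    """Equitable threat score and frequency bias from contingency counts."""
    n = a + b + c + d
    a_random = (a + b) * (a + c) / n          # hits expected by chance
    ets = (a - a_random) / (a + b + c - a_random)
    bias = (a + b) / (a + c)                  # forecast frequency / observed frequency
    return ets, bias

ets, bias = ets_and_bias(a=50, b=20, c=30, d=900)
print(round(ets, 3), round(bias, 3))  # -> 0.47 0.875
```

A bias below 1 (here 0.875) indicates underforecasting of the event frequency, which is why comparing ETS values at different biases, as the excerpt discusses, requires care.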

Elizabeth E. Ebert

simplest assumption and makes for easy implementation of the methodology. In principle, a Gaussian or other kernel could be used to give greater weight to the central values, as suggested by Roberts and Lean (2008). Some weather features, such as squall lines, fronts, and topographically forced weather, would be better represented using neighborhoods that reflect their shape. However, this is difficult to implement in a general-purpose algorithm, and the “scale” would be less clearly defined than
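The neighborhood idea in this excerpt can be sketched as a weighted fraction of event points in a square window around each grid point, with either uniform weights (the simplest assumption) or a Gaussian kernel favoring the center. This is a generic illustration, not the paper's implementation; the kernel width and toy grid are invented:

```python
import math

def neighborhood_fraction(field, x, y, radius, kernel="uniform"):
    """Weighted event fraction in a (2*radius+1)-square window around (x, y)."""
    num = den = 0.0
    ny, nx = len(field), len(field[0])
    for j in range(max(0, y - radius), min(ny, y + radius + 1)):
        for i in range(max(0, x - radius), min(nx, x + radius + 1)):
            if kernel == "gaussian":
                # illustrative choice: standard deviation of half the radius
                w = math.exp(-((i - x) ** 2 + (j - y) ** 2) / (2 * (radius / 2) ** 2))
            else:
                w = 1.0
            num += w * field[j][i]
            den += w
    return num / den

grid = [[0, 0, 1],
        [0, 1, 1],
        [0, 0, 1]]
u = neighborhood_fraction(grid, 1, 1, radius=1)
g = neighborhood_fraction(grid, 1, 1, radius=1, kernel="gaussian")
print(u, g)
```

Because the central point here is an event, the Gaussian-weighted fraction exceeds the uniform one; the two coincide when events are spread evenly over the window.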

David Ahijevych, Eric Gilleland, Barbara G. Brown, and Elizabeth E. Ebert

be quantified. Keil and Craig (2009) use a pyramidal matching algorithm to derive displacement vector fields and compute a score based on displacement and amplitude (DAS). For geom001 the displacement component dominates the DAS as expected, but for geom002 the amplitude component dominates the DAS because the features are farther apart than the search radius. Optical flow techniques behave similarly. A small displacement error such as in geom001 has a trivial optical flow field that simply
