Search Results

Showing 1–10 of 20 items for Author or Editor: Keith F. Brill
Keith F. Brill

Abstract

Performance measures computed from the 2 × 2 contingency table of outcomes for dichotomous forecasts are sensitive to bias. The method presented here evaluates how the probability of detection (POD) must change as bias changes so that a performance measure improves at a given value of bias. A critical performance ratio (CPR) of the change of POD to the change in bias is derived for a number of performance measures. If a change in POD associated with a bias change satisfies the CPR condition, the performance measure will indicate an improved forecast. If a perfect measure of performance existed, it would always exhibit its optimal value at bias equal to one. Actual measures of performance are susceptible to bias, indicating a better forecast for bias values not equal to one. The CPR is specifically applied to assess the conditions for an improvement toward a more favorable value of several commonly used performance measures as bias increases or decreases through the value one. All performance measures evaluated are found to have quantifiable bias sensitivity. The CPR is applied to analyzing a performance requirement and bias sensitivity in a geometric model.
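The ratio at the heart of the abstract can be illustrated with a short sketch. The cell names (a = hits, b = false alarms, c = misses, d = correct negatives) follow the usual 2 × 2 contingency-table convention; the function names and the example numbers are our own, and the finite-difference ratio shown is only the quantity that a measure's CPR threshold is compared against, not a derivation of any particular CPR formula from the paper.

```python
def pod(a, b, c, d):
    """Probability of detection: hits / observed events."""
    return a / (a + c)

def freq_bias(a, b, c, d):
    """Frequency bias: forecast events / observed events."""
    return (a + b) / (a + c)

def pod_bias_ratio(table1, table2):
    """Finite-difference ratio of the change in POD to the change in
    frequency bias between two forecasts of the same events. A
    performance measure improves if this ratio exceeds that measure's
    critical performance ratio (CPR)."""
    d_pod = pod(*table2) - pod(*table1)
    d_bias = freq_bias(*table2) - freq_bias(*table1)
    return d_pod / d_bias

# Example: bias rises from 1.0 to 1.2 while POD rises from 0.60 to 0.66
t1 = (60, 40, 40, 860)   # bias = 100/100 = 1.0, POD = 0.60
t2 = (66, 54, 34, 846)   # bias = 120/100 = 1.2, POD = 0.66
print(pod_bias_ratio(t1, t2))  # ~0.3, to be compared against a CPR
```

Whether the 0.3 here signals an improvement depends on the CPR of the specific measure being evaluated, which is what the paper derives measure by measure.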

Full access
Keith F. Brill

Abstract

The gradient wind is defined as a horizontal wind having the same direction as the geostrophic wind but with a magnitude consistent with a balance of three forces: the pressure gradient force, the Coriolis force, and the centrifugal force arising from the curvature of a parcel trajectory. This definition is not sufficient to establish a single way of computing the gradient wind. Different results arise depending upon what is taken to be the parcel trajectory and its curvature. To clarify these distinctions, contour and natural gradient winds are defined and subdivided into steady and nonsteady cases. Contour gradient winds are based only on the geostrophic streamfunction. Natural gradient winds are obtained using the actual wind. Even in cases for which the wind field is available along with the geostrophic streamfunction, it may be useful to obtain the gradient wind for comparison to the existing analyzed or forecast wind or as a force-balanced reference state. It is shown that the nonanomalous (normal) solution in the case of nonsteady natural gradient wind serves as an upper bound for the actual wind speed. Otherwise, supergradient wind speeds are possible, meaning that a contour gradient wind or the steady natural gradient wind used as an approximation for an actual wind may not be capable of representing the full range of actual wind magnitude.
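The three-force balance in the first sentence can be written as V²/R + fV = fV_g for cyclonic flow in the Northern Hemisphere. The sketch below solves this quadratic for the normal (nonanomalous) root; it is a minimal illustration of the balance itself, not any of the paper's specific contour/natural or steady/nonsteady formulations, and the function name and parameters are our own.

```python
import math

def gradient_wind_normal(v_geo, f, R):
    """Normal (nonanomalous) gradient wind speed from the balance
    V**2 / R + f*V = f*v_geo  (cyclonic flow, R > 0, NH).
    v_geo: geostrophic speed (m/s); f: Coriolis parameter (1/s);
    R: radius of curvature of the trajectory or contour (m)."""
    disc = (f * R / 2.0) ** 2 + f * R * v_geo
    if disc < 0:
        raise ValueError("no real balanced solution for these inputs")
    return -f * R / 2.0 + math.sqrt(disc)

# Cyclonic example: the balanced speed is subgeostrophic
v = gradient_wind_normal(v_geo=20.0, f=1e-4, R=1e6)
print(round(v, 2))  # about 17.08 m/s, below the 20 m/s geostrophic speed
```

Note that the answer depends directly on which radius of curvature R is supplied, which is exactly the ambiguity (trajectory versus contour, steady versus nonsteady) the abstract sets out to clarify.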

Full access
Keith F. Brill and Fedor Mesinger
Full access
Keith F. Brill and Matthew Pyle

Abstract

Critical performance ratio (CPR) expressions for the eight conditional probabilities associated with the 2 × 2 contingency table of outcomes for binary (dichotomous “yes” or “no”) forecasts are derived. Two are shown to be useful in evaluating the effects of hedging as it approaches random change. The CPR quantifies how the probability of detection (POD) must change as frequency bias changes, so that a performance measure (or conditional probability) indicates an improved forecast for a given value of frequency bias. If yes forecasts were to be increased randomly, the probability of additional correct forecasts (hits) is given by the detection failure ratio (DFR). If the DFR for a performance measure is greater than the CPR, the forecast is likely to be improved by the random increase in yes forecasts. Thus, the DFR provides a benchmark for the CPR in the case of frequency bias inflation. If yes forecasts are decreased randomly, the probability of removing a hit is given by the frequency of hits (FOH). If the FOH for a performance measure is less than the CPR, the forecast is likely to be improved by the random decrease in yes forecasts. Therefore, the FOH serves as a benchmark for the CPR if the frequency bias is decreased. The closer the FOH (DFR) is to being less (greater) than or equal to the CPR, the more likely it may be to enhance the performance measure by decreasing (increasing) the frequency bias. It is shown that randomly increasing yes forecasts for a forecast that is itself better than a randomly generated forecast can improve the threat score but is not likely to improve the equitable threat score. The equitable threat score is recommended instead of the threat score whenever possible.

Full access
Keith F. Brill and Fedor Mesinger

Abstract

Bias-adjusted threat and equitable threat scores were designed to account for the effects of placement errors in assessing the performance of under- or overbiased forecasts. These bias-adjusted performance measures exhibit bias sensitivity. The critical performance ratio (CPR) is the minimum fraction of added forecasts that are correct for a performance measure to indicate improvement if bias is increased. In the opposite case, the CPR is the maximum fraction of removed forecasts that are correct for a performance measure to indicate improvement if bias is decreased. The CPR is derived here for the bias-adjusted threat and equitable threat scores to quantify bias sensitivity relative to several other measures of performance including conventional threat and equitable threat scores. The CPR for a bias-adjusted equitable threat score may indicate the likelihood of preserving or increasing the conventional equitable threat score if forecasts are bias corrected based on past performance.

Full access
Matthew E. Pyle and Keith F. Brill

Abstract

A fair comparison of quantitative precipitation forecast (QPF) products from multiple forecast sources using performance metrics based on a 2 × 2 contingency table with assessment of statistical significance of differences requires accounting for differing frequency biases to which the performance metrics are sensitive. A simple approach to address differing frequency biases modifies the 2 × 2 contingency table values using a mathematical assumption that determines the change in hit rate when the frequency bias is adjusted to unity. Another approach uses quantile mapping to remove the frequency bias of the QPFs by matching the frequency distribution of each QPF to the frequency distribution of the verifying analysis or points. If these two methods consistently yield the same result for assessing the statistical significance of differences between two QPF forecast sources when accounting for bias differences, then verification software can apply the simpler approach and existing 2 × 2 contingency tables can be used for statistical significance computations without recovering the original QPF and verifying data required for the bias removal approach. However, this study provides evidence for continued application and wider adoption of the bias removal approach.
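The quantile-mapping idea described above can be sketched in a few lines: each QPF value is replaced by the analysis value at the same cumulative probability, so the mapped forecasts share the analysis frequency distribution and the frequency bias goes to one at any threshold. This is a minimal empirical version under our own naming; operational implementations handle ties, zero amounts, and out-of-sample values more carefully.

```python
import numpy as np

def quantile_map(fcst, analysis):
    """Map each forecast value to the analysis value at the same
    empirical cumulative probability (plotting positions), removing
    the frequency bias at every threshold."""
    fcst = np.asarray(fcst, dtype=float)
    a_sorted = np.sort(np.asarray(analysis, dtype=float))
    # plotting-position probability of each forecast value in its own sample
    ranks = np.argsort(np.argsort(fcst))
    p_f = (ranks + 0.5) / fcst.size
    # analysis quantile grid at the same plotting positions
    p_a = (np.arange(a_sorted.size) + 0.5) / a_sorted.size
    return np.interp(p_f, p_a, a_sorted)

rng = np.random.default_rng(0)
qpf = rng.gamma(2.0, 4.0, size=500)   # overactive (wet-biased) forecast
obs = rng.gamma(2.0, 3.0, size=500)   # drier verifying analysis
mapped = quantile_map(qpf, obs)
# exceedance counts now match the analysis at any threshold
print((mapped > 10).sum(), (obs > 10).sum())
```

With equal sample sizes the mapped values are a rearrangement of the analysis values, which is the sense in which the method "removes" rather than merely adjusts the frequency bias.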

Full access
Mukut B. Mathur, Keith F. Brill, and Charles J. Seman

Abstract

Numerical forecasts from the National Centers for Environmental Prediction’s mesoscale version of the η coordinate–based model, hereafter referred to as MESO, have been analyzed to study the roles of conditional symmetric instability (CSI) and frontogenesis in copious precipitation events. A grid spacing of 29 km and 50 layers are used in the MESO model. Parameterized convective and resolvable-scale condensation, radiation physics, and many other physical processes are included. Results focus on a 24-h forecast from 1500 UTC 1 February 1996 in the region of a low-level front and associated deep baroclinic zone over the southeastern United States. Predicted precipitation amounts were close to the observed, and the rainfall in the model was mainly associated with the resolvable-scale condensation.

During the forecast deep upward motion amplifies in a band oriented west-southwest to east-northeast, nearly parallel to the mean tropospheric thermal wind. This band develops from a sloping updraft in the low-level nearly saturated frontal zone, which is absolutely stable to upright convection, but susceptible to CSI. The updraft is then nearly vertical in the middle troposphere where there is very weak conditional instability. We regard this occurrence as an example of model-produced deep slantwise convection (SWC). Negative values of moist potential vorticity (MPV) occur over the entire low-level SWC area initially. The vertical extent of SWC increases with the lifting upward of the negative MPV area. Characteristic features of CSI and SWC simulated in some high-resolution nonhydrostatic cloud models also develop within the MESO. As in the nonhydrostatic SWC, the vertical momentum transport in the MESO updraft generates a subgeostrophic momentum anomaly aloft, with negative absolute vorticity on the baroclinically cool side of the momentum anomaly where outflow winds are accelerated to the north.

Contribution of various processes to frontogenesis in the SWC area is investigated. The development of indirect circulation leads to low-level frontogenesis through the tilting term. The axis of frontogenesis nearly coincides with the axis of maximum vertical motion when the SWC is fully developed. Results suggest that strong vertical motions in the case investigated develop due to release of symmetric instability in a moist atmosphere (CSI), and resultant circulations lead to weak frontogenesis in the SWC area.

Full access
Keith F. Brill, Anthony R. Fracasso, and Christopher M. Bailey

Abstract

This article explores the potential advantages of using a clustering approach to distill information contained within a large ensemble of forecasts in the medium-range time frame. A divisive clustering algorithm based on the one-dimensional discrete Fourier transformation is described and applied to the 70-member combination of the 20-member National Centers for Environmental Prediction (NCEP) Global Ensemble Forecast System (GEFS) and the 50-member European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble. Cumulative statistical verification indicates that clusters selected objectively based on having the largest number of members do not perform better than the ECMWF ensemble mean. However, including a cluster in a blended forecast to maintain continuity or to nudge toward a preferred solution may be a reasonable strategy in some cases. In such cases, a cluster may be used to sharpen a forecast weakly depicted by the ensemble mean but favored in consideration of continuity, consistency, collaborative thinking, and/or the trend in the guidance. Clusters are often useful for depicting forecast solutions not apparent via the ensemble mean but supported by a subset of ensemble members. A specific case is presented to demonstrate the possible utility of a clustering approach in the forecasting process.
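The abstract does not spell out the algorithm's details, but the core idea of a divisive split driven by one-dimensional DFT coefficients can be sketched as follows. Everything here is an illustrative assumption: the feature construction, the principal-direction split, and the function name are ours, not the operational NCEP/ECMWF clustering code.

```python
import numpy as np

def divisive_dft_split(members, n_coefs=3):
    """One divisive step: describe each ensemble member (a 1-D series)
    by its leading DFT coefficients, then split the members into two
    clusters along the dominant direction of variation in that
    coefficient space. members: array of shape (n_members, n_points).
    Returns a boolean cluster label per member."""
    members = np.asarray(members, dtype=float)
    # low-order Fourier coefficients as a compact description of shape
    coefs = np.fft.rfft(members, axis=1)[:, :n_coefs]
    feats = np.column_stack([coefs.real, coefs.imag])
    # project onto the leading principal direction; split at the median
    feats = feats - feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats, full_matrices=False)
    scores = feats @ vt[0]
    return scores > np.median(scores)

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 64)
group_a = np.sin(t) + 0.1 * rng.standard_normal((10, 64))
group_b = -np.sin(t) + 0.1 * rng.standard_normal((10, 64))
labels = divisive_dft_split(np.vstack([group_a, group_b]))
print(labels)  # the two sine families land in opposite clusters
```

Repeating such a split on each resulting cluster yields the divisive hierarchy; selecting the largest cluster at each level corresponds to the objective selection evaluated in the article.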

Full access
Jeffrey S. Whitaker, Louis W. Uccellini, and Keith F. Brill

Abstract

A model simulation of the rapid development phase of the Presidents' Day cyclone of 19 February 1979 is analyzed in an effort to complement and extend a diagnostic analysis based only on 12-h radiosonde data over the contiguous United States, with a large data-void area over the Atlantic Ocean (Uccellini et al. 1985). As indicated by the SLP and 850 mb absolute vorticity tendencies, rapid cyclogenesis commences between 0300 and 0600 UTC 19 February and proceeds through the remaining 18 h of the simulation. This rapid development phase occurs as stratospheric air [marked by high values of potential vorticity (PV)] approaches and subsequently overlies a separate, lower-tropospheric PV maximum confined to the East Coast, or during the period when the advection of PV increases in the middle to upper troposphere over the East Coast. The onset of rapid deepening is marked by 1) the transition in the mass divergence profiles over the surface low from a diffuse pattern with two or three divergence maxima to a two-layer structure, with maximum divergence located near 500 mb and the level of nondivergence located near 700 mb, 2) the intensification of precipitation just north of the surface low pressure system, and 3) an abrupt increase in the low-level vorticity.

Model trajectories and Eulerian analyses indicate that three airstreams converge into the cyclogenetic region during the rapid development phase. One of these airstreams descends within a tropopause fold on the west side of an upper-level trough over the north-central United States on 18 February and approaches the cyclone from the west-southwest as the rapid development commences. A second airstream originates in a region of lower-tropospheric subsidence within the cold anticyclone north of the storm, follows an anticyclonically curved path at low levels over the ocean, and then ascends as it enters the storm from the east. A third airstream approaches the storm from the south at low levels and also ascends as it enters the storm circulation. All of the airstreams pass through the low-level PV maximum as they approach the storm system, with the PV increase following a parcel related to the vertical distribution of θ due to the release of latent heat near the coastal region.

A vorticity analysis shows that absolute vorticity associated with the simulated storm is realized primarily through vortex stretching associated with the convergence of the airstreams below the 700 mb level. Although the maximum vorticity is initially confined below the 700 mb level, the convergence of the various airstreams is shown to be directly related to dynamic and physical processes that extend throughout the entire troposphere. Finally, the divergence of these airstreams within the 700 to 500 mb layer increases the magnitude of the mass divergence just north and east of the storm center and thus enhances the rapid deepening of the surface low as measured by the decreasing sea level pressure.

Full access
David R. Novak, Keith F. Brill, and Wallace A. Hogsett

Abstract

An objective technique to determine forecast snowfall ranges consistent with the risk tolerance of users is demonstrated. The forecast snowfall ranges are based on percentiles from probability distribution functions that are assumed to be perfectly calibrated. A key feature of the technique is that the snowfall range varies dynamically, with the resultant ranges varying based on the spread of ensemble forecasts at a given forecast projection, for a particular case, for a particular location. Furthermore, this technique allows users to choose their risk tolerance, quantified in terms of the expected false alarm ratio for forecasts of snowfall range. The technique is applied to the 4–7 March 2013 snowstorm at two different locations (Chicago, Illinois, and Washington, D.C.) to illustrate its use in different locations with different forecast uncertainties. The snowfall range derived from the Weather Prediction Center Probabilistic Winter Precipitation Forecast suite is found to be statistically reliable for the day 1 forecast during the 2013/14 season, providing confidence in the practical applicability of the technique.
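Under the abstract's perfect-calibration assumption, a central interval covering probability 1 − r of the forecast distribution has an expected false alarm ratio of r, so the user's risk tolerance maps directly to a pair of percentiles. The sketch below illustrates that mapping with an empirical ensemble; the function name, interface, and example values are ours, not the Weather Prediction Center product code.

```python
import numpy as np

def snowfall_range(ensemble_totals, risk_tolerance):
    """Forecast snowfall range from a calibrated ensemble of totals.
    A central interval covering probability (1 - risk_tolerance) has
    an expected false alarm ratio of risk_tolerance, assuming the
    underlying distribution is perfectly calibrated."""
    r = risk_tolerance
    lo, hi = np.percentile(ensemble_totals,
                           [100 * r / 2, 100 * (1 - r / 2)])
    return lo, hi

# A risk-tolerant user accepts a narrower range; a risk-averse user
# trades precision for a wider range that busts less often.
members = np.array([4, 5, 6, 6, 7, 8, 8, 9, 10, 12], dtype=float)
print(snowfall_range(members, 0.5))   # inner 50% of the distribution
print(snowfall_range(members, 0.1))   # inner 90%: wider, fewer misses
```

Because the percentiles come from the ensemble spread itself, the range widens or narrows case by case and location by location, which is the dynamic behavior the abstract highlights.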

Full access