Search Results

Showing 1–3 of 3 items for Author or Editor: Gregory E. Gahrs
John R. Lanzante and Gregory E. Gahrs

Abstract

A temporal sampling bias may be introduced when a measurement system is unable to produce a valid observation in certain situations. In this study the temporal sampling bias in satellite-derived measures of upper-tropospheric humidity (UTH) was examined using similar humidity measures derived from radiosonde data. The bias was estimated by imparting the temporal sampling characteristics of the satellite system onto the radiosonde observations. This approach was applied to UTH derived from Television Infrared Observation Satellite (TIROS) Operational Vertical Sounder radiances from the NOAA-10 satellite for the period 1987–91 and from the “Angell” network of 63 radiosonde stations for the same period. Radiative modeling was used to convert both the satellite and radiosonde data to commensurate measures of UTH.
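
To make the approach concrete, the sketch below subsamples a radiosonde UTH record to the times at which the satellite produced a valid retrieval and takes the bias as the difference between the subsampled and full-record means. It is only a minimal illustration of the general idea; the function name, the synthetic data, and the use of NumPy are assumptions, not details from the paper.

```python
import numpy as np

def temporal_sampling_bias(radiosonde_uth, satellite_valid):
    """Estimate a satellite temporal sampling bias from radiosonde data.

    radiosonde_uth  : 1-D array of radiosonde-derived UTH values (% RH),
                      one per observation time at a station.
    satellite_valid : boolean array of the same length, True where the
                      satellite produced a valid retrieval at that time.

    The bias is the mean of the radiosonde record subsampled to the
    satellite's observation times minus the mean of the full record.
    """
    radiosonde_uth = np.asarray(radiosonde_uth, dtype=float)
    satellite_valid = np.asarray(satellite_valid, dtype=bool)
    return radiosonde_uth[satellite_valid].mean() - radiosonde_uth.mean()

# Illustrative use with synthetic data in which moist times are more likely
# to be cloudy (and therefore missing from the satellite record), so the
# satellite-sampled mean is drier than the full mean and the bias is negative.
rng = np.random.default_rng(0)
uth = rng.uniform(10.0, 90.0, size=1000)        # synthetic UTH in % RH
valid = rng.random(1000) > uth / 100.0          # moister -> more likely missing
print(temporal_sampling_bias(uth, valid))
```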

Examination of the satellite temporal sampling bias focused on the effects of the “clear-sky bias” caused by the inability of the satellite system to produce measurements when extensive cloud cover is present. This study indicates that the effects of any such bias are relatively small in the extratropics (a few percent relative humidity) but may be ∼5%–10% in the most convectively active regions in the Tropics. Furthermore, there is a systematic movement and evolution of the bias pattern following the seasonal migration of convection, reflecting the fact that the bias increases as cloud cover increases. The bias is less noticeable for shorter timescales (seasonal values) but becomes more obvious as the averaging time increases (climatological values); it may be that small-scale noise partially obscures the bias for shorter time averages. Based on indirect inference, it is speculated that the bias may lead to an underestimate of the magnitude of trends in satellite UTH in the Tropics, particularly in the drier regions.

Gregory E. Gahrs, Scott Applequist, Richard L. Pfeffer, and Xu-Feng Niu

Abstract

As a follow-up to a recent paper by the authors in which various methodologies for probabilistic quantitative precipitation forecasting were compared, it is shown here that the skill scores for linear regression and logistic regression can be improved by using alternative methods to obtain the model order and the coefficients of the predictors. Moreover, it is found that an even simpler and more computationally efficient methodology, called binning, yields Brier skill scores comparable to those of logistic regression. The Brier skill scores for both logistic regression and binning are found to be significantly higher, at the 99% confidence level, than those for linear regression.
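
The abstract does not spell out the binning scheme, but one plausible reading is sketched below: the training values of a single predictor are split into quantile bins, each bin is assigned the observed relative frequency of precipitation exceeding the threshold, and that frequency is issued as the forecast probability for new cases falling in the bin. The function names, the quantile binning, and the synthetic data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fit_binned_probabilities(predictor, occurred, n_bins=10):
    """Assign each quantile bin of the predictor the observed relative
    frequency of the event in that bin (a sketch of one possible
    'binning' scheme, not necessarily the paper's)."""
    predictor = np.asarray(predictor, dtype=float)
    occurred = np.asarray(occurred, dtype=float)
    edges = np.quantile(predictor, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf       # catch out-of-range values later
    idx = np.digitize(predictor, edges[1:-1])   # bin index 0..n_bins-1
    probs = np.array([occurred[idx == k].mean() if np.any(idx == k)
                      else occurred.mean() for k in range(n_bins)])
    return edges, probs

def predict_binned(edges, probs, predictor):
    """Look up the forecast probability for each new predictor value."""
    return probs[np.digitize(predictor, edges[1:-1])]

# Illustrative use: 'forecast_pw' stands in for a single model-derived
# predictor and 'exceeded' for observed exceedance of a precipitation threshold.
rng = np.random.default_rng(1)
forecast_pw = rng.gamma(2.0, 5.0, size=3000)
exceeded = (rng.random(3000) < forecast_pw / forecast_pw.max()).astype(int)
edges, probs = fit_binned_probabilities(forecast_pw[:2000], exceeded[:2000])
p_hat = predict_binned(edges, probs, forecast_pw[2000:])
```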

In response to questions that have arisen concerning the significance test used in the authors' previous study, an alternative method for determining the confidence level is used here; it yields results comparable to those obtained previously, lending support to the conclusion that logistic regression is significantly more skillful than linear regression.
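
The abstract does not name the alternative significance test, so the sketch below shows one common resampling approach that could serve the same purpose: a paired bootstrap over forecast cases that estimates how often one model's Brier score beats the other's. It is an illustration under that assumption, not a description of the authors' test.

```python
import numpy as np

def bootstrap_comparison(p_model_a, p_model_b, occurred,
                         n_resamples=10_000, seed=0):
    """Paired bootstrap comparison of the Brier scores of two probability
    forecasts issued for the same events. Returns the fraction of resamples
    in which model B has the lower (better) Brier score. Note: this is an
    illustrative test, not the one used in the paper."""
    rng = np.random.default_rng(seed)
    p_a = np.asarray(p_model_a, dtype=float)
    p_b = np.asarray(p_model_b, dtype=float)
    o = np.asarray(occurred, dtype=float)
    n = o.size
    wins = 0
    for _ in range(n_resamples):
        sample = rng.integers(0, n, size=n)     # resample cases with replacement
        bs_a = np.mean((p_a[sample] - o[sample]) ** 2)
        bs_b = np.mean((p_b[sample] - o[sample]) ** 2)
        wins += bs_b < bs_a
    return wins / n_resamples
```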

Scott Applequist, Gregory E. Gahrs, Richard L. Pfeffer, and Xu-Feng Niu

Abstract

Twenty-four-hour probabilistic quantitative precipitation forecasts (PQPFs) for accumulations exceeding thresholds of 0.01, 0.05, and 0.10 in. are produced for 154 meteorological stations over the eastern and central regions of the United States. Comparisons of skill are made among forecasts generated using five different linear and nonlinear statistical methodologies, namely, linear regression, discriminant analysis, logistic regression, neural networks, and a classifier system. The predictors for the different statistical models were selected from a large pool of analyzed and predicted variables generated by the Nested Grid Model (NGM) during the four cool seasons (December–March) from 1992/93 to 1995/96. Because linear regression is the current method used by the National Weather Service, it is chosen as the benchmark against which the other methodologies are compared. The results indicate that logistic regression performs best among all methodologies. Most notable is that it performs significantly better at the 99% confidence level than linear regression, attaining Brier skill scores of 0.413, 0.480, and 0.478, versus 0.378, 0.440, and 0.457 for linear regression, at thresholds of 0.01, 0.05, and 0.10 in., respectively. Attributes diagrams reveal that linear regression gives a greater number of forecast probabilities closer to climatology than does logistic regression at all three thresholds. Moreover, its forecasts are more biased toward lower-than-observed probabilities and lie farther from the “perfect reliability” line in almost all probability categories than the forecasts made by logistic regression. Among the other methodologies, the classifier system also showed significantly greater skill than linear regression, whereas discriminant analysis and neural networks gave mixed results.
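
For reference, the Brier skill score quoted above is conventionally defined as 1 - BS/BS_ref, where BS is the mean squared error of the probability forecasts and BS_ref is the Brier score of a reference forecast. The sketch below computes it against a climatological reference (an assumption; the abstract does not state the reference) and fits a logistic regression to synthetic stand-ins for the NGM predictors using scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def brier_score(prob, occurred):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return np.mean((np.asarray(prob, float) - np.asarray(occurred, float)) ** 2)

def brier_skill_score(prob, occurred, reference=None):
    """1 - BS/BS_ref; the sample climatological frequency is used as the
    reference forecast when none is supplied (an assumption here)."""
    occurred = np.asarray(occurred, dtype=float)
    if reference is None:
        reference = np.full_like(occurred, occurred.mean())
    return 1.0 - brier_score(prob, occurred) / brier_score(reference, occurred)

# Illustrative comparison on synthetic data: X stands in for NGM-derived
# predictors and y for 24-h precipitation exceeding a threshold.
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 5))
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(int)
X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

logit = LogisticRegression().fit(X_train, y_train)
p_logit = logit.predict_proba(X_test)[:, 1]
print(brier_skill_score(p_logit, y_test))
```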
