Search Results

Showing 1–10 of 10 items for Author or Editor: A. Allen Bradley
A. Allen Bradley and James A. Smith

Abstract

Convective storms are commonplace in the southern plains of the United States. Occasionally, convective storms produce extreme rainfall accumulations, causing streams and rivers to flood. In this paper, we examine the hydrometeorological environment associated with these extreme rainstorms. Datasets used include hourly precipitation data from more than 200 stations, upper-air data, and daily weather maps.

The seasonal distribution of extreme rainstorms in the southern plains has pronounced peaks in late spring and early fall. Moisture availability and convective instability are higher than climatological averages during spring and fall extreme rainstorms, but nearer their averages during summer extreme rainstorms. Although high levels of moisture and convective instability are most common in the summer, the dynamic forcings that can initiate and focus convection are weak. It appears that late spring and early fall are the most likely times for extreme rainstorms because anomalously high levels of moisture and convective instability often encounter strong dynamic forcings.

A. Allen Bradley and Stuart S. Schwartz

Abstract

Ensemble prediction systems produce forecasts that represent the probability distribution of a continuous forecast variable. Most often, the verification problem is simplified by transforming the ensemble forecast into probability forecasts for discrete events, where the events are defined by one or more threshold values. Skill is then evaluated using the mean-square error (MSE; i.e., Brier) skill score for binary events, or the ranked probability skill score (RPSS) for multicategory events. A framework is introduced that generalizes this approach by describing the forecast quality of ensemble forecasts as a continuous function of the threshold value. Viewing ensemble forecast quality this way leads to the interpretation of the RPSS and the continuous ranked probability skill score (CRPSS) as measures of the weighted-average skill over the threshold values. It also motivates additional measures that summarize other features of a continuous forecast quality function and can be interpreted as descriptions of the function’s geometric shape. The measures can be computed not only for skill, but also for skill score decompositions, which characterize the resolution, reliability, discrimination, and other aspects of forecast quality. Collectively, they provide convenient metrics for comparing the performance of an ensemble prediction system at different locations, lead times, or issuance times, or for comparing alternative forecasting systems.
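
One way to picture the threshold-continuous view is to compute the Brier score at every threshold and integrate: the integral of the Brier score curve over thresholds is the continuous ranked probability score (CRPS), which is why the CRPSS behaves as a weighted-average skill. A minimal Python sketch with synthetic data (the distributions and sizes here are illustrative, not from the paper):

    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic verification sample: 500 ensemble forecasts (51 members each)
    # and matching observations of a continuous variable.
    ensembles = rng.gamma(2.0, 5.0, size=(500, 51))
    obs = rng.gamma(2.0, 5.0, size=500)

    # Forecast quality as a continuous function of the threshold value.
    thresholds = np.linspace(obs.min(), obs.max(), 200)
    brier = np.empty_like(thresholds)
    for i, t in enumerate(thresholds):
        prob = (ensembles <= t).mean(axis=1)   # P(variable <= t) from each ensemble
        event = (obs <= t).astype(float)       # 1 if the event occurred
        brier[i] = np.mean((prob - event) ** 2)

    # Trapezoidal integration of the Brier score curve gives the CRPS,
    # so skill at individual thresholds aggregates into the CRPSS.
    crps = np.sum(0.5 * (brier[1:] + brier[:-1]) * np.diff(thresholds))
    print(f"sample-average CRPS: {crps:.3f}")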

Munir A. Nayak, Gabriele Villarini, and A. Allen Bradley

Abstract

Atmospheric rivers (ARs) play a major role in causing extreme precipitation and flooding over the central United States (e.g., the Midwest floods of 1993 and 2008). The goal of this study is to characterize rainfall associated with ARs over this region during the Iowa Flood Studies (IFloodS) campaign that took place in April–June 2013. Total precipitation during IFloodS was among the five largest accumulations recorded since the mid-twentieth century over most of this region, with three of the heavy rainfall events associated with ARs. As a preliminary step, the authors evaluate how well different remote sensing–based precipitation products captured the rainfall associated with the ARs and find that Stage IV is the product that agrees most closely with the reference data. Two of the three ARs during IFloodS occurred within extratropical cyclones, with the moist ascent associated with the presence of cold fronts. In the third AR, mesoscale convective systems produced intense rainfall at many locations. In all three cases, the continued supply of warm water vapor from the tropics and subtropics helped sustain the convective systems. Most of the rainfall during these ARs was concentrated within ~100 km of the AR major axis, the region where rainfall amounts were most strongly and positively correlated with the vapor transport intensity. Rainfall associated with ARs tends to increase as these events mature over time. Although no major diurnal variation is detected in AR occurrences, rainfall amounts during nocturnal ARs were higher than for ARs that occurred during the daytime.
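
For readers unfamiliar with the vapor transport intensity mentioned above: AR strength is conventionally summarized by the integrated vapor transport (IVT), the pressure-integrated product of specific humidity and horizontal wind. A minimal sketch of that calculation for a single illustrative sounding (the levels and values are hypothetical, not IFloodS data):

    import numpy as np

    g = 9.81  # gravitational acceleration (m s^-2)

    # Illustrative sounding: pressure (Pa), specific humidity (kg/kg),
    # and wind components (m/s) at each level.
    p = np.array([1000, 925, 850, 700, 500, 300]) * 100.0
    q = np.array([14.0, 12.0, 9.0, 5.0, 1.5, 0.2]) * 1e-3
    u = np.array([8.0, 12.0, 15.0, 18.0, 22.0, 30.0])
    v = np.array([10.0, 14.0, 16.0, 15.0, 12.0, 8.0])

    # IVT = (1/g) * |integral of q*V dp|, by the trapezoidal rule.
    dp = -np.diff(p)  # positive layer thicknesses (Pa)
    qu = 0.5 * (q[:-1] * u[:-1] + q[1:] * u[1:])
    qv = 0.5 * (q[:-1] * v[:-1] + q[1:] * v[1:])
    ivt = np.hypot(np.sum(qu * dp), np.sum(qv * dp)) / g
    print(f"IVT = {ivt:.0f} kg m^-1 s^-1")  # AR studies often use ~250 as a floor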

A. Allen Bradley, Stuart S. Schwartz, and Tempei Hashino

Abstract

For probability forecasts, the Brier score and Brier skill score are commonly used verification measures of forecast accuracy and skill. Using sampling theory, analytical expressions are derived to estimate their sampling uncertainties. The Brier score is an unbiased estimator of the accuracy, and an exact expression defines its sampling variance. The Brier skill score (with climatology as a reference forecast) is a biased estimator, and approximations are needed to estimate its bias and sampling variance. The uncertainty estimators depend only on the moments of the forecasts and observations, so it is easy to compute them routinely at the same time as the Brier score and skill score. The resulting uncertainty estimates can be used to construct error bars or confidence intervals for the verification measures, or to perform hypothesis tests.
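
The key observation behind the exact expression is that the Brier score is the sample mean of squared forecast errors, so its sampling variance is the variance of a mean. A minimal sketch of that idea with synthetic data (the paper's treatment of the skill score's bias is more involved and is not reproduced here):

    import numpy as np

    def brier_with_uncertainty(prob, event):
        """Brier score and a moment-based estimate of its sampling variance."""
        sq_err = (np.asarray(prob) - np.asarray(event, dtype=float)) ** 2
        bs = sq_err.mean()                          # unbiased accuracy estimate
        var_bs = sq_err.var(ddof=1) / len(sq_err)   # variance of a sample mean
        return bs, var_bs

    # Synthetic verification sample, reliable by construction.
    rng = np.random.default_rng(0)
    prob = rng.beta(2, 5, size=400)
    event = rng.random(400) < prob

    bs, var_bs = brier_with_uncertainty(prob, event)
    clim = event.mean()
    bss = 1.0 - bs / (clim * (1.0 - clim))  # skill vs. climatology; biased in small samples
    print(f"BS  = {bs:.4f} +/- {1.96 * np.sqrt(var_bs):.4f} (approx. 95% half-width)")
    print(f"BSS = {bss:.3f}")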

Monte Carlo experiments using synthetic forecasting examples illustrate the performance of the expressions. In general, the estimates provide very reliable information on uncertainty. However, the quality of an estimate depends on both the sample size and the occurrence frequency of the forecast event. The examples also illustrate that, for infrequently occurring events, the sampling uncertainties are so large that verification samples of a few hundred forecast–observation pairs are needed to establish that a forecast is skillful.

A. Allen Bradley, Tempei Hashino, and Stuart S. Schwartz

Abstract

The distributions-oriented approach to forecast verification uses an estimate of the joint distribution of forecasts and observations to evaluate forecast quality. However, small verification data samples can produce unreliable estimates of forecast quality due to sampling variability and biases. In this paper, new techniques for verification of probability forecasts of dichotomous events are presented. For forecasts of this type, simplified expressions for forecast quality measures can be derived from the joint distribution. Although traditional approaches assume that forecasts are discrete variables, the simplified expressions apply to either discrete or continuous forecasts. With the derived expressions, most of the forecast quality measures can be estimated analytically using sample moments of forecasts and observations from the verification data sample. Other measures require a statistical modeling approach for estimation. Results from Monte Carlo experiments for two forecasting examples show that the statistical modeling approach can significantly improve estimates of these measures in many situations. The improvement is achieved mostly by reducing the bias of forecast quality estimates and, for very small sample sizes, by slightly reducing the sampling variability. The statistical modeling techniques are most useful when the verification data sample is small (a few hundred forecast–observation pairs or less), and for verification of rare events, where the sampling variability of forecast quality measures is inherently large.
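
As a concrete instance of estimating quality measures from sample moments, the familiar decomposition of the Brier score into reliability, resolution, and uncertainty terms can be computed directly from a verification sample. A minimal sketch for binned probability forecasts of a dichotomous event (synthetic data; the paper's statistical modeling approach for continuous forecasts is more elaborate):

    import numpy as np

    def murphy_decomposition(prob, event, n_bins=10):
        """Brier score ~ reliability - resolution + uncertainty, via binning."""
        prob, event = np.asarray(prob, float), np.asarray(event, float)
        n, base_rate = len(prob), event.mean()
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        idx = np.clip(np.digitize(prob, edges) - 1, 0, n_bins - 1)
        rel = res = 0.0
        for k in range(n_bins):
            mask = idx == k
            if not mask.any():
                continue
            w = mask.sum() / n           # relative frequency of the bin
            f_bar = prob[mask].mean()    # mean forecast within the bin
            o_bar = event[mask].mean()   # conditional event frequency
            rel += w * (f_bar - o_bar) ** 2
            res += w * (o_bar - base_rate) ** 2
        return rel, res, base_rate * (1.0 - base_rate)

    rng = np.random.default_rng(1)
    prob = rng.beta(2, 8, size=300)        # small sample, fairly rare event
    event = rng.random(300) < prob
    rel, res, unc = murphy_decomposition(prob, event)
    print(f"REL={rel:.4f}  RES={res:.4f}  UNC={unc:.4f}  BS~{rel - res + unc:.4f}")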

A. Allen Bradley, Stuart S. Schwartz, and Tempei Hashino

Abstract

Ensemble streamflow prediction systems produce forecasts in the form of a conditional probability distribution for a continuous forecast variable. A distributions-oriented approach is presented for verification of these probability distribution forecasts. First, a flow threshold is used to transform the ensemble forecast into a probability forecast for a dichotomous event. The event is said to occur if the observed flow is less than or equal to the threshold; the probability forecast is the probability that the event occurs. The distributions-oriented approach, which has been developed for meteorological forecast verification, is then applied to estimate forecast quality measures for a verification dataset. The results are summarized for thresholds chosen to cover the range of possible flow outcomes. To aid in the comparison for different thresholds, relative measures are used to assess forecast quality. An application with experimental forecasts for the Des Moines River basin illustrates the approach. The application demonstrates the added insights on forecast quality gained through this approach, as compared to more traditional ensemble verification approaches. By examining aspects of forecast quality over the range of possible flow outcomes, the distributions-oriented approach facilitates a diagnostic evaluation of ensemble forecasting systems.
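
The threshold transform at the heart of this approach is straightforward: for each flow threshold, the probability forecast is the fraction of ensemble traces at or below the threshold, and the verifying observation becomes a binary event. A minimal sketch with synthetic flows (the Des Moines River application and its full set of quality measures are not reproduced here):

    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic ensemble streamflow forecasts: 200 forecast dates,
    # 40 traces each, plus the observed flows (m^3/s).
    flows = rng.lognormal(mean=3.0, sigma=0.6, size=(200, 40))
    obs = rng.lognormal(mean=3.0, sigma=0.6, size=200)

    # Thresholds chosen to cover the range of possible flow outcomes.
    for pct in (10, 25, 50, 75, 90):
        t = np.percentile(obs, pct)
        prob = (flows <= t).mean(axis=1)   # P(flow <= t) from each ensemble
        event = (obs <= t).astype(float)   # the event occurs if obs <= t
        bs = np.mean((prob - event) ** 2)
        bs_clim = event.mean() * (1.0 - event.mean())
        print(f"q{pct:02d}: relative skill vs climatology = {1.0 - bs / bs_clim:+.3f}")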

Wei Zhang, Gabriele Villarini, Louise Slater, Gabriel A. Vecchi, and A. Allen Bradley

Abstract

This study assesses the forecast skill of eight North American Multimodel Ensemble (NMME) models in predicting the Niño-3/-3.4 indices and improves their skill using Bayesian updating (BU). The forecast skill obtained using the ensemble mean of NMME (NMME-EM) shows a strong dependence on lead (initial) month and target month and is quite promising in terms of correlation, root-mean-square error (RMSE), standard deviation ratio (SDRatio), and probabilistic Brier skill score, especially at short lead months. However, the skill decreases in target months from late spring to summer owing to the spring predictability barrier. When BU is applied to the eight NMME models (BU-Model), the forecasts tend to outperform NMME-EM in predicting Niño-3/-3.4 in terms of correlation, RMSE, and SDRatio. For Niño-3.4, the BU-Model outperforms NMME-EM forecasts for almost all leads (1–12; particularly for short leads) and target months (from January to December). However, for Niño-3, the BU-Model does not outperform NMME-EM forecasts for leads 7–11 and target months from June to October in terms of correlation and RMSE. Finally, the authors test further potential improvements by preselecting “good” models (BU-Model-0.3) and by using principal component analysis to remove the multicollinearity among models, but these additional methodologies do not outperform the BU-Model, which produces the best forecasts of Niño-3/-3.4 for the 2015/16 El Niño event.
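
Bayesian updating in this setting is often implemented by treating the climatological distribution of the index as the prior and a linear regression of the model forecast on the observed index as the likelihood. A conjugate-Gaussian sketch of that idea with synthetic hindcasts (a simplification under those assumptions; not the paper's exact formulation or the NMME data):

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic hindcasts: observed index and a model's ensemble-mean
    # forecast at a fixed lead, over a 120-month training period.
    obs = rng.normal(0.0, 1.0, size=120)
    fcst = 0.2 + 0.8 * obs + rng.normal(0.0, 0.5, size=120)

    # Likelihood model: regress the forecast on the observation, f = a + b*x + e.
    b, a = np.polyfit(obs, fcst, 1)
    sig_e2 = np.var(fcst - (a + b * obs), ddof=2)  # regression error variance

    # Prior: climatology of the observed index.
    mu0, sig0_2 = obs.mean(), obs.var(ddof=1)

    def bayesian_update(f_new):
        """Posterior mean and variance of the index given a new forecast."""
        precision = 1.0 / sig0_2 + b**2 / sig_e2
        mean = (mu0 / sig0_2 + b * (f_new - a) / sig_e2) / precision
        return mean, 1.0 / precision

    mean, var = bayesian_update(f_new=1.5)
    print(f"updated forecast: {mean:.2f} +/- {np.sqrt(var):.2f}")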

C. Bryan Young, A. Allen Bradley, Witold F. Krajewski, Anton Kruger, and Mark L. Morrissey

Abstract

Next-Generation Weather Radar (NEXRAD) multisensor precipitation estimates will be used for a host of applications, including operational streamflow forecasting at the National Weather Service River Forecast Centers (RFCs) and nonoperational purposes such as studies of weather, climate, and hydrology. Given these expanding applications, it is important to understand the quality and error characteristics of NEXRAD multisensor products. In this paper, the issues involved in evaluating these products are examined through an assessment of a 5.5-yr record of multisensor estimates from the Arkansas–Red Basin RFC. The objectives were to examine how known radar biases manifest themselves in the multisensor product and to quantify precipitation estimation errors. Analyses included comparisons of multisensor estimates based on different processing algorithms, comparisons with gauge observations from the Oklahoma Mesonet and the Agricultural Research Service Micronet, and the application of a validation framework to quantify error characteristics. This study reveals several complications in such an analysis, including a paucity of independent gauge data. These obstacles are discussed, and recommendations are made to facilitate routine verification of NEXRAD products.
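
A typical first step in such an assessment is computing the multiplicative bias and error statistics of the multisensor estimates against collocated gauge accumulations. A minimal sketch with synthetic paired data (the validation framework applied in the paper goes well beyond these summary statistics):

    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic collocated pairs: gauge accumulation and the multisensor
    # estimate for the containing grid cell (mm).
    gauge = rng.gamma(1.5, 8.0, size=1000)
    radar = 0.85 * gauge * rng.lognormal(0.0, 0.3, size=1000)  # bias plus scatter

    wet = gauge > 0.25  # restrict to pairs above a detection threshold
    bias = radar[wet].sum() / gauge[wet].sum()  # multiplicative bias
    rmse = np.sqrt(np.mean((radar[wet] - gauge[wet]) ** 2))
    corr = np.corrcoef(radar[wet], gauge[wet])[0, 1]
    print(f"bias = {bias:.2f}, RMSE = {rmse:.1f} mm, r = {corr:.2f}")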

Benjamin J. Miriovsky, A. Allen Bradley, William E. Eichinger, Witold F. Krajewski, Anton Kruger, Brian R. Nelson, Jean-Dominique Creutin, Jean-Marc Lapetite, Gyu Won Lee, Isztar Zawadzki, and Fred L. Ogden

Abstract

Analysis of data collected by four disdrometers deployed in a 1-km² area is presented with the intent of quantifying the spatial variability of radar reflectivity at small spatial scales. Spatial variability of radar reflectivity within the radar beam is a key source of error in radar-rainfall estimation because of the assumption that drops are uniformly distributed within the radar-sensing volume. Common experience tells one that drops are, in fact, not uniformly distributed, and, although some work has been done to examine the small-scale spatial variability of rain rates, little experimental work has been done to explore the variability of radar reflectivity. The four disdrometers used for this study include a two-dimensional video disdrometer, an X-band radar-based disdrometer, an impact-type disdrometer, and an optical spectropluviometer. Although instrumental differences were expected, their magnitude obscures the natural variability of interest. An algorithm is applied to mitigate these instrumental effects, yet the variability remains high even as the observations are integrated in time. Although the spatial variability cannot be quantified explicitly from this experiment, the results clearly show that the spatial variability of reflectivity is very large.
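
Each disdrometer estimates the radar reflectivity factor from the drop size distribution (DSD) it observes, which is why instrument differences propagate directly into the reflectivity comparison. A minimal sketch of that calculation for an illustrative exponential DSD (Rayleigh scattering assumed; the values are not from the experiment):

    import numpy as np

    # Illustrative DSD: bin centers (mm) and concentrations N(D)
    # per unit volume per unit size interval (mm^-1 m^-3).
    d = np.arange(0.25, 5.0, 0.25)       # drop diameters
    n_d = 8000.0 * np.exp(-2.3 * d)      # Marshall-Palmer-like exponential DSD
    dd = 0.25                            # bin width (mm)

    # Rayleigh reflectivity factor: Z = integral of N(D) * D^6 dD (mm^6 m^-3)
    z = np.sum(n_d * d**6) * dd
    print(f"Z = {z:.0f} mm^6 m^-3 -> {10.0 * np.log10(z):.1f} dBZ")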

Richard Rotunno, Leonard J. Pietrafesa, John S. Allen, Bradley R. Colman, Clive M. Dorman, Carl W. Kreitzberg, Stephen J. Lord, Miles G. McPhee, George L. Mellor, Christopher N. K. Mooers, Pearn P. Niiler, Roger A. Pielke Sr., Mark D. Powell, David P. Rogers, James D. Smith, and Lian Xie

U.S. Weather Research Program (USWRP) prospectus development teams (PDTs) are small groups of scientists convened by the USWRP lead scientist on a one-time basis to discuss critical issues and to provide advice on future directions of the program. PDTs are a principal source of information for the Science Advisory Committee, a standing committee charged with making recommendations to the Program Office based upon overall program objectives. PDT-1 focused on theoretical issues and PDT-2 on observational issues; PDT-3 is the first of several to focus on more specialized topics. PDT-3 was convened to identify forecasting problems related to U.S. coastal weather and oceanic conditions, and to suggest likely solution strategies.

Several overriding themes emerged from the discussion. First, the lack of data in and over critical regions of the ocean, particularly in the atmospheric boundary layer and the upper-ocean mixed layer, was identified as a major impediment to coastal weather prediction. Strategies for data collection and dissemination, as well as new instrument implementation, were discussed. Second, fundamental knowledge of air–sea fluxes and boundary layer structure is needed for situations with significant mesoscale variability in the atmosphere and ocean. Companion field studies and numerical prediction experiments were discussed. Third, research prognostic models suggest that future operational forecast models for coastal weather will be high resolution and site specific, and will properly treat the effects of local coastal geography, orography, and ocean state. The view was expressed that exploration of coupled air–sea models of the coastal zone would be a particularly fruitful area of research. PDT-3 felt that forecasts of land-impacting tropical cyclones, Great Lakes–affected weather, and coastal cyclogenesis, in particular, would benefit from such coordinated modeling and field efforts. Fourth, forecasting for Arctic coastal zones is limited by our understanding of how sea ice forms. The importance of understanding air–sea fluxes and boundary layers in the presence of ice formation was discussed. Finally, coastal flash flood forecasting via hydrologic models is limited by the present accuracy of measured and predicted precipitation and of predicted storm surge. Strategies to improve the latter were discussed.
