Search Results

You are looking at 1–10 of 23 items for Author or Editor: Michael Scheuerer

Michael Scheuerer and Thomas M. Hamill

Abstract

Enhancements of multivariate postprocessing approaches are presented that generate statistically calibrated ensembles of high-resolution precipitation forecast fields with physically realistic spatial and temporal structures based on precipitation forecasts from the Global Ensemble Forecast System (GEFS). Calibrated marginal distributions are obtained with a heteroscedastic regression approach using censored, shifted gamma distributions. To generate spatiotemporal forecast fields, a new variant of the recently proposed minimum divergence Schaake shuffle technique is proposed; this technique selects a set of historic dates such that the associated analysis fields have marginal distributions resembling the calibrated forecast distributions. This variant performs univariate postprocessing at the forecast grid scale and disaggregates these coarse-scale precipitation amounts to the analysis grid by deriving a multiplicative adjustment function and using it to modify the historic analysis fields so that they match the calibrated coarse-scale precipitation forecasts. In addition, an extension of the ensemble copula coupling (ECC) technique is proposed, in which a mapping function is constructed that maps each raw ensemble forecast field to a high-resolution forecast field such that the resulting downscaled ensemble has the prescribed marginal distributions. A case study over an area covering the Russian River watershed in California shows that the forecast fields generated by the two new techniques have a physically realistic spatial structure. Quantitative verification shows that they also represent the distribution of subgrid-scale precipitation amounts better than the forecast fields generated by the standard Schaake shuffle or the ECC-Q reordering approaches.
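
For a concrete picture of the reordering idea behind the Schaake shuffle family of techniques, here is a minimal Python sketch of the standard Schaake shuffle step: univariate calibrated samples at each grid point are reordered to inherit the rank structure, and hence the spatial dependence, of historic analysis fields. Array names and shapes are illustrative; the minimum divergence variant's date selection and disaggregation steps are not shown.

```python
import numpy as np

def schaake_shuffle(calibrated: np.ndarray, historic: np.ndarray) -> np.ndarray:
    """Reorder calibrated samples (n_members x n_points) so that each grid
    point inherits the rank structure of the historic analysis fields."""
    shuffled = np.empty_like(calibrated)
    for j in range(calibrated.shape[1]):
        ranks = np.argsort(np.argsort(historic[:, j]))  # rank of each historic member
        shuffled[:, j] = np.sort(calibrated[:, j])[ranks]
    return shuffled

# Illustrative data: 20 members, 100 grid points
rng = np.random.default_rng(0)
calibrated = rng.gamma(2.0, 3.0, size=(20, 100))  # samples from calibrated marginals
historic = rng.gamma(2.0, 3.0, size=(20, 100))    # analysis fields from 20 historic dates
fields = schaake_shuffle(calibrated, historic)
```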

Full access
Thomas M. Hamill and Michael Scheuerer

Abstract

Characteristics of the European Centre for Medium-Range Weather Forecasts’ (ECMWF’s) 0000 UTC diagnosed 2-m temperatures (T2m) from 4D-Var and global ensemble forecast initial conditions were examined in 2018 over the contiguous United States at 1/2° grid spacing. These were compared against independently generated, upscaled high-resolution T2m analyses created with a somewhat novel data assimilation methodology, an extension of classical optimal interpolation (OI) to surface data analysis. The analysis used a high-resolution, spatially detailed climatological background and was statistically unbiased. Differences of the ECMWF 4D-Var T2m initial states from the upscaled OI reference were decomposed into a systematic component and a residual component. The systematic component was determined by applying a temporal smoothing to the time series of differences between the ECMWF T2m analyses and the OI analyses. Systematic errors at 0000 UTC were commonly 1 K or more and were larger in the mountainous western United States, with the ECMWF analyses cooler than the reference. The residual error is regarded as random in character and should be statistically consistent with the spread of the ensemble of initial conditions after inclusion of OI analysis uncertainty. This analysis uncertainty was large in the western United States, complicating interpretation. Some areas were suggestive of an overspread initial ensemble, others of an underspread one. Assimilation of more observations in the reference OI analysis would reduce analysis uncertainty, facilitating a more conclusive determination of initial-condition ensemble spread characteristics.
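
A minimal sketch of the systematic/residual decomposition described above, assuming the temporal smoothing is a simple running mean over the series of daily analysis differences; the paper's exact smoother may differ, and all names and values below are illustrative.

```python
import numpy as np

def decompose(diff: np.ndarray, window: int = 31):
    """Split a daily series of (ECMWF minus OI) T2m differences into a
    smoothed systematic component and a residual component."""
    pad = window // 2
    padded = np.pad(diff, pad, mode="edge")  # limit edge effects of the filter
    systematic = np.convolve(padded, np.ones(window) / window, mode="valid")
    residual = diff - systematic
    return systematic, residual

# Illustrative series: a cool bias with an annual cycle plus noise
rng = np.random.default_rng(1)
days = np.arange(365)
diff = -1.2 + 0.5 * np.sin(2 * np.pi * days / 365) + rng.normal(0.0, 0.8, 365)
systematic, residual = decompose(diff)
```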

Full access
Thomas M. Hamill and Michael Scheuerer

Abstract

Hamill et al. described a multimodel ensemble precipitation postprocessing algorithm that is used operationally by the U.S. National Weather Service (NWS). This article describes further changes that produce improved, reliable, and skillful probabilistic quantitative precipitation forecasts (PQPFs) for single or multimodel prediction systems. For multimodel systems, final probabilities are produced through the linear combination of PQPFs from the constituent models. The new methodology is applied to each prediction system. Prior to adjustment of the forecasts, parametric cumulative distribution functions (CDFs) of model and analyzed climatologies are generated using the previous 60 days’ forecasts and analyses and supplemental locations. The CDFs, which can be stored with minimal disk space, are then used for quantile mapping to correct state-dependent bias for each member. In this stage, the ensemble is also enlarged using a stencil of forecast values from the 5 × 5 surrounding grid points. Different weights and dressing distributions are assigned to the sorted, quantile-mapped members, with generally larger weights for outlying members and broader dressing distributions for members with heavier precipitation. Probability distributions are generated from the weighted sum of the dressing distributions. The NWS Global Ensemble Forecast System (GEFS), the Canadian Meteorological Centre (CMC) global ensemble, and the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble forecast data are postprocessed for April–June 2016. Single prediction system postprocessed forecasts are generally reliable and skillful. Multimodel PQPFs are roughly as skillful as the ECMWF system alone. Postprocessed guidance was generally more skillful than guidance using the Gamma distribution approach of Scheuerer and Hamill, with coefficients generated from data pooled across the United States.
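
A minimal sketch of the quantile-mapping step: a forecast value is mapped to the analyzed-climatology value with the same non-exceedance probability. The paper uses parametric CDFs built from the previous 60 days and supplemental locations; plain empirical quantiles stand in here for illustration.

```python
import numpy as np

def quantile_map(value: float, fcst_clim: np.ndarray, anal_clim: np.ndarray) -> float:
    """Map a forecast value through the forecast-climatology CDF and back
    through the inverse of the analyzed-climatology CDF."""
    q = np.searchsorted(np.sort(fcst_clim), value) / fcst_clim.size
    return float(np.quantile(anal_clim, min(q, 1.0)))

# Illustrative climatologies: the model is wetter than the analyses
rng = np.random.default_rng(2)
fcst_clim = rng.gamma(1.5, 4.0, 600)
anal_clim = rng.gamma(1.5, 3.0, 600)
print(quantile_map(10.0, fcst_clim, anal_clim))  # bias-adjusted amount below 10.0
```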

Full access
Michael Scheuerer and Thomas M. Hamill

Abstract

Proper scoring rules provide a theoretically principled framework for the quantitative assessment of the predictive performance of probabilistic forecasts. While a wide selection of such scoring rules for univariate quantities exists, there are only a few scoring rules for multivariate quantities, and many of them require that forecasts are given in the form of a probability density function. The energy score, a multivariate generalization of the continuous ranked probability score, is the only commonly used score that is applicable in the important case of ensemble forecasts, where the multivariate predictive distribution is represented by a finite sample. Unfortunately, its ability to detect incorrectly specified correlations between the components of the multivariate quantity is somewhat limited. In this paper the authors present an alternative class of proper scoring rules based on the geostatistical concept of variograms. The sensitivity of these variogram-based scoring rules to incorrectly predicted means, variances, and correlations is studied in a number of examples with simulated observations and forecasts; they are shown to be distinctly more discriminative with respect to the correlation structure. This conclusion is confirmed in a case study with postprocessed wind speed forecasts at five wind park locations in Colorado.
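
For concreteness, a minimal sketch of a variogram-based score of order p for an ensemble forecast, following the general form of the scores discussed above: squared differences between observed and ensemble-mean pairwise variogram terms, here with uniform pair weights (a simplifying assumption).

```python
import numpy as np

def variogram_score(ens: np.ndarray, obs: np.ndarray, p: float = 0.5) -> float:
    """ens: (n_members, d) ensemble forecast; obs: (d,) observed vector."""
    score = 0.0
    for i in range(obs.size):
        for j in range(i + 1, obs.size):
            obs_term = abs(obs[i] - obs[j]) ** p
            ens_term = np.mean(np.abs(ens[:, i] - ens[:, j]) ** p)
            score += (obs_term - ens_term) ** 2
    return score

# Illustrative comparison, averaged over many cases: an ensemble with the
# correct correlation structure attains a lower mean score
rng = np.random.default_rng(3)
cov = 0.8 * np.ones((5, 5)) + 0.2 * np.eye(5)          # true correlation 0.8
good = rng.multivariate_normal(np.zeros(5), cov, size=200)
bad = rng.multivariate_normal(np.zeros(5), np.eye(5), size=200)
scores_good, scores_bad = [], []
for _ in range(200):
    obs = rng.multivariate_normal(np.zeros(5), cov)
    scores_good.append(variogram_score(good, obs))
    scores_bad.append(variogram_score(bad, obs))
print(np.mean(scores_good), np.mean(scores_bad))
```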

Full access
Michael Scheuerer and Thomas M. Hamill

Abstract

A parametric statistical postprocessing method is presented that transforms raw (and frequently biased) ensemble forecasts from the Global Ensemble Forecast System (GEFS) into reliable predictive probability distributions for precipitation accumulations. Exploratory analysis based on 12 years of reforecast data and ⅛° climatology-calibrated precipitation analyses shows that the conditional distribution of observed precipitation accumulations, given the ensemble forecasts, is well approximated by censored, shifted gamma distributions. A nonhomogeneous regression model is set up that links the parameters of this distribution to ensemble statistics summarizing the mean and spread of predicted precipitation amounts within a neighborhood of the location of interest, and to the predicted mean of precipitable water. The proposed method is demonstrated with precipitation reforecasts over the conterminous United States using common metrics such as Brier skill scores and reliability diagrams. It yields probabilistic forecasts that are reliable, highly skillful, and sharper than those of the previously demonstrated analog procedure. In situations with limited predictability, increasing the size of the neighborhood within which ensemble forecasts are considered as predictors can further improve forecast skill. It is found, however, that even a parametric postprocessing approach crucially relies on the availability of a sufficiently large training dataset.
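
A minimal sketch of a censored, shifted gamma distribution of the kind fitted here: a gamma variable shifted left and left-censored at zero, so that positive probability mass sits exactly at zero precipitation. Parameter values are placeholders, not fitted coefficients from the paper.

```python
from scipy.stats import gamma

shape, scale, shift = 1.2, 4.0, -1.5  # shift < 0 creates a point mass at zero

def csgd_cdf(y: float) -> float:
    """P(Y <= y) for Y = max(0, G + shift), with G ~ Gamma(shape, scale)."""
    return gamma.cdf(y - shift, a=shape, scale=scale) if y >= 0.0 else 0.0

print(csgd_cdf(0.0))         # probability of a dry day (point mass at zero)
print(1.0 - csgd_cdf(10.0))  # probability of exceeding 10 mm
```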

Full access
Michael Scheuerer and Thomas M. Hamill

Abstract

Forecast uncertainty associated with the prediction of snowfall amounts is a complex superposition of the uncertainty about precipitation amounts and the uncertainty about weather variables like temperature that influence the snow-forming process. In situations with heavy precipitation, parametric, regression-based postprocessing approaches often perform very well since they can extrapolate relations between forecast and observed precipitation amounts established with data from more common events. The complexity of the relation between temperature and snowfall amounts, on the other hand, makes nonparametric techniques like the analog method an attractive choice. In this article we show how these two different methodologies can be combined in a way that leverages the respective advantages. Predictive distributions of precipitation amounts are obtained using a heteroscedastic regression approach based on censored, shifted gamma distributions, and quantile forecasts derived from them are used together with ensemble forecasts of temperature to find analog dates where both quantities were similar. The observed snowfall amounts on these dates are then used to compose an ensemble that represents the uncertainty about future snowfall. We demonstrate this approach with reforecast data from the Global Ensemble Forecast System (GEFS) and snowfall analyses from the National Operational Hydrologic Remote Sensing Center (NOHRSC) over an area within the northeastern United States and an area within the U.S. mountain states.
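
A minimal sketch of the analog-selection step, assuming the search features are the precipitation quantile forecasts and ensemble-mean temperature described above, standardized and compared with Euclidean distance (the metric is an assumption). Observed snowfall on the closest historic dates forms the predictive ensemble.

```python
import numpy as np

def find_analogs(today: np.ndarray, hist_feats: np.ndarray,
                 hist_snow: np.ndarray, n_analogs: int = 10) -> np.ndarray:
    """Return observed snowfall on the historic dates whose standardized
    forecast features are closest to today's."""
    scale = hist_feats.std(axis=0)
    dist = np.sqrt((((hist_feats - today) / scale) ** 2).sum(axis=1))
    return hist_snow[np.argsort(dist)[:n_analogs]]

# Illustrative archive: precipitation quantiles plus a temperature forecast
rng = np.random.default_rng(4)
hist_feats = rng.normal(size=(3000, 4))
hist_snow = np.maximum(0.0, rng.normal(2.0, 5.0, 3000))
snow_ensemble = find_analogs(np.array([0.5, 1.0, -0.2, 0.3]), hist_feats, hist_snow)
```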

Full access
Thomas M. Hamill and Michael Scheuerer

Abstract

This is the second part of a series on benchmarking raw 1-h high-resolution numerical weather prediction surface-temperature forecasts from NOAA’s High-Resolution Rapid Refresh (HRRR) system. Such 1-h forecasts are commonly used to underpin the background for an hourly updated surface temperature analysis. The benchmark in this article was produced through a gridded statistical interpolation procedure using only surface observations and a diurnally, seasonally dependent gridded surface temperature climatology. The temporally varying climatologies were produced by synthesizing high-resolution monthly gridded climatologies of daily maximum and minimum temperatures over the contiguous United States with yearly and diurnally dependent estimates of the station-based climatologies of surface temperature. To produce a 1-h benchmark forecast, for a given hour of the day, say 0000 UTC, the gridded climatology was interpolated to station locations and then subtracted from the observations. These station anomalies were statistically interpolated to produce the 0000 UTC gridded anomaly. This anomaly pattern was continued for 1 h and added to the 0100 UTC gridded climatology to generate the 0100 UTC gridded benchmark forecast. The benchmark is thus a simple 1-h persistence of the analyzed deviations from the diurnally dependent climatology. Using a cross-validation procedure with July 2015 and August 2018 data, the gridded benchmark provided competitive, relatively unbiased 1-h surface temperature forecasts relative to the HRRR. Benchmark forecasts were lower in error and bias in 2015, but the HRRR system was highly competitive or better than the gridded benchmark in 2018. Implications of the benchmarking results are discussed, as well as potential applications of the simple benchmarking procedure to data assimilation.
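
The benchmark itself reduces to a few array operations; here is a minimal sketch, with illustrative grids standing in for the analyzed and climatological fields.

```python
import numpy as np

def benchmark_1h(analysis_t: np.ndarray, clim_t: np.ndarray,
                 clim_tp1: np.ndarray) -> np.ndarray:
    """Persist the analyzed anomaly for one hour and add it to the
    climatology valid at the next hour."""
    return clim_tp1 + (analysis_t - clim_t)

# Illustrative 0000 UTC fields on a 50 x 50 grid
rng = np.random.default_rng(5)
clim_00z = 290.0 + rng.normal(0.0, 2.0, (50, 50))
clim_01z = clim_00z - 0.8  # assumed hour-to-hour climatological cooling
analysis_00z = clim_00z + rng.normal(0.0, 1.5, (50, 50))
fcst_01z = benchmark_1h(analysis_00z, clim_00z, clim_01z)
```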

Free access
Joseph Bellier, Michael Scheuerer, and Thomas M. Hamill

Abstract

Downscaling precipitation fields is a necessary step in a number of applications, especially in hydrological modeling, where the meteorological forcings are frequently available only at too coarse a resolution. In this article, we revisit the Gibbs sampling disaggregation model (GSDM), a stochastic downscaling technique originally proposed by Gagnon et al. The method is capable of introducing realistic, weather-dependent, and possibly anisotropic fine-scale details while preserving the mean rain rate over the coarse-scale pixels. The main developments relative to the original version are (i) an adapted Gibbs sampling algorithm that constrains the downscaled fields to have a texture similar to that of the analysis fields, (ii) an extensive test of various meteorological predictors for controlling specific aspects of the texture, such as the anisotropy and the spatial variability, and (iii) a revision of the regression equations used in the model for defining the conditional distributions. A perfect-model experiment is conducted over a domain in the southeastern United States. The metrics used for verification are based on the concept of a gridded, stratified variogram, which is introduced as an effective way of mimicking the human eye’s ability to detect differences in field texture. Results indicate that the best overall performance is obtained with the most sophisticated, predictor-based GSDM variant. The 600-hPa wind is found to be the best year-round predictor for controlling the anisotropy. For the spatial variability, kinematic predictors such as wind shear perform best during convective periods, while instability indices are more informative elsewhere.
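
A highly simplified sketch of the mean-preserving constraint at the heart of this kind of disaggregation: fine-scale pixels within each coarse pixel are perturbed stochastically and then rescaled so that the coarse-pixel mean rain rate is preserved exactly. The Gibbs conditional distributions and texture predictors of the actual GSDM are omitted.

```python
import numpy as np

def disaggregate(coarse: np.ndarray, factor: int, n_iter: int = 50,
                 seed: int = 6) -> np.ndarray:
    """Stochastically disaggregate a coarse field while preserving the mean
    rain rate over each coarse-scale pixel."""
    rng = np.random.default_rng(seed)
    fine = np.kron(coarse, np.ones((factor, factor)))  # flat first guess
    for _ in range(n_iter):
        fine *= rng.lognormal(0.0, 0.3, fine.shape)    # inject fine-scale texture
        for i in range(coarse.shape[0]):               # re-enforce coarse means
            for j in range(coarse.shape[1]):
                block = fine[i * factor:(i + 1) * factor,
                             j * factor:(j + 1) * factor]
                if block.mean() > 0.0:
                    block *= coarse[i, j] / block.mean()
    return fine

coarse = np.array([[2.0, 0.0], [5.0, 1.0]])  # coarse-pixel mean rain rates
fine = disaggregate(coarse, factor=4)        # 8 x 8 fine-scale field
```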

Free access
Kira Feldmann, Michael Scheuerer, and Thordis L. Thorarinsdottir

Abstract

Statistical postprocessing techniques are commonly used to improve the skill of ensembles from numerical weather forecasts. This paper considers spatial extensions of the well-established nonhomogeneous Gaussian regression (NGR) postprocessing technique for surface temperature and a recent modification thereof in which the local climatology is included in the regression model to permit locally adaptive postprocessing. In a comparative study employing 21-h forecasts from the Consortium for Small Scale Modelling ensemble predictive system over Germany (COSMO-DE), two approaches for modeling spatial forecast error correlations are considered: a parametric Gaussian random field model and the ensemble copula coupling (ECC) approach, which utilizes the spatial rank correlation structure of the raw ensemble. Additionally, the NGR methods are compared to both univariate and spatial versions of the ensemble Bayesian model averaging (BMA) postprocessing technique.
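
A minimal sketch of the univariate NGR predictive distribution: a Gaussian whose mean is affine in the ensemble mean and whose variance is affine in the ensemble variance. The coefficients below are placeholders; in practice they are estimated from training data, for example by minimum CRPS.

```python
import numpy as np
from scipy.stats import norm

a, b, c, d = 0.3, 1.0, 0.4, 1.1  # placeholder NGR coefficients

def ngr_predictive(ens):
    """Return the calibrated Gaussian predictive distribution for one case."""
    mu = a + b * ens.mean()
    sigma = np.sqrt(c + d * ens.var())
    return norm(loc=mu, scale=sigma)

ens = np.array([14.2, 15.1, 13.8, 15.6, 14.9])  # raw ensemble temperatures (degC)
dist = ngr_predictive(ens)
print(dist.cdf(16.0))  # calibrated probability that T2m <= 16 degC
```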

Full access
Joseph Bellier, Brett Whitin, Michael Scheuerer, James Brown, and Thomas M. Hamill

Abstract

In the postprocessing of ensemble forecasts of weather variables, it is standard practice to first calibrate the forecasts in a univariate setting before reconstructing multivariate ensembles with correct covariability in space, time, and across variables via so-called “reordering” methods. Within this framework, though, postprocessors cannot fully extract skill that the raw forecast may have at larger scales. A multi-temporal-scale modulation mechanism for precipitation is presented here, which aims at improving the forecasts over different accumulation periods and can be coupled with any univariate calibration and multivariate reordering technique. The idea, originally known under the term “canonical events,” has been implemented for more than a decade in the Meteorological Ensemble Forecast Processor (MEFP), a component of the U.S. National Weather Service’s (NWS) Hydrologic Ensemble Forecast Service (HEFS), although its documentation has been confined to the gray literature. This paper proposes a formal description of the mechanism and studies its intrinsic connection with the multivariate reordering process. Modulated and unmodulated forecasts, each coupled with two popular reordering methods, the Schaake shuffle and ensemble copula coupling (ECC), are verified on 11 Californian basins for both precipitation and streamflow. Results demonstrate the clear benefit of the multi-temporal-scale modulation, in particular for multiday total streamflow. The relative gain depends, however, on the reordering method, with greater benefit expected when that method cannot reconstruct an adequate temporal structure in the calibrated precipitation forecasts.
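
A minimal sketch of the modulation principle as described above: a member's sub-period amounts are rescaled multiplicatively so that its multiday total matches a total taken from the calibrated coarse-scale distribution. This illustrates the idea only; it is not the MEFP implementation.

```python
import numpy as np

def modulate(subperiods: np.ndarray, calibrated_total: float) -> np.ndarray:
    """Rescale one member's sub-period precipitation amounts so that they
    sum to a calibrated multiday total."""
    raw_total = subperiods.sum()
    if raw_total == 0.0:  # dry member: spread the calibrated total evenly
        return np.full_like(subperiods, calibrated_total / subperiods.size)
    return subperiods * (calibrated_total / raw_total)

member = np.array([0.0, 2.5, 6.0, 1.5])         # four 6-h amounts (mm)
print(modulate(member, calibrated_total=14.0))  # now sums to 14 mm
```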

Free access