Search Results

3 items for Author or Editor: Stéphane Beauregard, in Monthly Weather Review
Guillem Candille, Stéphane Beauregard, and Normand Gagnon


Previous studies have shown that the raw combination (i.e., combining direct model output without any postprocessing) of the National Centers for Environmental Prediction (NCEP) and Meteorological Service of Canada (MSC) ensemble prediction systems (EPSs) improves the probabilistic forecast in terms of both reliability and resolution. This combination compensates for the lack of reliability of the NCEP EPS, caused by the underdispersion of its predicted ensemble, and for the lack of probabilistic resolution of the MSC EPS. Such a multiensemble, called the North American Ensemble Forecast System (NAEFS), notably shows bias reductions and dispersion improvements that could only come from combining different forecast errors. It is therefore legitimate to ask whether these improvements in bias and dispersion, and by extension the skill improvements, are due only to the cancellation of opposite model errors.

In the NAEFS framework, bias corrections “on the fly,” in which the bias estimate is updated over time, are applied to the operational EPSs. Each EPS component (NCEP/MSC) is bias corrected individually against its own analysis using the same procedure. The bias correction improves the reliability of each EPS component. It also slightly improves the accuracy of the predicted ensembles and thus the probabilistic resolution of the forecasts. Once the EPSs are combined, the improvements due to the bias correction are less obvious, suggesting that the success of the multiensemble method does not come only from the cancellation of different biases. This study also shows that the combination of the raw EPS components (NAEFS) is generally better than either the bias-corrected NCEP or MSC ensemble.
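
An “on the fly” bias correction of this kind can be sketched as a decaying-average estimator that blends the running bias with each new forecast-minus-analysis error. The weight value and the Gaussian toy data below are illustrative assumptions, not values from the study:

```python
import random

random.seed(0)

def update_bias(prev_bias, forecast, analysis, weight=0.02):
    """Decaying-average bias update: blend the running bias estimate
    with today's forecast-minus-analysis error."""
    return (1.0 - weight) * prev_bias + weight * (forecast - analysis)

# Toy demonstration: a forecast with a constant +1.5 K warm bias.
bias = 0.0
for _ in range(300):
    analysis = random.gauss(280.0, 5.0)                 # verifying analysis (K)
    forecast = analysis + 1.5 + random.gauss(0.0, 1.0)  # biased raw forecast
    bias = update_bias(bias, forecast, analysis)

corrected = forecast - bias  # bias-corrected forecast for the latest cycle
```

A small weight makes the estimate adapt slowly but smooths out day-to-day noise; after many cycles the running estimate settles near the true systematic error.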

Laurence J. Wilson, Stephane Beauregard, Adrian E. Raftery, and Richard Verret


Bayesian model averaging (BMA) has recently been proposed as a way of correcting underdispersion in ensemble forecasts. BMA is a standard statistical procedure for combining predictive distributions from different sources. The output of BMA is a probability density function (pdf), which is a weighted average of pdfs centered on the bias-corrected forecasts. The BMA weights reflect the relative contributions of the component models to the predictive skill over a training sample. The variance of the BMA pdf is made up of two components, the between-model variance and the within-model error variance, both estimated from the training sample. This paper describes the results of experiments with BMA to calibrate surface temperature forecasts from the 16-member Canadian ensemble system. Using one year of ensemble forecasts, BMA was applied for different training periods ranging from 25 to 80 days. The method was trained on the most recent forecast period, then applied to the next day’s forecasts as an independent sample. This process was repeated through the year, and forecast quality was evaluated using rank histograms, the continuous ranked probability score, and the continuous ranked probability skill score. An examination of the BMA weights provided a useful comparative evaluation of the component models, both for the ensemble itself and for the ensemble augmented with the unperturbed control forecast and the higher-resolution deterministic forecast. Training periods around 40 days provided a good calibration of the ensemble dispersion. Both full regression and simple bias-correction methods worked well to correct the bias, except that the full regression failed to completely remove seasonal trend biases in spring and fall. Simple correction of the bias was sufficient to produce positive forecast skill out to 10 days with respect to climatology, which was improved by the BMA.
The addition of the control forecast and the full-resolution model forecast to the ensemble produced modest improvement in the forecasts for ranges out to about 7 days. Finally, BMA produced significantly narrower 90% prediction intervals compared to a simple Gaussian bias correction, while achieving similar overall accuracy.
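
The BMA predictive pdf described above can be sketched as a weighted mixture of Gaussian kernels centered on the bias-corrected member forecasts. The member values, weights, and common spread below are illustrative assumptions, not numbers from the paper:

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian density with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bma_pdf(x, members, weights, sigma):
    """BMA predictive density: a weighted average of Gaussian kernels
    centered on the bias-corrected member forecasts, all sharing a
    common spread estimated from the training sample."""
    return sum(w * normal_pdf(x, m, sigma) for m, w in zip(members, weights))

# Illustrative values: four members with unequal training-sample weights.
members = [271.2, 272.0, 272.5, 274.1]  # bias-corrected T2m forecasts (K)
weights = [0.4, 0.3, 0.2, 0.1]          # BMA weights, sum to 1
sigma = 1.2                             # within-model spread (K)

# The mixture integrates to 1 (checked here on a wide grid).
grid = [260.0 + 0.05 * i for i in range(600)]
integral = sum(bma_pdf(x, members, weights, sigma) for x in grid) * 0.05
```

The spread of the member centers supplies the between-model variance, while `sigma` supplies the within-model variance, so the mixture is wider than any single member's pdf.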

Hai Lin, Normand Gagnon, Stephane Beauregard, Ryan Muncaster, Marko Markovic, Bertrand Denis, and Martin Charron


For the past two decades, dynamical monthly prediction at the Canadian Meteorological Centre (CMC) was produced as part of the seasonal forecasting system. A new monthly forecasting system, in operation since July 2015, is built on the operational Global Ensemble Prediction System (GEPS). This monthly forecasting system is composed of two components: 1) the real-time forecast, in which the GEPS is extended to 32 days every Thursday; and 2) a 4-member hindcast over the past 20 years, which is used to obtain the model climatology needed to calibrate the monthly forecast. Compared to the seasonal prediction system, the GEPS-based monthly forecasting system takes advantage of increased model resolution and improved initialization.

Forecasts of the past 2-yr period (2014 and 2015) are verified. Analysis is performed separately for the winter half-year (November–April) and the summer half-year (May–October). Weekly averages of 2-m air temperature (T2m) and 500-hPa geopotential height (Z500) are assessed. For Z500 in the Northern Hemisphere, limited skill can be found beyond week 2 (days 12–18) in summer, while in winter some skill exists over the Pacific and North American region beyond week 2. For T2m in North America, significant skill is found over a large part of the continent all the way to week 4 (days 26–32). The distribution of the wintertime T2m skill in North America is consistent with the influence of the Madden–Julian oscillation, indicating that a significant part of the predictability likely comes from the tropics.
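
The hindcast-based calibration described above amounts to removing the model's own lead-dependent climatology from the real-time forecast before computing weekly anomalies. The array shapes, values, and the week-4 day range below are illustrative assumptions following the abstract's description (20 hindcast years, 4 members, 32-day forecasts):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 20 hindcast years x 4 members x 32 lead days of T2m (K).
hindcast = 280.0 + rng.normal(0.0, 3.0, size=(20, 4, 32))
realtime = 281.0 + rng.normal(0.0, 3.0, size=(4, 32))  # real-time ensemble

# Model climatology per lead day: average over hindcast years and members.
climatology = hindcast.mean(axis=(0, 1))               # shape (32,)

# Calibrated week-4 anomaly (days 26-32 -> indices 25..31): ensemble mean
# minus the model's own climatology for the same lead days.
week4_anomaly = realtime[:, 25:32].mean() - climatology[25:32].mean()
```

Subtracting the model climatology rather than the observed climatology removes the lead-dependent model drift, which is the purpose of running the 20-year hindcast alongside the real-time forecast.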
