Search Results

Showing 1–9 of 9 items for Author or Editor: Bertrand Denis (all content).
Bertrand Denis, Jean Côté, and René Laprise

Abstract

For most atmospheric fields, the larger part of the spatial variance is contained in the planetary scales. When examined over a limited area, these atmospheric fields exhibit an aperiodic structure, with large trends across the domain. Applying a standard (periodic) Fourier transform to regional domains aliases large-scale variance into shorter scales, destroying the usefulness of the spectra at large wavenumbers. To address this problem, the authors have evaluated and adopted a spectral transform called the discrete cosine transform (DCT). The DCT is widely used in digital image and video compression standards such as JPEG and MPEG, but its use for atmospheric spectral analysis has not yet received widespread attention.

First, it is shown how the DCT can be employed for producing power spectra from two-dimensional atmospheric fields and how this technique compares favorably with the more conventional technique that consists of detrending the data before applying a periodic Fourier transform. Second, it is shown that the DCT can be used advantageously for extracting information at specific spatial scales by spectrally filtering the atmospheric fields. Examples of applications using data produced by a regional climate model are displayed. In particular, it is demonstrated how the 2D-DCT spectral decomposition is successfully used for calculating kinetic energy spectra and for separating mesoscale features from large scales.
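A minimal sketch of the idea, using SciPy's `dctn`/`idctn`. The binning by a normalized radial wavenumber and the sharp low-pass cutoff are simplifications of my own, not the authors' exact procedure:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_spectrum(field):
    """Variance spectrum of a 2D field via the discrete cosine transform.

    Coefficients are binned by a normalized radial wavenumber; with the
    orthonormal DCT, the binned squared coefficients sum to the field's
    total squared amplitude (a Parseval relation).
    """
    ni, nj = field.shape
    f = dctn(field, norm="ortho")
    # Normalized wavenumbers in each direction, in [0, 1)
    ki = np.arange(ni) / ni
    kj = np.arange(nj) / nj
    k = np.sqrt(ki[:, None] ** 2 + kj[None, :] ** 2)
    nbins = min(ni, nj)
    bins = np.linspace(0.0, np.sqrt(2.0), nbins + 1)
    return np.histogram(k, bins=bins, weights=f ** 2)[0]

def dct_lowpass(field, cutoff=0.25):
    """Spectral filter: zero all scales above a normalized radial wavenumber."""
    ni, nj = field.shape
    f = dctn(field, norm="ortho")
    ki = np.arange(ni) / ni
    kj = np.arange(nj) / nj
    k = np.sqrt(ki[:, None] ** 2 + kj[None, :] ** 2)
    f[k > cutoff] = 0.0
    return idctn(f, norm="ortho")
```

Because the DCT imposes no periodicity, a field with a large trend across the domain produces no spurious short-scale variance, which is the property exploited above.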

Ramón de Elía, René Laprise, and Bertrand Denis

Abstract

The fundamental hypothesis underlying the use of limited-area models (LAMs) is their ability to generate meaningful small-scale features from low-resolution information, provided as initial conditions and at their lateral boundaries by a model or by objective analyses. This hypothesis has never been seriously challenged in spite of some reservations expressed by the scientific community. In order to study this hypothesis, a perfect-model approach is followed. A high-resolution large-domain LAM driven by global analyses is used to generate a “reference run.” These fields are filtered afterward to remove small scales in order to mimic a low-resolution run. The same high-resolution LAM, but in a small-domain grid, is nested within these filtered fields and run for several days. Comparison of both runs over the same region allows for the estimation of the ability of the LAM to regenerate the removed small scales.
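The filtering step described above can be sketched as a sharp spectral low-pass filter. The function name, the FFT-based implementation, and the sharp cutoff are illustrative assumptions, not the study's actual filter:

```python
import numpy as np

def remove_small_scales(field, dx_km, cutoff_km):
    """Low-pass filter a doubly periodic 2D field with a sharp spectral
    cutoff, mimicking low-resolution driving data in a perfect-model
    experiment: scales shorter than `cutoff_km` are removed.
    """
    ni, nj = field.shape
    ki = np.fft.fftfreq(ni, d=dx_km)   # cycles per km
    kj = np.fft.fftfreq(nj, d=dx_km)
    k = np.sqrt(ki[:, None] ** 2 + kj[None, :] ** 2)
    spec = np.fft.fft2(field)
    spec[k > 1.0 / cutoff_km] = 0.0
    return np.fft.ifft2(spec).real
```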

Results show that the small-domain LAM recreates the right amount of small-scale variability but is incapable of reproducing it with the precision required by a root-mean-square (rms) measure of error. Some success is attained, however, during the first hours of integration. This suggests that LAMs are not very efficient at accurately downscaling information, even in a perfect-model context. On the other hand, when the initial conditions used in the small-domain LAM include the small-scale features that are still absent in the lateral boundary conditions, results improve dramatically. This suggests that the lack of high-resolution information in the boundary conditions has a small impact on performance.

Results of this study also show that the predictability timescales of different wavelengths behave similarly to those of an autonomous global model.

Arlan Dirkson, Bertrand Denis, Michael Sigmond, and William J. Merryfield

Abstract

Dynamical forecasting systems are being used to skillfully predict deterministic ice-free and freeze-up date events in the Arctic. This paper extends such forecasts to a probabilistic framework and tests two calibration models to correct systematic biases and improve the statistical reliability of the event dates: trend-adjusted quantile mapping (TAQM) and nonhomogeneous censored Gaussian regression (NCGR). TAQM is a probability distribution mapping method that corrects the forecast for climatological biases, whereas NCGR relates the calibrated parametric forecast distribution to the raw ensemble forecast through a regression model framework. For NCGR, the observed event trend and ensemble-mean event date are used to predict the central tendency of the predictive distribution. For modeling forecast uncertainty, we find that the ensemble-mean event date, which is related to forecast lead time, performs better than the ensemble variance itself. Using a multidecadal hindcast record from the Canadian Seasonal to Interannual Prediction System (CanSIPS), TAQM and NCGR are applied to produce categorical forecasts quantifying the probabilities for early, normal, and late ice retreat and advance. While TAQM performs better than adjusting the raw forecast for mean and linear trend bias, NCGR is shown to outperform TAQM in terms of reliability, skill, and an improved tendency for forecast probabilities to be no worse than climatology. Testing various cross-validation setups, we find that NCGR remains useful when shorter hindcast records (~20 years) are available. By applying NCGR to operational forecasts, stakeholders can be more confident in using seasonal forecasts of sea ice event timing for planning purposes.
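A much-simplified sketch of the quantile-mapping idea behind TAQM: each ensemble member's event date is replaced by the observed date at the same climatological quantile. The trend adjustment and parametric distributions of the actual method are omitted, and all names are illustrative:

```python
import numpy as np

def quantile_map(forecast_dates, hindcast_dates, observed_dates):
    """Empirical quantile mapping of ensemble event dates onto the
    observed climatology (simplified stand-in for TAQM, which also
    adjusts both climatologies for their linear trends first)."""
    hind = np.sort(np.asarray(hindcast_dates, float))
    obs = np.sort(np.asarray(observed_dates, float))
    # Quantile of each forecast member within the model climatology
    q = np.searchsorted(hind, forecast_dates, side="right") / (hind.size + 1)
    # Map that quantile onto the observed climatology
    return np.quantile(obs, np.clip(q, 0.0, 1.0))

def tercile_probs(calibrated_dates, observed_dates):
    """Probabilities of early / normal / late events from calibrated
    member dates, using observed terciles as category boundaries."""
    t1, t2 = np.quantile(observed_dates, [1.0 / 3.0, 2.0 / 3.0])
    d = np.asarray(calibrated_dates, float)
    return np.array([(d < t1).mean(),
                     ((d >= t1) & (d <= t2)).mean(),
                     (d > t2).mean()])
```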

René Laprise, Mundakkara Ravi Varma, Bertrand Denis, Daniel Caya, and Isztar Zawadzki

Abstract

This note investigates the nature of the extended predictability commonly attributed to high-resolution limited-area models (LAMs) nested with low-resolution data at their lateral boundaries. LAM simulations are performed with two different sets of initial, nesting, and verification data: one is a set of regional objective analyses and the other is a synthetic high-resolution model-generated dataset. The simulation differences (equivalent to forecast errors in an operational framework) are studied in terms of their horizontal scale distribution normalized by the natural variability at each scale, as a measure of predictability, which constitutes an original contribution of this note. The results suggest that the extended predictability of LAMs is confined to those scales that are present in both the initial conditions and the lateral boundary conditions (LBCs). No evidence is found for extended predictability of scales that are not forced through the LBCs. Instead, these smaller scales exhibit predictive timescales in direct relation to their spatial scales, similar to the behavior of autonomous global models.
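The scale-by-scale normalization described above can be illustrated with a 1D sketch; the note's actual analysis is 2D, and the variance normalization used here is a simplified stand-in:

```python
import numpy as np

def normalized_error_spectrum(run_a, run_b):
    """Per-wavenumber variance of the difference between two 1D runs,
    normalized by the variance of the reference run at the same scale.
    A ratio near saturation means that scale is no longer predictable.
    """
    err = np.abs(np.fft.rfft(np.asarray(run_a) - np.asarray(run_b))) ** 2
    ref = np.abs(np.fft.rfft(np.asarray(run_a))) ** 2
    return err / np.maximum(ref, 1e-30)
```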

Hai Lin, Normand Gagnon, Stephane Beauregard, Ryan Muncaster, Marko Markovic, Bertrand Denis, and Martin Charron

Abstract

Dynamical monthly prediction at the Canadian Meteorological Centre (CMC) has been produced as part of the seasonal forecasting system for the past two decades. A new monthly forecasting system, in operation since July 2015, is based on the operational Global Ensemble Prediction System (GEPS). This monthly forecasting system is composed of two components: 1) the real-time forecast, in which the GEPS is extended to 32 days every Thursday; and 2) a 4-member hindcast over the past 20 years, which is used to obtain the model climatology needed to calibrate the monthly forecast. Compared with the seasonal prediction system, the GEPS-based monthly forecasting system takes advantage of increased model resolution and improved initialization.
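The hindcast-based calibration step can be sketched as follows. This is a minimal illustration: the operational calibration is more elaborate, and the function and variable names are assumptions:

```python
import numpy as np

def calibrate_week(forecast, hindcasts):
    """Calibrate one weekly-mean forecast at one grid point against the
    model's own hindcast climatology.

    forecast:  (members,) raw ensemble values
    hindcasts: (years, members) hindcast values for the same week and point

    Returns the ensemble-mean anomaly relative to the model climatology
    and the fraction of members above the model's climatological median.
    """
    fc = np.asarray(forecast, float)
    hc = np.asarray(hindcasts, float)
    anomaly = fc.mean() - hc.mean()          # removes the model's mean bias
    p_above = (fc > np.median(hc)).mean()    # simple probabilistic category
    return anomaly, p_above
```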

Forecasts of the past 2-yr period (2014 and 2015) are verified. Analysis is performed separately for the winter half-year (November–April), and the summer half-year (May–October). Weekly averages of 2-m air temperature (T2m) and 500-hPa geopotential height (Z500) are assessed. For Z500 in the Northern Hemisphere, limited skill can be found beyond week 2 (days 12–18) in summer, while in winter some skill exists over the Pacific and North American region beyond week 2. For T2m in North America, significant skill is found over a large part of the continent all the way to week 4 (days 26–32). The distribution of the wintertime T2m skill in North America is consistent with the influence of the Madden–Julian oscillation, indicating that a significant part of predictability likely comes from the tropics.
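Weekly-mean skill of the kind reported above is commonly measured with an anomaly correlation; a minimal sketch (the study's exact verification metric may differ):

```python
import numpy as np

def anomaly_correlation(fcst_anoms, obs_anoms):
    """Temporal correlation between weekly-mean forecast and observed
    anomalies at one grid point, a standard verification measure."""
    f = np.asarray(fcst_anoms, float)
    o = np.asarray(obs_anoms, float)
    f = f - f.mean()
    o = o - o.mean()
    return (f * o).sum() / np.sqrt((f ** 2).sum() * (o ** 2).sum())
```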

Luc Fillion, Monique Tanguay, Ervig Lapalme, Bertrand Denis, Michel Desgagne, Vivian Lee, Nils Ek, Zhuo Liu, Manon Lajoie, Jean-François Caron, and Christian Pagé

Abstract

This paper describes the recent changes to the regional data assimilation and forecasting system at the Canadian Meteorological Center (CMC). A major aspect is the replacement of the currently operational global variable-resolution forecasting approach by a limited-area nested approach. In addition, the variational analysis code has been upgraded to allow limited-area three- and four-dimensional variational data assimilation (3D- and 4DVAR) approaches. As a first implementation step, the constraints were to impose similar background error correlation modeling assumptions, equal computer resources, and the use of the same assimilated data. Both bi-Fourier and spherical-harmonics spectral representations of background error correlations were extensively tested for the large horizontal domain considered for the Canadian regional system. Under such conditions, it is shown that the new regional data assimilation and forecasting system performs as well as the current operational system and produces slightly better 24-h accumulated precipitation scores, as judged from an ensemble of winter and summer cases. Because of the large horizontal extent of the regional domain considered, the spherical-harmonics representation of background error correlations was shown to perform better than the bi-Fourier representation on all evaluation scores examined in this study. The bi-Fourier representation is more suitable for smaller domains and will be kept for the upcoming kilometric-scale local analysis domains, in order to support CMC's operations using multiple domains over Canada. CMC's new regional system [i.e., a regional limited-area 3DVAR data assimilation system coupled to a limited-area model (REG-LAM3D)] is now undergoing its final evaluations before operational transfer. Important model and data assimilation upgrades are currently under development to fully exploit this new system and are briefly presented.
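The role of a spectral representation of background error correlations can be illustrated in 1D: a homogeneous correlation operator is diagonal in spectral space, so applying it reduces to a pointwise multiplication of Fourier coefficients. This is a 1D Gaussian-spectrum sketch, not the operational formulation:

```python
import numpy as np

def apply_correlation(x, length_scale, dx):
    """Apply a homogeneous background-error correlation to a periodic 1D
    field by scaling its Fourier coefficients with a Gaussian spectral
    profile (the 1D analogue of a bi-Fourier representation; spherical
    harmonics play the same diagonalizing role on a global domain).
    """
    n = x.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)       # angular wavenumber
    s = np.exp(-0.5 * (k * length_scale) ** 2)    # Gaussian correlation spectrum
    s /= s.mean()                                 # unit amplitude at the origin
    return np.fft.ifft(np.fft.fft(x) * s).real
```

Applied to a unit impulse, the operator returns the correlation function itself: a smooth bump of width ~`length_scale` peaking at the impulse location.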

Paul Joe, Chris Doyle, Al Wallace, Stewart G. Cober, Bill Scott, George A. Isaac, Trevor Smith, Jocelyn Mailhot, Brad Snyder, Stephane Belair, Quinton Jansen, and Bertrand Denis
Julie M. Thériault, Roy Rasmussen, Trevor Smith, Ruping Mo, Jason A. Milbrandt, Melinda M. Brugman, Paul Joe, George A. Isaac, Jocelyn Mailhot, and Bertrand Denis

Abstract

Accurate forecasting of precipitation phase and intensity was critical information for many of the Olympic venue managers during the Vancouver 2010 Olympic and Paralympic Winter Games. Precipitation forecasting was complicated because of the complex terrain and warm coastal weather conditions in the Whistler area of British Columbia, Canada. The goal of this study is to analyze the processes impacting precipitation phase and intensity during a winter weather storm associated with rain and snow over complex terrain. The storm occurred during the second day of the Olympics when the downhill ski event was scheduled. At 0000 UTC 14 February, 2 h after the onset of precipitation, a rapid cooling was observed at the surface instrumentation sites. Precipitation was reported for 8 h, which coincided with the creation of a nearly 0°C isothermal layer, as well as a shift of the valley flow from up valley to down valley. Widespread snow was reported on Whistler Mountain with periods of rain at the mountain base despite the expectation derived from synoptic-scale models (15-km grid spacing) that the strong warm advection would maintain temperatures above freezing. Various model predictions are compared with observations, and the processes influencing the temperature, wind, and precipitation types are discussed. Overall, this case study provided a well-observed scenario of winter storms associated with rain and snow over complex terrain.
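The forecasting difficulty described above can be seen from how crude temperature-only phase rules are. The toy rule below (thresholds are illustrative assumptions, not the study's method) is exactly the kind of rule that breaks down near a 0°C isothermal layer, where rain and snow coexist:

```python
def precip_phase(t_surface_c, snow_threshold=0.0, rain_threshold=2.0):
    """Naive precipitation-phase rule from surface temperature alone.

    Illustrative only: near-0 degree C isothermal layers over complex
    terrain produce mixed phases that no single-threshold rule captures.
    """
    if t_surface_c <= snow_threshold:
        return "snow"
    if t_surface_c >= rain_threshold:
        return "rain"
    return "mixed"
```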

Ben P. Kirtman, Dughong Min, Johnna M. Infanti, James L. Kinter III, Daniel A. Paolino, Qin Zhang, Huug van den Dool, Suranjana Saha, Malaquias Pena Mendez, Emily Becker, Peitao Peng, Patrick Tripp, Jin Huang, David G. DeWitt, Michael K. Tippett, Anthony G. Barnston, Shuhua Li, Anthony Rosati, Siegfried D. Schubert, Michele Rienecker, Max Suarez, Zhao E. Li, Jelena Marshak, Young-Kwon Lim, Joseph Tribbia, Kathleen Pegion, William J. Merryfield, Bertrand Denis, and Eric F. Wood

The recent U.S. National Academies report, Assessment of Intraseasonal to Interannual Climate Prediction and Predictability, was unequivocal in recommending the need for the development of a North American Multimodel Ensemble (NMME) operational predictive capability. Indeed, this effort is required to meet the specific tailored regional prediction and decision support needs of a large community of climate information users.

The multimodel ensemble approach has proven extremely effective at quantifying prediction uncertainty due to uncertainty in model formulation and has proven to produce better prediction quality (on average) than any single model ensemble. This multimodel approach is the basis for several international collaborative prediction research efforts and an operational European system, and there are numerous examples of how this multimodel ensemble approach yields superior forecasts compared to any single model.

Based on two NOAA Climate Test Bed (CTB) NMME workshops (18 February and 8 April 2011), a collaborative and coordinated implementation strategy for an NMME prediction system has been developed and is currently delivering real-time seasonal-to-interannual predictions on the NOAA Climate Prediction Center (CPC) operational schedule. The hindcast and real-time prediction data are readily available (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and in graphical format from CPC (www.cpc.ncep.noaa.gov/products/NMME/). Moreover, the NMME forecast is already being used as guidance for operational forecasters. This paper describes the new NMME effort and presents an overview of the multimodel forecast quality and the complementary skill associated with individual models.
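One simple way to combine the models is to pool all members into a grand ensemble with equal weight per member; this is an illustrative combination strategy, not necessarily the operational NMME weighting:

```python
import numpy as np

def multimodel_prob(model_ensembles, threshold):
    """Probability of exceeding a threshold from a grand multimodel
    ensemble: pool every member of every model with equal weight.

    model_ensembles: list of per-model arrays of ensemble member values
    """
    pooled = np.concatenate([np.asarray(e, float) for e in model_ensembles])
    return (pooled > threshold).mean()
```

Pooling across models samples uncertainty in model formulation as well as in initial conditions, which is the stated reason the multimodel approach outperforms any single-model ensemble on average.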
