Search Results

You are looking at 1–10 of 19 items for Author or Editor: Vincent Fortin
Andrew D. Gronewold and Vincent Fortin
Full access
Silvia Innocenti, Pascal Matte, Vincent Fortin, and Natacha Bernier

Abstract

Reconstructing tidal signals is indispensable for verifying altimetry products, forecasting water levels, and evaluating long-term trends. Uncertainties in the estimated tidal parameters must be carefully assessed to adequately select the relevant tidal constituents and evaluate the accuracy of the reconstructed water levels. Customary harmonic analysis uses ordinary least squares (OLS) regression for its simplicity. However, OLS may misestimate the uncertainty of the regression coefficients because it neglects residual autocorrelation. This study introduces two residual resampling methods (moving-block and semiparametric bootstraps) for estimating the variability of tidal regression parameters and shows that they are powerful tools for assessing the effects of regression errors with nontrivial autocorrelation structures. A Monte Carlo experiment compares their performance to four analytical procedures selected from those provided by the RT_Tide, UTide, and NS_Tide packages and the robustfit.m MATLAB function. In the Monte Carlo experiment, an iteratively reweighted least squares (IRLS) regression is used to estimate the tidal parameters for hourly simulations of one-dimensional water levels. Generally, robustfit.m and the considered RT_Tide method overestimate the tidal amplitude variability, while the selected UTide and NS_Tide approaches underestimate it. After some substantial methodological corrections, the selected NS_Tide method shows adequate performance. As a result, estimating the regression variance–covariance with the considered RT_Tide, UTide, and NS_Tide methods may lead to erroneous selection of constituents and underestimation of water level uncertainty, compromising the validity of their results in some applications.
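The moving-block bootstrap described above can be sketched in a few lines: resampling overlapping blocks of regression residuals, rather than individual values, preserves their short-range autocorrelation in each replicate. This is a generic illustration under stated assumptions; the function name and block-length choice are ours, not code from RT_Tide, UTide, or NS_Tide:

```python
import numpy as np

def moving_block_bootstrap(residuals, block_length, seed=None):
    """Resample a residual series in overlapping blocks so that the
    autocorrelation within each block survives in the replicate."""
    rng = np.random.default_rng(seed)
    n = len(residuals)
    n_blocks = -(-n // block_length)  # ceiling division
    # random starting indices of overlapping blocks
    starts = rng.integers(0, n - block_length + 1, size=n_blocks)
    replicate = np.concatenate([residuals[s:s + block_length] for s in starts])
    return replicate[:n]  # trim to the original length
```

Each replicate can be added back to the fitted tidal signal and the regression refit; the spread of the refitted coefficients across many replicates estimates their sampling variability.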

Significance Statement

At many locations, the production of reliable water level predictions for marine navigation, emergency response, and adaptation to extreme weather relies on the precise modeling of tides. However, the complicated interaction between tides, weather, and other climatological processes may generate large uncertainties in tidal predictions. In this study, we investigate how different statistical methods lead to different quantifications of tidal model uncertainty when using data with completely known properties (e.g., knowing the tidal signal, as well as the amount and structure of the noise). The main finding is that the most commonly used statistical methods may incorrectly estimate the uncertainty in tidal parameters and predictions. This inconsistency stems from specific simplifying assumptions underlying the analysis and may be reduced by using statistical techniques based on data resampling.

Open access
Mabrouk Abaza, François Anctil, Vincent Fortin, and Richard Turcotte
Full access
Mabrouk Abaza, François Anctil, Vincent Fortin, and Richard Turcotte

Abstract

Meteorological ensemble prediction systems (M-EPS) are generally set up at lower resolution than their deterministic counterparts. Operational hydrologists are thus more prone to selecting deterministic meteorological forecasts for driving their hydrological models. Limited-area implementations of meteorological models may become a convenient way of providing the sought-after higher-resolution meteorological ensemble forecasts. This study compares the Canadian operational global EPS (M-GEPS) and the experimental regional EPS (M-REPS) for short-term operational hydrological ensemble forecasting over eight watersheds, for which performance and reliability were assessed. Higher-resolution deterministic forecasts were also available for the study. Results showed that both M-EPS outperformed their deterministic counterparts in terms of mean continuous ranked probability score (MCRPS) and mean absolute error (MAE), especially beyond a 24-h horizon. The global and regional M-EPS led to very similar performance in terms of RMSE, but the latter produced a larger spread and improved reliability. The M-REPS was deemed superior to its operational global counterpart, especially for its ability to better depict forecast uncertainty.
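The MCRPS-versus-MAE comparison above works because, for a single forecast–observation pair, the empirical CRPS of a one-member (deterministic) "ensemble" collapses to the absolute error. A minimal sketch of the empirical ensemble CRPS (function name ours, not from the study):

```python
import numpy as np

def ensemble_crps(members, obs):
    """Empirical CRPS of an ensemble forecast against one observation:
    E|X - y| - 0.5 * E|X - X'|, estimated over the members."""
    m = np.asarray(members, dtype=float)
    skill = np.mean(np.abs(m - obs))                          # distance to the observation
    spread = 0.5 * np.mean(np.abs(m[:, None] - m[None, :]))   # member dispersion
    return skill - spread
```

With a single member the spread term vanishes and the score equals |forecast − obs|, so averaging CRPS over many cases (the MCRPS) is directly comparable with the MAE of a deterministic forecast.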

Full access
Andrew D. Gronewold, Vincent Fortin, Robert Caldwell, and James Noel

Abstract

Monitoring, understanding, and forecasting the hydrologic cycle of large freshwater basins often requires a broad suite of data and models. Many of these datasets and models, however, are susceptible to variations in monitoring infrastructure and data dissemination protocols when watershed, political, and jurisdictional boundaries do not align. Reconciling hydrometeorological monitoring gaps and inconsistencies across the international Laurentian Great Lakes–St. Lawrence River basin is particularly challenging because of its size and because the basin’s dominant hydrologic feature is the vast surface waters of the Great Lakes.

For the tens of millions of Canadian and U.S. residents who live within the Great Lakes basin, seamless binational datasets are needed to better understand and predict coastal water-level fluctuations and other conditions that could potentially threaten human and environmental health. Binational products addressing this need have historically been developed and maintained by the Coordinating Committee on Great Lakes Basic Hydraulic and Hydrologic Data (Coordinating Committee). The Coordinating Committee recently held its one-hundredth semiannual meeting and reflected on a range of historical accomplishments while setting goals for future work. This article provides a synthesis of those achievements and goals. Particularly significant legacy and recently developed datasets of the Coordinating Committee include historical Great Lakes surface water elevations, basin-scale tributary inflow to the Great Lakes, and basin-scale estimates of both over-lake and over-land precipitation. Moving forward, members of the Coordinating Committee will work toward customizing state-of-the-art hydrologic and meteorological forecasting systems across the entire Great Lakes basin and toward promoting their products and protocols as templates for successful binational coordination across other large binational freshwater basins.

Full access
Franck Lespinas, Vincent Fortin, Guy Roy, Peter Rasmussen, and Tricia Stadnyk

Abstract

This paper presents an assessment of the operational system used by the Meteorological Service of Canada for producing near-real-time precipitation analyses over North America. The Canadian Precipitation Analysis (CaPA) system optimally combines available surface observations with numerical weather prediction (NWP) output in order to produce estimates of precipitation on a 15-km grid at each synoptic hour (0000, 0600, 1200, and 1800 UTC). The validation protocol used to assess the quality of the CaPA has demonstrated the usefulness of the system for producing reliable estimates of precipitation over Canada, even in areas with few or no weather stations. The CaPA is found to perform better in autumn, winter, and spring than in summer, owing to the difficulty of correctly producing convective precipitation with an NWP model of low spatial resolution. An investigation of the quality of the precipitation analyses in the 15 terrestrial ecozones of Canada indicates the need for a sufficient number of observations (at least ~1.17 stations per 10 000 km²) in order to produce a precipitation analysis that is significantly better than the raw NWP product. Future improvements of the CaPA system are expected from the inclusion of provincial networks as well as radar and satellite information.
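The ~1.17 stations per 10 000 km² figure quoted above is easy to turn into a screening check for whether a region has enough gauges for the analysis to beat the raw NWP background. A small illustrative helper (the function name and return format are ours):

```python
def meets_density_threshold(n_stations, area_km2, threshold=1.17):
    """Station density in gauges per 10 000 km^2, compared against the
    empirical CaPA threshold reported for the terrestrial ecozones."""
    density = n_stations / (area_km2 / 10_000.0)
    return density, density >= threshold
```

For example, 12 gauges over a 100 000 km² ecozone give 1.2 stations per 10 000 km², just above the reported threshold.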

Full access
Daniel Deacu, Vincent Fortin, Erika Klyszejko, Christopher Spence, and Peter D. Blanken

Abstract

The paper presents the incremental improvement of the prediction of the Great Lakes net basin supply (NBS) with the hydrometeorological model Modélisation Environnementale–Surface et Hydrologie (MESH) by increasing the accuracy of the simulated NBS components (overlake precipitation, lake evaporation, and runoff into the lake). This was achieved through a series of experiments with MESH and its parent numerical weather prediction model [the Canadian Global Environmental Multiscale model in its regional configuration (GEM Regional)]. With forcing extracted from operational GEM Regional forecasts, MESH underestimated the NBS in fall and winter. The underestimation increased when the GEM precipitation was replaced with its corrected version provided by the Canadian Precipitation Analysis. This pointed to overestimated lake evaporation and prompted a revision of the parameterization of the surface turbulent fluxes over water used in both MESH and GEM. The revised parameterization was validated against turbulent fluxes measured at a point on Lake Superior. Its use in MESH reduced the lake evaporation and largely corrected the NBS underestimation. However, the Lake Superior NBS became overestimated, signaling an inconsistency between the reduced lake evaporation and the prescribed precipitation. To remove the inconsistency, a new forcing dataset (including precipitation) was generated with the GEM model using the revised flux parameterization. A major NBS simulation improvement was obtained with this new atmospheric forcing, which reflects the atmospheric response to the modified surface fluxes over the lakes. Additional improvements resulted from correcting the runoff with a modified snowmelt rate and from the insertion of observed streamflows. The study shows that accurate simulation of lake evaporation is crucial for accurate NBS prediction.
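The NBS decomposition the study works with is a simple water balance over each lake, which also explains the reasoning chain above: overestimated evaporation biases NBS low. A minimal component-form sketch (function name ours; sign convention: evaporation removes water):

```python
def net_basin_supply(overlake_precip, lake_evap, runoff):
    """Component net basin supply, NBS = P - E + R, with all terms in
    consistent units (e.g., mm over the lake surface or m^3 s^-1)."""
    return overlake_precip - lake_evap + runoff
```

With P = 100, E = 120, and R = 40 (arbitrary illustrative values), the NBS is 20; inflating E by 30 units drops the NBS by the same 30, which is the underestimation mechanism the abstract diagnoses.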

Full access
Zuohao Cao, Murray D. Mackay, Christopher Spence, and Vincent Fortin

Abstract

Sensible and latent heat fluxes over Lake Superior are computed using a variational approach with a Bowen ratio constraint and inputs of 7 years of half-hourly observations of hydrometeorological variables over the lake. In an advance over previous work focusing on the sensible heat flux alone, computations of the latent heat flux are required here, so a new physical constraint based on the Bowen ratio is introduced. Verification of fluxes predicted by a Canadian operational coupled atmosphere–ocean model is made possible by the recent availability of observed and model-predicted fluxes over Lake Superior. The observed flux data, spanning a longer time period at higher temporal resolution than those used in previous studies, allow a detailed examination of performance in computing these fluxes. Evaluations utilizing eddy-covariance measurements over Lake Superior show that the variational method yields higher correlations between computed and measured sensible and latent heat fluxes than a flux-gradient method. The variational method is more accurate than the flux-gradient method in computing these fluxes at annual, monthly, daily, and hourly time scales. Under both unstable and stable conditions, the variational method considerably reduces the mean absolute errors produced by the flux-gradient approach. It is demonstrated with 2 months of data that the variational method obtains higher correlations between the observed and the computed sensible and latent heat fluxes, and lower mean absolute errors, than the coupled model predictions. Furthermore, comparisons are made between the coupled-model-predicted fluxes and fluxes computed from three buoy observations over Lake Superior.
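The Bowen-ratio constraint above ties the two unknown fluxes together: given β = H/LE and the available energy, both fluxes follow from the partition. The sketch below shows only that partition, not the variational scheme itself (whose cost function is not reproduced here); the function name is ours:

```python
def bowen_partition(available_energy, bowen_ratio):
    """Split available energy (W m^-2) into sensible (H) and latent (LE)
    heat fluxes under the constraint beta = H / LE, so that
    LE = A / (1 + beta) and H = A - LE."""
    le = available_energy / (1.0 + bowen_ratio)
    h = available_energy - le
    return h, le
```

For example, 300 W m⁻² of available energy with β = 0.5 splits into H = 100 and LE = 200 W m⁻², satisfying H/LE = β exactly.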

Full access
Dikra Khedhaouiria, Stéphane Bélair, Vincent Fortin, Guy Roy, and Franck Lespinas

Abstract

Consistent and continuous fields provided by precipitation analyses are valuable for hydrometeorological applications and land data assimilation modeling, among others. Providing uncertainty estimates is a logical step in the development of such analyses, and a consistent way to reach this objective is to produce an ensemble analysis. In the present study, a 6-h High-Resolution Ensemble Precipitation Analysis (HREPA) was developed for a domain covering Canada and the northern part of the contiguous United States. The data assimilation system is the same as that of the Canadian Precipitation Analysis (CaPA) and is based on optimal interpolation (OI). Precipitation from the Canadian national 2.5-km atmospheric prediction system constitutes the background field of the analysis, while at-site records and radar quantitative precipitation estimates (QPE) compose the observation datasets. By using stochastic perturbations, multiple random realizations of the observations and the background field were generated and fed to the data assimilation system, providing 24 HREPA members plus one control run. Based on one summer and one winter experiment, HREPA capabilities in terms of bias and skill were verified against at-site observations for different climatic regions. The results indicated HREPA's reliability and skill for almost all types of precipitation events in winter, and for precipitation of medium intensity in summer. For both seasons, HREPA displayed resolution and sharpness. The overall good performance of HREPA and the lack of ensemble precipitation analyses (PA) at such spatiotemporal resolution in the literature motivate further investigation of transitional seasons and of more advanced perturbation approaches.
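The OI step at the core of CaPA and HREPA is the standard linear analysis update; in the ensemble setting, each perturbed pair of observation and background realizations is pushed through the same equation to yield one member. A textbook sketch under the usual data-assimilation notation (matrix names are conventional, not CaPA's own):

```python
import numpy as np

def oi_analysis(x_b, y, H, B, R):
    """Optimal-interpolation update x_a = x_b + K (y - H x_b), with
    gain K = B H^T (H B H^T + R)^{-1}; x_b is the background state,
    y the observations, H the observation operator, B and R the
    background- and observation-error covariances."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_b + K @ (y - H @ x_b)
```

In the scalar case with equal background and observation error variances, the analysis lands halfway between background and observation, as expected for equally trusted sources.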

Open access
Gonzalo Leonardini, François Anctil, Vincent Vionnet, Maria Abrahamowicz, Daniel F. Nadeau, and Vincent Fortin

Abstract

The Soil, Vegetation, and Snow (SVS) land surface model was recently developed at Environment and Climate Change Canada (ECCC) for operational numerical weather prediction and hydrological forecasting. This study examined the performance of the snow scheme in the SVS model over multiple years at 10 well-instrumented sites from the Earth System Model–Snow Model Intercomparison Project (ESM-SnowMIP), which cover alpine, maritime, and taiga climates. The SVS snow scheme is a simple single-layer snowpack scheme that uses the force–restore method. Stand-alone, point-scale verification tests showed that the model is able to realistically reproduce the main characteristics of the snow cover at these sites, namely, snow water equivalent, density, snow depth, surface temperature, and albedo. SVS accurately simulated snow water equivalent, density, and snow depth at open sites, but exhibited lower performance for subcanopy snowpacks (forested sites). The lower performance was attributed mainly to the limitations of the compaction scheme and the absence of a snow interception scheme. At open sites, the SVS snow surface temperatures were well represented but exhibited a cold bias due to poor nighttime representation. SVS produced a reasonably accurate representation of snow albedo, but there was a tendency to overestimate late winter albedo. Sensitivity tests suggested improvements associated with the snowmelt formulation in SVS.
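The force–restore method named above evolves the surface temperature under two competing terms: a forcing by the surface energy flux and a relaxation ("restore") toward a deeper temperature on a diurnal time scale. A one-step explicit sketch, with illustrative coefficient values that are not SVS's:

```python
import math

def force_restore_step(t_s, t_d, forcing, dt, c_t=1e-5, tau=86400.0):
    """One explicit step of dTs/dt = c_t * G - (2*pi/tau) * (Ts - Td):
    the surface energy flux G (W m^-2) forces the surface temperature
    Ts (K), while Ts is restored toward the deep temperature Td (K)
    over the diurnal period tau (s); c_t is a thermal coefficient."""
    return t_s + dt * (c_t * forcing - (2.0 * math.pi / tau) * (t_s - t_d))
```

With zero forcing, the surface temperature relaxes monotonically toward the deep temperature, which is the behavior the restore term is designed to provide.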

Full access