Search Results

Showing 1–9 of 9 items for Author or Editor: A. Zadra
A. Zadra, R. McTaggart-Cowan, P. A. Vaillancourt, M. Roch, S. Bélair, and A.-M. Leduc

Abstract

Deep convection is one of various complex processes driving the evolution of tropical cyclones (TCs). The scales associated with deep convection are too small to be resolved by global NWP models. In the deep convection parameterization used by the Canadian Global Deterministic Prediction System (GDPS), the trigger function depends on various criteria, one of which is the adjustable “trigger velocity” parameter, a vertical velocity threshold used in the parcel stability test of the scheme. In this study, the sensitivity of the GDPS TC activity and precipitation distribution to convective triggering parameters is investigated by varying this threshold. Multiple basins are considered for three TC seasons, and the impacts of trigger velocity variations on TC statistics (forecast hits, bias, false alarms, and track and intensity errors) and on the model’s genesis potential index (GPI) are measured. It is shown that a reduction of the trigger velocity, from 0.05 to 0.01 m s⁻¹, over the tropical oceans leads to increased convective stabilization of atmospheric columns, as well as an increase in convective precipitation amounts but a reduction in total (subgrid plus grid scale) precipitation accumulations. The trigger adjustment also yields a significant reduction of TC false alarm ratios, with no impact on forecast mean errors for true cyclones other than an expected deterioration of the intensity bias, and a systematic reduction of the average GPI over various basins at all lead times. A conceptual model is proposed to explain the relation between trigger adjustments and TC development.
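The role of the trigger-velocity threshold in a parcel stability test can be sketched as follows. This is a hypothetical illustration of the kind of threshold test the abstract describes; the function and variable names are invented for the sketch and do not come from the actual GDPS scheme.

```python
# Hypothetical sketch of a trigger-velocity test in a deep convection
# parameterization; names and logic are illustrative, not the GDPS code.

def convection_triggered(parcel_w, trigger_velocity=0.05):
    """Parcel stability test: deep convection is triggered only when the
    test parcel's vertical velocity (m/s) exceeds the adjustable
    trigger-velocity threshold."""
    return parcel_w > trigger_velocity

# Lowering the threshold from 0.05 to 0.01 m/s lets weaker parcels
# activate the scheme, so convection fires (and stabilizes the column)
# more often:
weak_parcel = 0.02
assert not convection_triggered(weak_parcel, trigger_velocity=0.05)
assert convection_triggered(weak_parcel, trigger_velocity=0.01)
```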

Full access
Frédérick Chosson, Paul A. Vaillancourt, Jason A. Milbrandt, M. K. Yau, and Ayrton Zadra

Abstract

Two-moment multiclass microphysics schemes are very promising tools to be used in high-resolution NWP models. However, they must be adapted for coarser resolutions. Here, a twofold solution is proposed—namely, a simple representation of subgrid cloud and precipitation fraction—as well as a microphysical sub-time-stepping method. The scheme is easy to implement, allows supersaturation in ice cloud, and exhibits flexibility for adoption across model grid spacing. It is implemented in the Milbrandt and Yau two-moment microphysics scheme with prognostic precipitation in the context of a simple 1D kinematic model as well as a mesoscale NWP model [the Canadian regional Global Environmental Multiscale model (GEM)]. Sensitivity tests were performed, and the results highlight the advantages and disadvantages of the two-moment multiclass cloud scheme relative to the classical Sundqvist scheme. The respective roles of subgrid cloud fraction, precipitation fraction, and time splitting were also studied. When compared to the Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)/CloudSat-retrieved cloud mask, cloud fraction, and ice water content, it is found that the proposed solutions significantly improve the behavior of the Milbrandt and Yau microphysics scheme at the regional NWP scale, suggesting that the subgrid cloud and precipitation fraction technique can be used across model resolutions.
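The sub-time-stepping idea can be illustrated with a toy integration. This is a minimal sketch only: a linear decay stands in for the real (stiff) microphysics tendency, and the names are assumptions, not the Milbrandt–Yau code.

```python
# Toy illustration of microphysical sub-time-stepping; a simple linear
# decay stands in for the real microphysics tendency.

def microphysics_tendency(q, rate=2.5):
    """Stand-in tendency: condensate q decays at a fixed rate (1/s)."""
    return -rate * q

def apply_microphysics(q, dt_model, n_substeps=1):
    """Integrate the tendency over one model step using n_substeps
    shorter microphysical steps (forward Euler)."""
    dt = dt_model / n_substeps
    for _ in range(n_substeps):
        q = q + dt * microphysics_tendency(q)
    return q

# With a coarse model step the single-step update is unstable
# (|1 - 2.5| > 1), while four substeps give a stable decay:
q0 = 1.0
single = apply_microphysics(q0, dt_model=1.0, n_substeps=1)
sub = apply_microphysics(q0, dt_model=1.0, n_substeps=4)
assert abs(single) > q0
assert 0.0 < sub < q0
```

The same splitting lets a scheme tuned for short convective-scale time steps run inside a coarser-resolution model without numerical instability.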

Full access
Ron McTaggart-Cowan, Paul A. Vaillancourt, Leo Separovic, Shawn Corvec, and Ayrton Zadra

Abstract

Numerical models that are unable to resolve moist convection in the atmosphere employ physical parameterizations to represent the effects of the associated processes on the resolved-scale state. Most of these schemes are designed to represent the dominant class of cumulus convection that is driven by latent heat release in a conditionally unstable profile with a surplus of convective available potential energy (CAPE). However, an important subset of events occurs in low-CAPE environments in which potential and symmetric instabilities can sustain moist convective motions. Convection schemes that are dependent on the presence of CAPE are unable to depict accurately the effects of cumulus convection in these cases. A mass-flux parameterization is developed to represent such events, with triggering and closure components that are specifically designed to depict subgrid-scale convection in low-CAPE profiles. Case studies show that the scheme eliminates the “bull’s-eyes” in precipitation guidance that develop in the absence of parameterized convection, and that it can represent the initiation of elevated convection that organizes squall-line structure. The introduction of the parameterization leads to significant improvements in the quality of quantitative precipitation forecasts, including a large reduction in the frequency of spurious heavy-precipitation events predicted by the model. An evaluation of surface and upper-air guidance shows that the scheme systematically improves the model solution in the warm season, a result that suggests that the parameterization is capable of accurately representing the effects of moist convection in a range of low-CAPE environments.

Open access
Ron McTaggart-Cowan, Paul A. Vaillancourt, Ayrton Zadra, Leo Separovic, Shawn Corvec, and Daniel Kirshbaum

Abstract

The parameterization of deep moist convection as a subgrid-scale process in numerical models of the atmosphere is required at resolutions that extend well into the convective “gray zone,” the range of grid spacings over which such convection is partially resolved. However, as model resolution approaches the gray zone, the assumptions upon which most existing convective parameterizations are based begin to break down. We focus here on one aspect of this problem that emerges as the temporal and spatial scales of the model become similar to those of deep convection itself. The common practice of static tendency application over a prescribed adjustment period leads to logical inconsistencies at resolutions approaching the gray zone, while more frequent refreshment of the convective calculations can lead to undesirable intermittent behavior. A proposed parcel-based treatment of convective initiation introduces memory into the system in a manner that is consistent with the underlying physical principles of convective triggering, thus reducing the prevalence of unrealistic gradients in convective activity in an operational model running with a 10 km grid spacing. The subsequent introduction of a framework that considers convective clouds as persistent objects, each possessing unique attributes that describe physically relevant cloud properties, appears to improve convective precipitation patterns by depicting realistic cloud memory, movement, and decay. Combined, these developments form a Lagrangian view of convection that addresses one aspect of the convective gray zone problem and lays a foundation for more realistic treatments of the convective life cycle in parameterization schemes.
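The idea of treating a convective cloud as a persistent object with memory, movement, and decay can be sketched as below. The attribute set and time scales are illustrative assumptions, not the operational implementation.

```python
import math
from dataclasses import dataclass

# Hypothetical persistent convective-cloud object carrying state between
# physics time steps; attributes are invented for illustration.

@dataclass
class CloudObject:
    column: int        # grid column currently hosting the cloud
    mass_flux: float   # cloud-base mass flux (kg m^-2 s^-1)
    age: float = 0.0   # seconds since triggering

def step_cloud(cloud, dt, advect_cols=0, decay_time=1800.0):
    """Advance one model step: the object moves with the steering flow
    and decays exponentially, instead of being re-diagnosed from
    scratch each time step."""
    cloud.column += advect_cols
    cloud.age += dt
    cloud.mass_flux *= math.exp(-dt / decay_time)
    return cloud

cloud = CloudObject(column=10, mass_flux=0.1)
step_cloud(cloud, dt=600.0, advect_cols=1)
assert cloud.column == 11 and cloud.age == 600.0
assert 0.0 < cloud.mass_flux < 0.1   # decayed, but still active
```

Carrying such objects forward in time supplies the memory that a purely diagnostic trigger, recomputed from the instantaneous profile, cannot provide.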

Free access
E. S. Takle, J. Roads, B. Rockel, W. J. Gutowski Jr., R. W. Arritt, I. Meinke, C. G. Jones, and A. Zadra

A new approach, called transferability intercomparisons, is described for advancing both understanding and modeling of the global water cycle and energy budget. Under this approach, individual regional climate models perform simulations with all modeling parameters and parameterizations held constant over a specific period on several prescribed domains representing different climatic regions. The transferability framework goes beyond previous regional climate model intercomparisons to provide a global method for testing and improving model parameterizations by constraining the simulations within analyzed boundaries for several domains. Transferability intercomparisons expose the limits of our current regional modeling capacity by examining model accuracy on a wide range of climate conditions and realizations. Intercomparison of these individual model experiments provides a means for evaluating strengths and weaknesses of models outside their “home domains” (domain of development and testing). Reference sites that are conducting coordinated measurements within the continental-scale experiments of the Global Energy and Water Cycle Experiment (GEWEX) Hydrometeorology Panel provide data for evaluation of model abilities to simulate specific features of the water and energy cycles. A systematic intercomparison across models and domains more clearly exposes collective biases in the modeling process. By isolating particular regions and processes, regional model transferability intercomparisons can more effectively explore the spatial and temporal heterogeneity of predictability. A general improvement of model ability to simulate diverse climates will provide more confidence that models used for future climate scenarios might be able to simulate conditions on a particular domain that are beyond the range of previously observed climates.

Full access
P. L. Houtekamer, Bin He, Dominik Jacques, Ron McTaggart-Cowan, Leo Separovic, Paul A. Vaillancourt, Ayrton Zadra, and Xingxiu Deng

Abstract

An important step in an ensemble Kalman filter (EnKF) algorithm is the integration of an ensemble of short-range forecasts with a numerical weather prediction (NWP) model. A multiphysics approach is used in the Canadian global EnKF system. This paper explores whether the many integrations with different versions of the model physics can be used to obtain more accurate and more reliable probability distributions for the model parameters. Some model parameters have a continuous range of possible values. Other parameters are categorical and act as switches between different parameterizations. In an evolutionary algorithm, the member configurations that contribute most to the quality of the ensemble are duplicated, with a small perturbation added, at the expense of configurations that perform poorly. The evolutionary algorithm is being used in the migration of the EnKF to a new version of the Canadian NWP model with upgraded physics. The quality of configurations is measured with both a deterministic and an ensemble score, using the observations assimilated in the EnKF system. When using the ensemble score in the evaluation, the algorithm is shown to be able to converge to non-Gaussian distributions. However, for several model parameters, there is not enough information to arrive at improved distributions. The optimized system features slight reductions in biases for radiance measurements that are sensitive to humidity. Modest improvements are also seen in medium-range ensemble forecasts.
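The duplicate-and-perturb step of such an evolutionary algorithm can be sketched as follows. This is a minimal illustration under the assumption that each member configuration is a dict with one continuous parameter; the parameter name and scores are invented for the sketch.

```python
import random

# Minimal sketch of one evolutionary generation: the best-scoring member
# configuration is duplicated, with a small multiplicative perturbation of
# its continuous parameter, in place of the worst-scoring one.

def evolve(configs, scores, perturb_scale=0.05, rng=None):
    rng = rng or random.Random(0)
    best = max(range(len(configs)), key=lambda i: scores[i])
    worst = min(range(len(configs)), key=lambda i: scores[i])
    child = dict(configs[best])                       # duplicate the winner
    child["mixing_length"] *= 1.0 + rng.gauss(0.0, perturb_scale)
    configs[worst] = child                            # replace the loser
    return configs

configs = [{"mixing_length": 100.0},
           {"mixing_length": 200.0},
           {"mixing_length": 400.0}]
scores = [0.9, 0.2, 0.6]   # higher is better (invented values)
evolve(configs, scores)
# The worst member (index 1) now holds a lightly perturbed copy of the best:
assert abs(configs[1]["mixing_length"] - 100.0) < 25.0
assert len(configs) == 3
```

Iterating this step concentrates the ensemble on well-performing parameter values while the added perturbations keep a spread of candidates alive.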

Open access
Ron McTaggart-Cowan, Leo Separovic, Rabah Aider, Martin Charron, Michel Desgagné, Pieter L. Houtekamer, Danahé Paquin-Ricard, Paul A. Vaillancourt, and Ayrton Zadra

Abstract

Accurately representing model-based sources of uncertainty is essential for the development of reliable ensemble prediction systems for NWP applications. Uncertainties in discretizations, algorithmic approximations, and diabatic and unresolved processes combine to influence forecast skill in a flow-dependent way. An emerging approach designed to provide a process-level representation of these potential error sources, stochastically perturbed parameterizations (SPP), is introduced into the Canadian operational Global Ensemble Prediction System. This implementation extends the SPP technique beyond its typical application to free parameters in the physics suite by sampling uncertainty both within the dynamical core and at the formulation level using “error models” when multiple physical closures are available. Because SPP perturbs components within the model, internal consistency is ensured and conservation properties are not affected. The full SPP scheme is shown to increase ensemble spread to keep pace with error growth on a global scale. The sensitivity of the ensemble to each independently perturbed “element” is then assessed, with those responsible for the bulk of the response analyzed in more detail. Perturbations to surface exchange coefficients and the turbulent mixing length have a leading impact on near-surface statistics. Aloft, a tropically focused error model representing uncertainty in the advection scheme is found to initiate growing perturbations on the subtropical jet that lead to forecast improvements at higher latitudes. The results of Part I suggest that SPP has the potential to serve as a reliable representation of model uncertainty for ensemble NWP applications.

Significance Statement

Ensemble systems account for the negative impact that uncertainties in prediction models have on forecasts. Here, uncertain model parameters and algorithms are subjected to perturbations that represent their impact on forecast error. By initiating error growth within the model calculations, the equally skillful members of the ensemble remain physically realistic and self-consistent, which is not guaranteed by other depictions of model error. This “stochastically perturbed parameterization” (SPP) technique comprises many small error sources, each analyzed in isolation. Each source is related to a limited set of processes, making it possible to determine how the individual perturbations affect the forecast. We conclude that SPP in the Canadian Global Ensemble Forecasting System produces realistic estimates of the impact of model uncertainties on forecast skill.
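The core mechanism, multiplying a free parameter by a smoothly evolving random factor, can be sketched with a simple red-noise stream. This is a conceptual illustration only; the parameter name, amplitude, and correlation time are assumptions, not the operational SPP configuration.

```python
import math
import random

# Conceptual SPP sketch: a free parameter is multiplied by exp of a
# first-order autoregressive (red-noise) process, so perturbations evolve
# smoothly in time and the perturbed value stays positive.

def spp_stream(base_value, n_steps, sigma=0.2, tau_steps=10.0, rng=None):
    rng = rng or random.Random(1)
    phi = math.exp(-1.0 / tau_steps)            # AR(1) autocorrelation
    eps = sigma * math.sqrt(1.0 - phi * phi)    # stationary std = sigma
    x, values = 0.0, []
    for _ in range(n_steps):
        x = phi * x + eps * rng.gauss(0.0, 1.0)
        values.append(base_value * math.exp(x))
    return values

# e.g. a surface exchange coefficient perturbed over 48 time steps:
exchange_coeff = spp_stream(base_value=1.0e-3, n_steps=48)
assert len(exchange_coeff) == 48
assert all(v > 0.0 for v in exchange_coeff)   # multiplicative, sign-preserving
```

Because each perturbed element gets its own independent stream, the contribution of every error source to ensemble spread can be assessed in isolation, as done in the abstract above.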

Open access
Claude Girard, André Plante, Michel Desgagné, Ron McTaggart-Cowan, Jean Côté, Martin Charron, Sylvie Gravel, Vivian Lee, Alain Patoine, Abdessamad Qaddouri, Michel Roch, Lubos Spacek, Monique Tanguay, Paul A. Vaillancourt, and Ayrton Zadra

Abstract

The Global Environmental Multiscale (GEM) model is the Canadian atmospheric model used for meteorological forecasting at all scales. A limited-area version now also exists. It is a gridpoint model with an implicit semi-Lagrangian iterative space–time integration scheme. In the “horizontal,” the equations are written in spherical coordinates with the traditional shallow atmosphere approximations and are discretized on an Arakawa C grid. In the “vertical,” the equations were originally defined using a hydrostatic-pressure coordinate and discretized on a regular (unstaggered) grid, a configuration found to be particularly susceptible to noise. Among the possible alternatives, the Charney–Phillips grid, with its unique characteristics, and, as the vertical coordinate, log-hydrostatic pressure are adopted. In this paper, an attempt is made to justify these two choices on theoretical grounds. The resulting equations and their vertical discretization are described, and the solution method of what forms the new dynamical core of GEM is presented, with a focus on these two aspects.

Full access