Search Results
You are looking at 1–10 of 12 items for
Author or Editor: Ayrton Zadra
Abstract
Turbulence in the planetary boundary layer (PBL) transports heat, momentum, and moisture in eddies that are not resolvable by current NWP systems. Numerical models typically parameterize this process using vertical diffusion operators whose coefficients depend on the intensity of the expected turbulence. The PBL scheme employed in this study uses a one-and-a-half-order closure based on a predictive equation for the turbulent kinetic energy (TKE). For a stably stratified fluid, the growth and decay of TKE are largely controlled by the dynamic stability of the flow as represented by the Richardson number. Although the existence of a critical Richardson number that uniquely separates turbulent and laminar regimes is predicted by linear theory and perturbation analysis, observational evidence and total energy arguments suggest that its value is highly uncertain. This can be explained in part by the apparent presence of turbulence regime-dependent critical values, a property known as Richardson number hysteresis. In this study, a parameterization of Richardson number hysteresis is proposed. The impact of including this effect is evaluated in systems of increasing complexity: a single-column model, a forecast case study, and a full assimilation cycle. It is shown that accounting for a hysteretic loop in the TKE equation improves guidance for a canonical freezing rain event by reducing the diffusive elimination of the warm nose aloft, thus improving the model’s representation of PBL profiles. Systematic enhancements in predictive skill further suggest that representing Richardson number hysteresis in PBL schemes using higher-order closures has the potential to yield important and physically relevant improvements in guidance quality.
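A minimal sketch of how regime-dependent critical values might enter a TKE-based scheme is given below, assuming two hypothetical thresholds (a smaller one for the onset of turbulence and a larger one for its persistence) so that the criterion depends on the current turbulence state of the layer. The threshold values and the simple source/sink form are illustrative assumptions, not the coefficients of the operational scheme.

    # Illustrative sketch of Richardson number hysteresis in a TKE-style scheme.
    # Thresholds and the source-term form are hypothetical, not operational values.

    RI_ONSET = 0.25    # assumed critical Ri for the laminar -> turbulent transition
    RI_OFFSET = 1.0    # assumed (larger) critical Ri below which turbulence persists

    def turbulent_next(ri, currently_turbulent):
        """Regime-dependent criterion: the Ri threshold depends on the current state."""
        if currently_turbulent:
            return ri < RI_OFFSET   # established turbulence survives up to a larger Ri
        return ri < RI_ONSET        # onset of turbulence requires a smaller Ri

    def tke_tendency(tke, ri, currently_turbulent, tau=600.0):
        """Toy TKE source/sink: growth in the turbulent regime, decay otherwise."""
        if turbulent_next(ri, currently_turbulent):
            return (1.0 - ri) * tke / tau    # schematic shear production minus buoyancy
        return -tke / tau                    # dissipation-dominated decay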
Abstract
An algorithm based on the empirical normal mode decomposition technique is proposed as a diagnostic tool for studies of atmospheric variability. It begins by analyzing the transient eddies in terms of empirical modes that are orthogonal with respect to wave activities. Time-dependent amplitudes together with wave activity spectra are used to classify the modes and compute their propagation properties.
The algorithm is applied to a sequence of four Northern Hemisphere winters taken from the National Centers for Environmental Prediction reanalyses, with a focus on the upper troposphere and lower stratosphere, giving a set of empirical modes of wind, pressure, specific volume, and potential vorticity. Results indicate that most of the wave activity is carried by large-scale, eastward-propagating modes centered at middle and high latitudes. Some properties of the leading modes, such as their average phase speeds, are in good agreement with the predictions of linear dynamics.
Characteristics of the leading wavenumber-5 mode, such as its dipolar pressure pattern near the summer hemisphere tropopause, its propagation speed of 12 m s⁻¹, and its decay time of 3 days, can be explained by the theory of quasi modes, defined as superpositions of singular modes sharply peaked in the phase speed domain. Other large-scale, midlatitude modes also show properties compatible with the quasi-modal description, suggesting that quasi modes play an important role in upper-troposphere dynamics.
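A generic construction of empirical modes that are orthogonal with respect to a quadratic wave-activity norm can be sketched as a norm-weighted eigenproblem, as below. The function name, the Cholesky-based transformation, and the data layout are assumptions for illustration only; this is not the exact ENM formulation used in the paper.

    # Minimal sketch of an empirical-mode decomposition that is orthogonal with
    # respect to a quadratic wave-activity norm M (assumed symmetric positive
    # definite).  Generic construction, not the paper's exact ENM algorithm.
    import numpy as np

    def empirical_modes(X, M):
        """X: (ntime, nstate) anomalies; M: (nstate, nstate) wave-activity metric."""
        L = np.linalg.cholesky(M)            # M = L L^T
        Y = X @ L                            # transform so the norm becomes Euclidean
        C = Y.T @ Y / X.shape[0]             # covariance in the wave-activity norm
        vals, vecs = np.linalg.eigh(C)       # orthonormal modes in transformed space
        modes = np.linalg.solve(L.T, vecs)   # map back: modes are M-orthogonal
        amps = X @ M @ modes                 # time-dependent amplitudes
        return vals[::-1], modes[:, ::-1], amps[:, ::-1]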
Abstract
An algorithm based on empirical normal mode analysis is used in a comparative study of the climatology and variability in dynamical-core experiments of the Global Environmental Multiscale model. The algorithm is proposed as a means to assess properties of the model's dynamical core and to establish objective criteria for model intercomparison studies. In this paper, the analysis is restricted to the upper troposphere and lower stratosphere. Two dynamical-core experiments are considered: one with the forcing proposed by Held and Suarez, later modified by Williamson et al. (the HSW experiment), and the other with a forcing inspired by the prescriptions of Boer and Denis (BD). Results are also compared with those of an earlier diagnosis of NCEP reanalyses. Normal modes and wave-activity spectra are similar to those found in the reanalysis data, although details depend on the forcing. For instance, wave-energy amplitudes are higher with the BD forcing, and an approximate energy equipartition is observed in the spectrum of wavenumber-1 modes in the NCEP data and the BD experiment but not in the HSW experiment. The HSW forcing has a relatively strong relaxation acting on the complete temperature field, whereas the BD forcing acts only on the zonal-mean temperature, letting the internal dynamics alone drive the wave-activity spectral cascade. This difference may explain why the BD forcing is more successful in reproducing the observed wave activity in the upper troposphere and lower stratosphere.
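The essential contrast between the two forcings can be illustrated schematically: a Held–Suarez-type forcing relaxes the complete temperature field toward a prescribed equilibrium (damping eddies as well), whereas a BD-type forcing relaxes only the zonal mean, leaving the eddies free. The relaxation timescale and field shapes below are placeholders, not the published forcing parameters of either experiment.

    # Schematic contrast between a full-field Newtonian relaxation (HSW-like)
    # and a zonal-mean-only relaxation (BD-like).  Timescale is a placeholder.
    import numpy as np

    def hsw_like_tendency(T, T_eq, tau=40.0 * 86400.0):
        """Relax the complete temperature field toward equilibrium (damps eddies too)."""
        return -(T - T_eq) / tau

    def bd_like_tendency(T, T_eq, tau=40.0 * 86400.0, lon_axis=-1):
        """Relax only the zonal-mean temperature; the eddies evolve freely."""
        T_bar = T.mean(axis=lon_axis, keepdims=True)
        T_eq_bar = T_eq.mean(axis=lon_axis, keepdims=True)
        return -(T_bar - T_eq_bar) / tau * np.ones_like(T)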
Abstract
Two-moment multiclass microphysics schemes are promising tools for high-resolution NWP models. However, they must be adapted for coarser resolutions. Here, a twofold solution is proposed: a simple representation of subgrid cloud and precipitation fraction, together with a microphysical sub-time-stepping method. The scheme is easy to implement, allows supersaturation in ice cloud, and is flexible enough to be used across a range of model grid spacings. It is implemented in the Milbrandt and Yau two-moment microphysics scheme with prognostic precipitation in the context of a simple 1D kinematic model as well as a mesoscale NWP model [the Canadian regional Global Environmental Multiscale model (GEM)]. Sensitivity tests were performed, and the results highlight the advantages and disadvantages of the two-moment multiclass cloud scheme relative to the classical Sundqvist scheme. The respective roles of subgrid cloud fraction, precipitation fraction, and time splitting were also studied. When compared to the Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)/CloudSat-retrieved cloud mask, cloud fraction, and ice water content, it is found that the proposed solutions significantly improve the behavior of the Milbrandt and Yau microphysics scheme at the regional NWP scale, suggesting that the subgrid cloud and precipitation fraction technique can be used across model resolutions.
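A minimal sketch of the two ingredients is given below, assuming a relative-humidity-based cloud fraction of the Sundqvist type and a fixed number of microphysical sub-steps; the critical relative humidity, the sub-step count, and the microphysics_step callback are placeholders rather than the actual GEM/Milbrandt–Yau implementation.

    # Sketch of (i) a simple RH-based subgrid cloud fraction and (ii) microphysical
    # sub-time-stepping.  Thresholds and the microphysics callback are placeholders.
    import numpy as np

    def cloud_fraction(rh, rh_crit=0.8):
        """Sundqvist-type diagnostic: partial cloudiness between rh_crit and saturation."""
        f = 1.0 - np.sqrt(np.clip((1.0 - rh) / (1.0 - rh_crit), 0.0, 1.0))
        return np.clip(f, 0.0, 1.0)

    def substep_microphysics(state, dt, microphysics_step, n_sub=4):
        """Call the (stiff) microphysics n_sub times with a shorter time step."""
        dt_sub = dt / n_sub
        for _ in range(n_sub):
            state = microphysics_step(state, dt_sub)
        return state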
Abstract
Numerical models that are unable to resolve moist convection in the atmosphere employ physical parameterizations to represent the effects of the associated processes on the resolved-scale state. Most of these schemes are designed to represent the dominant class of cumulus convection that is driven by latent heat release in a conditionally unstable profile with a surplus of convective available potential energy (CAPE). However, an important subset of events occurs in low-CAPE environments in which potential and symmetric instabilities can sustain moist convective motions. Convection schemes that are dependent on the presence of CAPE are unable to depict accurately the effects of cumulus convection in these cases. A mass-flux parameterization is developed to represent such events, with triggering and closure components that are specifically designed to depict subgrid-scale convection in low-CAPE profiles. Case studies show that the scheme eliminates the “bull’s-eyes” in precipitation guidance that develop in the absence of parameterized convection, and that it can represent the initiation of elevated convection that organizes squall-line structure. The introduction of the parameterization leads to significant improvements in the quality of quantitative precipitation forecasts, including a large reduction in the frequency of spurious heavy-precipitation events predicted by the model. An evaluation of surface and upper-air guidance shows that the scheme systematically improves the model solution in the warm season, a result that suggests that the parameterization is capable of accurately representing the effects of moist convection in a range of low-CAPE environments.
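As a hedged illustration of how a trigger might be formulated without relying on CAPE, the sketch below checks for a layer of potential instability (equivalent potential temperature decreasing with height) coincident with resolved ascent, and closes the mass flux on the resolved vertical motion. The thresholds, the closure constant, and the function names are illustrative assumptions, not the published triggering and closure formulation.

    # Illustrative low-CAPE trigger: potential instability (theta_e decreasing
    # with height) together with resolved ascent.  Constants are assumptions.
    import numpy as np

    def low_cape_trigger(theta_e, w, z, w_min=0.05, dtheta_min=0.5):
        """theta_e, w, z: 1D profiles (bottom to top).  Returns True if triggered."""
        dtheta_e = np.diff(theta_e)                 # K per layer
        unstable_layer = dtheta_e < -dtheta_min     # theta_e decreasing with height
        ascending = w[:-1] > w_min                  # resolved upward motion (m/s)
        return bool(np.any(unstable_layer & ascending))

    def mass_flux_closure(w_grid, rho, efficiency=0.1):
        """Toy closure: cloud-base mass flux proportional to resolved ascent."""
        return efficiency * rho * max(w_grid, 0.0)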
Abstract
The parameterization of deep moist convection as a subgrid-scale process in numerical models of the atmosphere is required at resolutions that extend well into the convective “gray zone,” the range of grid spacings over which such convection is partially resolved. However, as model resolution approaches the gray zone, the assumptions upon which most existing convective parameterizations are based begin to break down. We focus here on one aspect of this problem that emerges as the temporal and spatial scales of the model become similar to those of deep convection itself. The common practice of static tendency application over a prescribed adjustment period leads to logical inconsistencies at resolutions approaching the gray zone, while more frequent refreshment of the convective calculations can lead to undesirable intermittent behavior. A proposed parcel-based treatment of convective initiation introduces memory into the system in a manner that is consistent with the underlying physical principles of convective triggering, thus reducing the prevalence of unrealistic gradients in convective activity in an operational model running with a 10 km grid spacing. The subsequent introduction of a framework that considers convective clouds as persistent objects, each possessing unique attributes that describe physically relevant cloud properties, appears to improve convective precipitation patterns by depicting realistic cloud memory, movement, and decay. Combined, this Lagrangian view of convection addresses one aspect of the convective gray zone problem and lays a foundation for more realistic treatments of the convective life cycle in parameterization schemes.
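A minimal sketch of what treating convective clouds as persistent objects might look like follows: each object carries a position, an intensity, and an age, and is advected, aged, and decayed between physics calls. The attribute names, the advection with a steering flow, and the exponential decay law are illustrative assumptions, not the operational implementation.

    # Sketch of convective clouds as persistent Lagrangian objects with memory.
    # Attribute names, advection, and decay law are illustrative assumptions.
    from dataclasses import dataclass
    import math

    @dataclass
    class CloudObject:
        lon: float          # object position (degrees)
        lat: float
        mass_flux: float    # current cloud-base mass flux (kg m-2 s-1)
        age: float = 0.0    # seconds since initiation

        def step(self, u, v, dt, lifetime=3600.0):
            """Advect with the steering flow, age the object, and decay its intensity."""
            self.lon += u * dt / (111.0e3 * math.cos(math.radians(self.lat)))
            self.lat += v * dt / 111.0e3
            self.age += dt
            self.mass_flux *= math.exp(-dt / lifetime)

    def prune(clouds, threshold=1.0e-4):
        """Drop objects whose mass flux has decayed below a (hypothetical) threshold."""
        return [c for c in clouds if c.mass_flux > threshold]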
Abstract
A major set of changes was made to the Environment Canada global deterministic prediction system during the fall of 2014, including the replacement of four-dimensional variational data assimilation (4DVar) by four-dimensional ensemble–variational data assimilation (4DEnVar). The new system provides improved forecast accuracy relative to the previous system, based on results from two sets of two-month data assimilation and forecast experiments. The improvements are largest at shorter lead times, but significant improvements are maintained in the 120-h forecasts for most regions and vertical levels. The improvements result from the combined impact of numerous changes, in addition to the use of 4DEnVar. These include an improved treatment of radiosonde and aircraft observations, an improved radiance bias correction procedure, the assimilation of ground-based GPS data, a doubling of the number of assimilated channels from hyperspectral infrared sounders, and an improved approach for initializing model forecasts. Because of the replacement of 4DVar with 4DEnVar, the new system is also more computationally efficient and easier to parallelize, facilitating a doubling of the analysis increment horizontal resolution. Replacement of a full-field digital filter with the 4D incremental analysis update approach, together with the recycling of several key variables that are not directly analyzed, significantly reduced the model spinup both during the data assimilation cycle and in medium-range forecasts.
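The incremental analysis update mentioned above can be sketched simply: instead of inserting the analysis increment at once (which excites spinup), the increment is applied as a forcing spread over the assimilation window. The sketch below assumes uniform weights and a generic model_step callback; it is not the operational code.

    # Sketch of a (4D) incremental analysis update: the analysis increment is
    # applied gradually over the window instead of all at once.
    # Uniform weights and the model_step callback are assumptions.

    def iau_forecast(state, increment, model_step, n_steps, dt):
        """Integrate n_steps, adding increment/n_steps of the analysis each step."""
        for _ in range(n_steps):
            state = state + increment / n_steps   # gradual insertion of the increment
            state = model_step(state, dt)         # one model time step
        return state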
Abstract
An important step in an ensemble Kalman filter (EnKF) algorithm is the integration of an ensemble of short-range forecasts with a numerical weather prediction (NWP) model. A multiphysics approach is used in the Canadian global EnKF system. This paper explores whether the many integrations with different versions of the model physics can be used to obtain more accurate and more reliable probability distributions for the model parameters. Some model parameters have a continuous range of possible values. Other parameters are categorical and act as switches between different parameterizations. In an evolutionary algorithm, the member configurations that contribute most to the quality of the ensemble are duplicated, while adding a small perturbation, at the expense of configurations that perform poorly. The evolutionary algorithm is being used in the migration of the EnKF to a new version of the Canadian NWP model with upgraded physics. The quality of configurations is measured with both a deterministic and an ensemble score, using the observations assimilated in the EnKF system. When using the ensemble score in the evaluation, the algorithm is shown to be able to converge to non-Gaussian distributions. However, for several model parameters, there is not enough information to arrive at improved distributions. The optimized system features slight reductions in biases for radiance measurements that are sensitive to humidity. Modest improvements are also seen in medium-range ensemble forecasts.
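A minimal sketch of the resampling step described above, assuming each ensemble member carries a dictionary of physics options and a scalar fitness: well-scoring configurations are duplicated with a small perturbation (for continuous parameters) or an occasional switch (for categorical ones), at the expense of poorly scoring configurations. The fitness ordering, perturbation size, and mutation rate are illustrative assumptions.

    # Sketch of an evolutionary resampling of physics configurations.
    # Perturbation size and switch probability are assumptions.
    import random

    def evolve(configs, fitnesses, sigma=0.05, switch_prob=0.02, categorical=None):
        """Duplicate (with perturbation) the better half, replacing the worse half."""
        categorical = categorical or {}   # e.g. {"pbl_option": ["schemeA", "schemeB"]}
        order = sorted(range(len(configs)), key=lambda i: fitnesses[i], reverse=True)
        survivors = [configs[i] for i in order[: len(configs) // 2]]
        children = []
        for parent in survivors:
            child = dict(parent)
            for key, value in child.items():
                if key in categorical:
                    if random.random() < switch_prob:
                        child[key] = random.choice(categorical[key])  # switch scheme
                else:
                    child[key] = value * (1.0 + random.gauss(0.0, sigma))  # jitter value
            children.append(child)
        return survivors + children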
Abstract
Accurately representing model-based sources of uncertainty is essential for the development of reliable ensemble prediction systems for NWP applications. Uncertainties in discretizations, algorithmic approximations, and diabatic and unresolved processes combine to influence forecast skill in a flow-dependent way. An emerging approach designed to provide a process-level representation of these potential error sources, stochastically perturbed parameterizations (SPP), is introduced into the Canadian operational Global Ensemble Prediction System. This implementation extends the SPP technique beyond its typical application to free parameters in the physics suite by sampling uncertainty both within the dynamical core and at the formulation level using “error models” when multiple physical closures are available. Because SPP perturbs components within the model, internal consistency is ensured and conservation properties are not affected. The full SPP scheme is shown to increase ensemble spread to keep pace with error growth on a global scale. The sensitivity of the ensemble to each independently perturbed “element” is then assessed, with those responsible for the bulk of the response analyzed in more detail. Perturbations to surface exchange coefficients and the turbulent mixing length have a leading impact on near-surface statistics. Aloft, a tropically focused error model representing uncertainty in the advection scheme is found to initiate growing perturbations on the subtropical jet that lead to forecast improvements at higher latitudes. The results of Part I suggest that SPP has the potential to serve as a reliable representation of model uncertainty for ensemble NWP applications.
Significance Statement
Ensemble systems account for the negative impact that uncertainties in prediction models have on forecasts. Here, uncertain model parameters and algorithms are subjected to perturbations that represent their contribution to forecast error. By initiating error growth within the model calculations, the equally skillful members of the ensemble remain physically realistic and self-consistent, which is not guaranteed by other depictions of model error. This “stochastically perturbed parameterization” (SPP) technique comprises many small error sources, each analyzed in isolation. Each source is related to a limited set of processes, making it possible to determine how the individual perturbations affect the forecast. We conclude that SPP in the Canadian Global Ensemble Forecasting System produces realistic estimates of the impact of model uncertainties on forecast skill.
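One common way to perturb a free parameter stochastically is to drive it with a temporally correlated (AR(1)) random pattern and map that pattern smoothly into prescribed bounds, as sketched below. The decorrelation time, the bounds, and the tanh mapping are illustrative assumptions, not the operational SPP settings (which also include spatial correlation).

    # Sketch of a stochastically perturbed parameter: an AR(1) random pattern
    # modulates a free parameter between prescribed bounds.  Constants are
    # illustrative, not the operational SPP configuration.
    import math
    import random

    class PerturbedParameter:
        def __init__(self, lower, upper, tau=6 * 3600.0):
            self.lower, self.upper, self.tau = lower, upper, tau
            self.psi = 0.0                  # AR(1) state, approximately N(0, 1)

        def sample(self, dt):
            """Advance the AR(1) pattern and map it smoothly into [lower, upper]."""
            phi = math.exp(-dt / self.tau)
            self.psi = phi * self.psi + math.sqrt(1.0 - phi**2) * random.gauss(0.0, 1.0)
            weight = 0.5 * (1.0 + math.tanh(self.psi))   # squashed to (0, 1)
            return self.lower + weight * (self.upper - self.lower)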
Abstract
The Global Environmental Multiscale (GEM) model is the Canadian atmospheric model used for meteorological forecasting at all scales. A limited-area version now also exists. It is a gridpoint model with an implicit semi-Lagrangian iterative space–time integration scheme. In the “horizontal,” the equations are written in spherical coordinates with the traditional shallow-atmosphere approximations and are discretized on an Arakawa C grid. In the “vertical,” the equations were originally defined using a hydrostatic-pressure coordinate and discretized on a regular (unstaggered) grid, a configuration found to be particularly susceptible to noise. Among the possible alternatives, the Charney–Phillips grid, with its unique characteristics, is adopted, together with log-hydrostatic pressure as the vertical coordinate. In this paper, an attempt is made to justify these two choices on theoretical grounds. The resulting equations and their vertical discretization are described, and the solution method of what forms the new dynamical core of GEM is presented, focusing on these two aspects.
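A schematic form of a terrain-following log-hydrostatic-pressure coordinate is shown below; the exact definition used in GEM, including its treatment of the terrain-following term, is given in the paper, so this expression should be read as an illustration of the general idea rather than the model's precise formula.

    \ln\!\left(\frac{\pi}{p_{\mathrm{ref}}}\right) = \zeta + B(\zeta)\, s,
    \qquad
    s = \ln\!\left(\frac{\pi_s}{p_{\mathrm{ref}}}\right),

where π is the hydrostatic pressure, π_s its surface value, p_ref a reference pressure, ζ the vertical coordinate, and B a function that decreases from 1 at the surface to 0 at the model top, so that ζ reduces to the pure log-hydrostatic-pressure coordinate aloft.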