# Search Results

## Showing 51–60 of 83 items for Author or Editor: Chris Snyder

## Abstract

Ensemble Kalman filter (EnKF) techniques have been proposed for obtaining atmospheric state estimates on the scale of individual convective storms from radar and other observations, but tests of these methods with observations of real convective storms are still very limited. In the current study, radar observations of the 8 May 2003 Oklahoma City tornadic supercell thunderstorm were assimilated into the National Severe Storms Laboratory (NSSL) Collaborative Model for Multiscale Atmospheric Simulation (NCOMMAS) with an EnKF method. The cloud model employed 1-km horizontal grid spacing, a single-moment bulk precipitation-microphysics scheme, and a base state initialized with sounding data. A 50-member ensemble was produced by randomly perturbing base-state wind profiles and by regularly adding random local perturbations to the horizontal wind, temperature, and water vapor fields in and near observed precipitation.

In a reference experiment, only Doppler-velocity observations were assimilated into the NCOMMAS ensemble. Then, radar-reflectivity observations were assimilated together with Doppler-velocity observations in subsequent experiments. Influences that reflectivity observations have on storm-scale analyses were revealed through parameter-space experiments by varying observation availability, observation errors, ensemble spread, and choices for what model variables were updated when a reflectivity observation was assimilated. All experiments produced realistic storm-scale analyses that compared favorably with independent radar observations. Convective storms in the NCOMMAS ensemble developed more quickly when reflectivity observations and velocity observations were both assimilated rather than only velocity, presumably because the EnKF utilized covariances between reflectivity and unobserved model fields such as cloud water and vertical velocity in efficiently developing realistic storm features.

Recurring spatial patterns in the differences between predicted and observed reflectivity were noted particularly at low levels, downshear of the supercell’s updraft, in the anvil of moderate-to-light precipitation, where reflectivity in the model was typically lower than observed. Bias errors in the predicted rain mixing ratios and/or the size distributions that the bulk scheme associates with these mixing ratios are likely responsible for this reflectivity underprediction. When a reflectivity observation is assimilated, bias errors in the model fields associated with reflectivity (rain, snow, and hail–graupel) can be projected into other model variables through the ensemble covariances. In the current study, temperature analyses in the downshear anvil at low levels, where reflectivity was underpredicted, were very sensitive both to details of the assimilation algorithm and to ensemble spread in temperature. This strong sensitivity suggests low confidence in analyses of low-level cold pools obtained through reflectivity-data assimilation.
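The mechanism described above — an observed quantity updating unobserved model fields through ensemble cross-covariances — can be sketched in a toy scalar setting. This is a minimal perturbed-observation EnKF update, not the NCOMMAS/EnKF configuration of the study; all numbers (means, variances, the correlation) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens = 50  # ensemble size, as in the study

# Hypothetical prior ensemble: an observed variable (a reflectivity proxy, dBZ)
# and an unobserved one (vertical velocity, m/s), with an assumed correlation.
true_corr = 0.6
cov = np.array([[25.0, true_corr * 5.0 * 2.0],
                [true_corr * 5.0 * 2.0, 4.0]])
ens = rng.multivariate_normal([30.0, 5.0], cov, size=n_ens)  # (n_ens, 2)
refl, w = ens[:, 0], ens[:, 1]

obs, obs_var = 40.0, 4.0  # one reflectivity observation and its error variance

# Scalar perturbed-observation EnKF update: the gain for the unobserved
# variable is the sampled cross-covariance over the innovation variance.
prior_var = np.var(refl, ddof=1)
gain_refl = prior_var / (prior_var + obs_var)
cross_cov = np.cov(refl, w, ddof=1)[0, 1]
gain_w = cross_cov / (prior_var + obs_var)

perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)
innov = perturbed_obs - refl
refl_a = refl + gain_refl * innov
w_a = w + gain_w * innov  # unobserved field updated through the covariance

print(np.mean(refl), np.mean(refl_a), np.mean(w), np.mean(w_a))
```

Because the sampled cross-covariance is positive, an observation of higher-than-forecast reflectivity also increases the analyzed vertical velocity — the same pathway through which reflectivity bias errors can project into temperature.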
## Abstract

The characteristics of forecast-error covariances, which are of central interest in both data assimilation and ensemble forecasting, are poorly known. This paper considers the linear dynamics of these covariances and examines their evolution from (nearly) homogeneous and isotropic initial conditions in a turbulent quasigeostrophic flow qualitatively similar to that of the midlatitude troposphere. The experiments use ensembles of 100 solutions to estimate the error covariances. The error covariances evolve on a timescale of *O*(1 day), comparable to the advective timescale of the reference flow. This timescale also defines an initial period over which the errors develop characteristic features that are insensitive to the chosen initial statistics. These include 1) scales comparable to those of the reference flow, 2) potential vorticity (PV) concentrated where the gradient of the reference-flow PV is large, particularly at the surface and tropopause, and 3) little structure in the interior of the troposphere. In the error covariances, these characteristics are manifest as a strong spatial correlation between the PV variance and the magnitude of the reference-flow PV gradient and as a pronounced enhancement of the error correlations along reference-flow PV contours. The dynamical processes that result in such structure are also explored; the key is the advection of reference-flow PV by the error velocity, rather than the passive advection of the errors by the reference flow.

## Abstract

A perfect-model Monte Carlo experiment was conducted to explore the characteristics of analysis error in a quasigeostrophic model. An ensemble of cycled analyses was created, with each member of the ensemble receiving different observations and starting from different forecast states. Observations were created by adding random error (consistent with observational error statistics) to vertical profiles extracted from truth-run data. Assimilation of new observations was performed every 12 h using a three-dimensional variational analysis scheme. Three observation densities were examined: a low-density network (one observation ∼ every 20² grid points), a moderate-density network (one observation ∼ every 10² grid points), and a high-density network (one observation ∼ every 5² grid points). Error characteristics were diagnosed primarily from a subset of 16 analysis times taken every 10 days from a long time series, with the first sample taken after a 50-day spinup. The goal of this paper is to understand the spatial, temporal, and some dynamical characteristics of analysis errors.

Results suggest a nonlinear relationship between observational data density and analysis error; there was a much greater reduction in error from the low- to moderate-density networks than from moderate to high density. Errors in the analysis reflected both structured errors created by the chaotic dynamics and random observational errors. The correction of the background toward the observations reduced the error but also randomized the prior dynamical structure of the errors, though there was a dependence of error structure on observational data density. Generally, the more observations, the more homogeneous the errors were in time and space and the less the analysis errors projected onto the leading backward Lyapunov vectors. Analyses provided more information at higher wavenumbers as data density increased. Errors were largest in the upper troposphere and smallest in the mid- to lower troposphere. Relatively small ensembles were effective in capturing a large percentage of the analysis-error variance, though more members were needed to capture a specified fraction of the variance as observation density increased.
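The last point — a modest number of directions capturing most of the error variance — can be illustrated with a synthetic example. This is not the paper's quasigeostrophic setup; the state dimension, sample count, and red (decaying) spectrum are assumptions chosen so that a few directions dominate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical analysis-error samples in a 200-dimensional state space with a
# red spectrum, mimicking dynamically structured (non-white) error.
n_state, n_samples = 200, 400
scales = 1.0 / np.arange(1, n_state + 1)       # decaying component amplitudes
errors = rng.normal(size=(n_samples, n_state)) * scales

# Form perturbations about the sample mean, as one would with an ensemble.
pert = errors - errors.mean(axis=0)

# Fraction of total error variance captured by the leading k directions.
s = np.linalg.svd(pert, compute_uv=False)
var_frac = np.cumsum(s**2) / np.sum(s**2)
for k in (5, 15, 50):
    print(k, round(var_frac[k - 1], 3))
```

With this spectrum, a handful of leading directions already account for most of the variance; a flatter spectrum (as with denser observations, which whiten the errors) would require more members for the same fraction.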

## Abstract

The statistical properties of analysis and forecast errors from commonly used ensemble perturbation methodologies are explored. A quasigeostrophic channel model is used, coupled with a 3D-variational data assimilation scheme. A perfect model is assumed.

Three perturbation methodologies are considered. The breeding and singular-vector (SV) methods approximate the strategies currently used at operational centers in the United States and Europe, respectively. The perturbed observation (PO) methodology approximates a random sample from the analysis probability density function (pdf) and is similar to the method used at the Canadian Meteorological Centre. Initial conditions for the PO ensemble are analyses from independent, parallel data assimilation cycles. Each assimilation cycle utilizes observations perturbed by random noise whose statistics are consistent with observational error covariances. Each member’s assimilation/forecast cycle is also started from a distinct initial condition.
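The PO construction — parallel cycles, each assimilating its own noise-perturbed copy of the observations from a distinct initial condition — can be sketched for a scalar state. This is a deliberately stylized stand-in (a fixed linear model and a static 3D-Var-like weight), not the quasigeostrophic channel model or its assimilation scheme; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal perturbed-observation (PO) cycling sketch for a scalar state.
n_members, n_cycles = 20, 30
R, B = 1.0, 2.0                   # assumed obs- and background-error variances
a = 0.95                          # linear "model" (perfect-model assumption)
truth = 3.0
members = truth + rng.normal(0.0, np.sqrt(B), n_members)  # distinct ICs

for _ in range(n_cycles):
    truth *= a                                    # truth evolves
    obs = truth + rng.normal(0.0, np.sqrt(R))     # one noisy observation
    members *= a                                  # forecast step
    w = B / (B + R)                               # static 3D-Var-like weight
    # Each member assimilates its own perturbed copy of the observation.
    perturbed = obs + rng.normal(0.0, np.sqrt(R), n_members)
    members += w * (perturbed - members)          # member-wise analysis

print(round(truth, 3), round(np.mean(members), 3),
      round(np.std(members, ddof=1), 3))
```

The observation perturbations keep the cycled ensemble from collapsing: the spread settles at a nonzero level set by the weight and the noise statistics, which is what lets the PO ensemble behave like a sample from the analysis pdf.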

Relative to breeding and SV, the PO method here produced analyses and forecasts with desirable statistical characteristics. These include consistent rank histogram uniformity for all variables at all lead times, high spread/skill correlations, and calibrated, reduced-error probabilistic forecasts. It achieved these improvements primarily because 1) the ensemble mean of the PO initial conditions was more accurate than the mean of the bred or singular-vector ensembles, which were centered on a less skillful control initial condition (much of the improvement was lost when PO initial conditions were recentered on the control analysis); and 2) by construction, the perturbed observation ensemble initial conditions permitted realistic variations in spread from day to day, while bred and singular-vector perturbations did not. These results suggest that in the absence of model error, an ensemble of initial conditions performs better when the initialization method is designed to produce random samples from the analysis pdf. The perturbed observation method did this much more satisfactorily than either the breeding or singular-vector methods.

The ability of the perturbed observation ensemble to sample randomly from the analysis pdf also suggests that such an ensemble can provide useful information on forecast covariances and hence improve future data assimilation techniques.

## Abstract

The usefulness of a distance-dependent reduction of background error covariance estimates in an ensemble Kalman filter is demonstrated. Covariances are reduced by performing an elementwise multiplication of the background error covariance matrix with a correlation function with local support. This reduces noisiness and results in an improved background error covariance estimate, which generates a reduced-error ensemble of model initial conditions.
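The elementwise (Schur) multiplication described above can be sketched directly. The sketch below uses the compactly supported fifth-order Gaspari–Cohn correlation function, a common choice for this purpose, though the abstract does not specify which function the study used; the grid size, ensemble size, localization length, and the Gaussian "true" covariance are all assumptions.

```python
import numpy as np

def gaspari_cohn(r):
    """Compactly supported 5th-order correlation function.
    r = separation / localization length; zero for r >= 2."""
    r = np.abs(r)
    f = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    x = r[m1]
    f[m1] = -0.25*x**5 + 0.5*x**4 + 0.625*x**3 - (5/3)*x**2 + 1.0
    x = r[m2]
    f[m2] = (1/12)*x**5 - 0.5*x**4 + 0.625*x**3 + (5/3)*x**2 - 5.0*x + 4.0 - (2/3)/x
    return f

rng = np.random.default_rng(3)
n, n_ens, L = 40, 10, 5.0   # grid points, small ensemble, localization length

# "True" background covariance: correlation decaying with grid distance.
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
P_true = np.exp(-0.5 * (dist / 4.0) ** 2)

# Noisy sample covariance from a small ensemble drawn from P_true.
ens = rng.multivariate_normal(np.zeros(n), P_true, size=n_ens)
P_samp = np.cov(ens, rowvar=False, ddof=1)

# Schur (elementwise) product with the localization function suppresses
# spurious long-range covariances while keeping nearby ones.
P_loc = P_samp * gaspari_cohn(dist / L)

err_raw = np.linalg.norm(P_samp - P_true)
err_loc = np.linalg.norm(P_loc - P_true)
print(err_raw, err_loc)
```

The localized estimate is closer (in Frobenius norm) to the true covariance because at long range the true covariance is nearly zero, so tapering removes mostly sampling noise.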

The benefits of applying the correlation function can be understood in part from examining the characteristics of simple 2 × 2 covariance matrices generated from random sample vectors with known variances and covariance. These show that noisiness in covariance estimates tends to overwhelm the signal when the ensemble size is small and/or the true covariance between the sample elements is small. Since the true covariance of forecast errors is generally related to the distance between grid points, covariance estimates generally have a higher ratio of noise to signal with increasing distance between grid points. This property is also demonstrated using a two-layer hemispheric primitive equation model and comparing covariance estimates generated by small and large ensembles. Covariances from the large ensemble are assumed to be accurate and are used as a reference for measuring errors from covariances estimated from a small ensemble.
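The 2 × 2 demonstration can be reproduced with a short Monte Carlo experiment; the particular correlations, ensemble sizes, and trial count below are illustrative choices, not the study's.

```python
import numpy as np

rng = np.random.default_rng(4)

def cov_noise_to_signal(true_corr, n_ens, n_trials=2000):
    """RMS error of the sampled covariance between two unit-variance
    variables, relative to the true covariance (= true_corr here)."""
    cov = np.array([[1.0, true_corr], [true_corr, 1.0]])
    samples = rng.multivariate_normal([0.0, 0.0], cov, size=(n_trials, n_ens))
    est = np.array([np.cov(s, rowvar=False, ddof=1)[0, 1] for s in samples])
    rmse = np.sqrt(np.mean((est - true_corr) ** 2))
    return rmse / abs(true_corr)

for corr in (0.9, 0.3):
    for n in (10, 100):
        print(corr, n, round(cov_noise_to_signal(corr, n), 2))
```

The noise-to-signal ratio grows both as the ensemble shrinks and as the true covariance weakens — the situation at large grid-point separations, which is exactly where the localization function damps the estimate.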

The benefits of including distance-dependent reduction of covariance estimates are demonstrated with an ensemble Kalman filter data assimilation scheme. The optimal correlation length scale of the filter function depends on ensemble size; larger correlation lengths are preferable for larger ensembles.

The effects of inflating background error covariance estimates are examined as a way of stabilizing the filter. It was found that more inflation was necessary for smaller ensembles than for larger ensembles.
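Multiplicative inflation of the background covariance, as examined above, amounts to scaling the ensemble perturbations about their mean before assimilation; the factor 1.1 and the ensemble dimensions below are illustrative.

```python
import numpy as np

# Multiplicative covariance inflation: scale perturbations about the ensemble
# mean by a factor > 1, compensating for spread lost to sampling error and
# filter cycling. The needed factor is case dependent (larger for smaller
# ensembles, per the abstract).
def inflate(ens, factor):
    mean = ens.mean(axis=0)
    return mean + factor * (ens - mean)

rng = np.random.default_rng(5)
ens = rng.normal(0.0, 1.0, size=(25, 8))     # 25 members, 8 state variables
inflated = inflate(ens, 1.1)

print(np.var(ens, axis=0).mean(), np.var(inflated, axis=0).mean())
```

The ensemble mean is unchanged while every variance grows by exactly the factor squared (1.21 here), which is what inflates the implied background error covariance.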

## Abstract

Safety compliance issues for operational studies of the atmosphere with balloons require quantifying risks associated with descent and developing strategies to reduce the uncertainty in the touchdown location. Trajectory forecasts are typically computed from weather forecasts produced by an operational center, for example, the European Centre for Medium-Range Weather Forecasts (ECMWF). This study uses past experiments to investigate strategies for improving these forecasts. Trajectories for open stratospheric balloon (OSB) short-term flights are computed using mesoscale simulations with the Weather Research and Forecasting (WRF) Model initialized with ECMWF operational forecasts, with radiosoundings assimilated using the Data Assimilation Research Testbed (DART) ensemble Kalman filter, for three case studies during the Strapolété 2009 campaign in Sweden. The results vary considerably: in one case, the error in the final simulated position is reduced by 90% relative to the forecast using the ECMWF winds, while in another case the forecast is hardly improved. Nonetheless, they reveal the main source of forecasting error: during the ceiling phase, errors due to unresolved inertia–gravity waves accumulate as the balloon continuously experiences one phase of a wave for a few hours, whereas they essentially average out during the ascent and descent phases, when the balloon rapidly samples through whole wave packets. This sensitivity to wind during the ceiling phase raises issues regarding the feasibility of such forecasts and the observations that would be needed. The ensemble spread is also analyzed, and it is noted that the initial ensemble perturbations should probably be improved in the future for better forecasts.

## Abstract

Suppose that one has the freedom to adapt the observational network by choosing the times and locations of observations. Which choices would yield the best analysis of the atmospheric state or the best subsequent forecast? Here, this problem of “adaptive observations” is formulated as a problem in statistical design. The statistical framework provides a rigorous mathematical statement of the adaptive observations problem and indicates where the uncertainty of the current analysis, the dynamics of error evolution, the form and errors of observations, and data assimilation each enter the calculation. The statistical formulation of the problem also makes clear the importance of the optimality criteria (for instance, one might choose to minimize the total error variance in a given forecast) and identifies approximations that make calculation of optimal solutions feasible in principle. Optimal solutions are discussed and interpreted for a variety of cases. Selected approaches to the adaptive observations problem found in the literature are reviewed and interpreted from the optimal statistical design viewpoint. In addition, a numerical example, using the 40-variable model of Lorenz and Emanuel, suggests that some other proposed approaches may often be close to the optimal solution, at least in this highly idealized model.
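The statistical-design formulation can be made concrete in a linear-Gaussian toy problem: given a prior (analysis) covariance, choose the single observation site that minimizes the total posterior error variance. The state dimension, random prior covariance, and unit observation error below are assumptions for illustration; this is not the Lorenz–Emanuel experiment of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy design problem: pick the grid point whose (direct, unit-error-variance)
# observation minimizes the trace of the posterior covariance.
n = 8
A = rng.normal(size=(n, n))
P = A @ A.T / n + 0.1 * np.eye(n)    # hypothetical prior covariance
R = 1.0                              # observation-error variance

def posterior_trace(P, site, R):
    H = np.zeros((1, P.shape[0]))
    H[0, site] = 1.0                 # observe one state component directly
    K = P @ H.T / (H @ P @ H.T + R)  # Kalman gain (scalar innovation variance)
    return np.trace((np.eye(P.shape[0]) - K @ H) @ P)

traces = [posterior_trace(P, k, R) for k in range(n)]
best = int(np.argmin(traces))
print(best, round(traces[best], 3), round(np.trace(P), 3))
```

Every candidate observation reduces the total variance, but by differing amounts; the optimal site is the one whose row of the prior covariance carries the most variance relative to the innovation variance — exactly where analysis uncertainty, observation error, and the assimilation scheme enter the calculation.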

## Abstract

Cyclonic vortices on the tropopause are characterized by compact structure and larger pressure, wind, and temperature perturbations when compared to broader and weaker anticyclones. Neither the origin of these vortices nor the reasons for the preferred asymmetries are completely understood; quasigeostrophic dynamics, in particular, have cyclone–anticyclone symmetry.

In order to explore these and related problems, a novel small Rossby number approximation is introduced to the primitive equations applied to a simple model of the tropopause in continuously stratified fluid. This model resolves dynamics that give rise to vortical asymmetries, while retaining both the conceptual simplicity of quasigeostrophic dynamics and the computational economy of two-dimensional flows. The model contains no depth-independent (barotropic) flow, and thus may provide a useful comparison to two-dimensional flows dominated by this flow component.

Solutions for random initial conditions (i.e., freely decaying turbulence) exhibit vortical asymmetries typical of tropopause observations, with strong localized cyclones, and weaker diffuse anticyclones. Cyclones cluster around a distinct length scale at a given time, whereas anticyclones do not. These results differ significantly from previous studies of cyclone–anticyclone asymmetry in the shallow-water primitive equations and the periodic balance equations. An important source of asymmetry in the present solutions is divergent flow associated with frontogenesis and the forward cascade of tropopause potential temperature variance. This thermally direct flow changes the mean potential temperature of the tropopause, selectively maintains anticyclonic filaments relative to cyclonic filaments, and appears to promote the merger of anticyclones relative to cyclones.

## Abstract

A recent study examined the predictability of an idealized baroclinic wave amplifying in a conditionally unstable atmosphere through numerical simulations with parameterized moist convection. It was demonstrated that with the effect of moisture included, the error starting from small random noise is characterized by upscale growth in the short-term (0–36 h) forecast of a growing synoptic-scale disturbance. The current study seeks to explore further the mesoscale error-growth dynamics in idealized moist baroclinic waves through convection-permitting experiments with model grid increments down to 3.3 km. These experiments suggest the following three-stage error-growth model: in the initial stage, the errors grow from small-scale convective instability and then quickly [*O*(1 h)] saturate at the convective scales. In the second stage, the character of the errors changes from that of convective-scale unbalanced motions to one more closely related to large-scale balanced motions. That is, some of the error from convective scales is retained in the balanced motions, while the rest is radiated away in the form of gravity waves. In the final stage, the large-scale (balanced) components of the errors grow with the background baroclinic instability. Through examination of the error-energy budget, it is found that buoyancy production due mostly to moist convection is comparable to shear production (nonlinear velocity advection). Turning off latent heating not only dramatically decreases buoyancy production but also reduces shear production to less than 20% of its original amplitude.

## Abstract

Vortex dipoles provide a simple representation of localized atmospheric jets. Numerical simulations of a synoptic-scale dipole in surface potential temperature are considered in a rotating, stratified fluid with approximately uniform potential vorticity. Following an initial period of adjustment, the dipole propagates along a slightly curved trajectory at a nearly steady rate and with a nearly fixed structure for more than 50 days. Downstream from the jet maximum, the flow also contains smaller-scale, upward-propagating inertia–gravity waves that are embedded within and stationary relative to the dipole. The waves form elongated bows along the leading edge of the dipole. Consistent with propagation in horizontal deformation and vertical shear, the waves’ horizontal scale shrinks and the vertical slope varies as they approach the leading stagnation point in the dipole’s flow. Because the waves persist for tens of days despite explicit dissipation in the numerical model that would otherwise damp the waves on a time scale of a few hours, they must be inherent features of the dipole itself, rather than remnants of imbalances in the initial conditions. The wave amplitude varies with the strength of the dipole, with waves becoming obvious once the maximum vertical vorticity in the dipole is roughly half the Coriolis parameter. Possible mechanisms for the wave generation are spontaneous wave emission and the instability of the underlying balanced dipole.
