Search Results

You are looking at 1 - 4 of 4 items for

  • Author or Editor: Jonathan A. Weyn
  • All content
Jonathan A. Weyn and Dale R. Durran

Abstract

Recent work has suggested that modest initial relative errors on scales of O(100) km in a numerical weather forecast may exert more control on the predictability of mesoscale convective systems at lead times beyond about 5 h than 100% relative errors at smaller scales. Using an idealized model, the predictability of deep convection organized by several different profiles of environmental vertical wind shear is investigated as a function of the horizontal scale and amplitude of initial errors in the low-level moisture field. Small- and large-scale initial errors are found to have virtually identical impacts on predictability at lead times of 4–5 h for all wind shear profiles. Both small- and large-scale errors grow primarily "up-amplitude" at all scales rather than through an upscale cascade between adjacent scales. Reducing the amplitude of the initial errors improves predictability lead times, but this improvement diminishes with further reductions in the error amplitude, suggesting a limit to the intrinsic predictability in these simulations of slightly more than 6 h at scales less than 20 km. Additionally, all the simulated convective systems produce a k^(−5/3) spectrum of kinetic energy, providing evidence of the importance of the unbalanced, divergent gravity wave component of the flow produced by thunderstorms in generating the observed atmospheric kinetic energy spectrum.

Dale R. Durran and Jonathan A. Weyn

Abstract

One important limitation on the accuracy of weather forecasts is imposed by unavoidable errors in the specification of the atmosphere’s initial state. Much theoretical concern has been focused on the limits to predictability imposed by small-scale errors, potentially even those on the scale of a butterfly. Very modest errors at much larger scales may nevertheless pose a more important practical limitation. We demonstrate the importance of large-scale uncertainty by analyzing ensembles of idealized squall-line simulations. Our results imply that minimizing initial errors on scales around 100 km is more likely to extend the accuracy of forecasts at lead times longer than 3–4 h than efforts to minimize initial errors on much smaller scales. These simulations also demonstrate that squall lines, triggered in a horizontally homogeneous environment with no initial background circulations, can generate a background mesoscale kinetic energy spectrum roughly similar to that observed in the atmosphere.

Jonathan A. Weyn and Dale R. Durran

Abstract

Idealized ensemble simulations of mesoscale convective systems (MCSs) with horizontal grid spacings of 1, 1.4, and 2 km are used to analyze the influence of numerical resolution on the rate of growth of ensemble spread in convection-resolving numerical models. The ensembles are initialized with random phases of 91-km-wavelength moisture perturbations that are captured with essentially identical accuracy at all resolutions. The rate of growth of ensemble variance is shown to increase systematically at higher resolution. The largest horizontal wavelength at which the perturbation kinetic energy (KE′) grows to at least 50% of the background kinetic energy spectrum is also shown to grow more rapidly at higher resolution. The mechanism by which the presence of smaller scales accelerates the upscale growth of KE′ is clear-cut in the smooth-saturation Lorenz–Rotunno–Snyder (ssLRS) model of homogeneous surface quasigeostrophic turbulence. Comparing the growth of KE′ in the MCS ensemble simulations to that in the ssLRS model suggests that interactions between perturbations at small scales, where KE′ is not yet completely saturated, and somewhat larger scales, where KE′ is clearly unsaturated, are responsible for the faster growth rate of ensemble variance at finer resolution. These results provide some empirical justification for the use of deep-convection-related stochastic parameterization schemes to reduce the problem of underdispersion in coarser-resolution ensemble prediction systems.
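To make the 50% saturation diagnostic above concrete, here is a minimal Python sketch. It assumes 1D wind fields and uses the difference between two ensemble members as the definition of KE′; the function names and both conventions are illustrative assumptions, not the ensemble-variance calculation actually used in the paper.

    import numpy as np

    def ke_spectrum(u, v):
        # 1D KE spectral density from horizontal wind components, using
        # numpy's unnormalized forward FFT (hence the division by n**2).
        n = u.size
        return 0.5 * (np.abs(np.fft.rfft(u))**2 + np.abs(np.fft.rfft(v))**2) / n**2

    def largest_saturated_wavelength(u1, v1, u2, v2, dx, threshold=0.5):
        # Largest horizontal wavelength at which KE' reaches `threshold` times
        # the background KE spectrum. KE' is taken here from the difference
        # between two ensemble members (one common convention; the paper's
        # ensemble-variance definition may differ).
        n = u1.size
        ke_bg = 0.5 * (ke_spectrum(u1, v1) + ke_spectrum(u2, v2))
        ke_pert = ke_spectrum(u1 - u2, v1 - v2)
        m = np.arange(1, ke_bg.size)            # skip the domain mean (m = 0)
        wavelengths = n * dx / m                # wavelength of mode m on an n-point grid
        saturated = ke_pert[1:] >= threshold * ke_bg[1:]
        return wavelengths[saturated].max() if saturated.any() else 0.0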

Dale R. Durran, Jonathan A. Weyn, and Maximo Q. Menchaca

Abstract

Spectra are often computed from gridded data to determine the horizontal-scale dependence of quantities such as kinetic energy, vertical velocity, or perturbation potential temperature. This paper discusses several important considerations for the practical computation of such spectra. To ensure that the sum of the spectral energy densities in wavenumber space matches the sum of the energies in the physical domain (the discrete Parseval relation), the constant coefficient multiplying the spectral energy density must properly account for the way the discrete Fourier transform pair is normalized. The normalization factor appropriate for many older FORTRAN-based fast Fourier transforms (FFTs) differs from that in MATLAB and Python's numpy.fft; as a consequence, the correct scaling factor for the kinetic energy (KE) spectral density differs between one-dimensional FFTs computed using these two approaches by a factor equal to the square of the number of physical grid points. A common algorithm used to compute two-dimensional spectra as a function of the total-wavenumber magnitude sums the contributions from all pairs of x- and y-component wavenumbers whose vector magnitude falls within each of a series of bins. This approach introduces systematic short-wavelength noise, which can be largely eliminated through a simple multiplicative correction. One- and two-dimensional spectra will differ by a constant if computed for flows in which the KE spectral density decreases as a function of the wavenumber to some negative power. This constant is evaluated, and the extension of theoretical results to numerically computed FFTs is examined.
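As a minimal illustration of the normalization point: numpy.fft leaves the forward transform unnormalized, so the discrete Parseval relation requires dividing the summed squared coefficients by the number of grid points N, whereas a FORTRAN-style FFT that builds the 1/N into its forward transform returns coefficients smaller by N, and squared magnitudes smaller by N^2. A self-contained sketch:

    import numpy as np

    N = 512
    u = np.random.default_rng(0).standard_normal(N)

    u_hat = np.fft.fft(u)                        # numpy: unnormalized forward FFT
    energy_phys = np.sum(u**2)                   # energy summed in physical space
    energy_spec = np.sum(np.abs(u_hat)**2) / N   # discrete Parseval for this convention
    assert np.isclose(energy_phys, energy_spec)

    # A FORTRAN-style FFT that divides the forward transform by N would return
    # u_hat / N, so squared coefficient magnitudes (and any KE spectral density
    # built from them) differ between the two conventions by a factor of N**2.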

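The two-dimensional total-wavenumber binning can be sketched as follows. This is an illustrative implementation, not the paper's: the nearest-integer annulus binning, the normalization, and the annulus-count rescaling used to damp the short-wavelength noise are stand-ins for the specific multiplicative correction derived in the paper, and a square domain with equal grid spacing in both directions is assumed.

    import numpy as np

    def spectrum_2d(field, dx):
        # Bin |FFT|^2 by total-wavenumber magnitude on an n x n grid.
        n = field.shape[0]
        f_hat = np.fft.fft2(field) / n**2              # normalized Fourier coefficients
        k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)     # component wavenumbers
        kmag = np.hypot(*np.meshgrid(k1, k1))

        dk = 2.0 * np.pi / (n * dx)                    # bin width = lowest wavenumber
        nbins = n // 2
        idx = np.rint(kmag.ravel() / dk).astype(int)   # nearest-integer annulus index
        power = np.abs(f_hat.ravel())**2
        valid = (idx >= 1) & (idx <= nbins)            # drop the mean and the corners
        spec = np.bincount(idx[valid], weights=power[valid], minlength=nbins + 1)[1:]
        count = np.bincount(idx[valid], minlength=nbins + 1)[1:]

        # The discrete mode count in each annulus fluctuates about its continuum
        # value 2*pi*k/dk, which produces the systematic short-wavelength noise;
        # rescaling each bin by the continuum/discrete count ratio damps it.
        k = dk * np.arange(1, nbins + 1)
        expected = 2.0 * np.pi * k / dk
        spec = np.where(count > 0, spec * expected / np.maximum(count, 1), 0.0)
        return k, spec / dk                            # spectral energy density per unit k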