Search Results

Showing 1–10 of 83 items for Author or Editor: Timothy DelSole
Timothy DelSole

Abstract

A stochastic model for shear-flow turbulence is constructed under the constraint that the parameterized nonlinear eddy–eddy interactions conserve energy but dissipate potential enstrophy. This parameterization is appropriate for truncated models of quasigeostrophic turbulence that cascade potential enstrophy to subgrid scales. The parameterization is not closed but constitutes a rigorous starting point for more thorough parameterizations. A major simplification arises from the fact that independently forced spatial structures produce covariances that can be superposed linearly. The constrained stochastic model cannot sustain turbulence when dissipation is strong or when the mean shear is weak because the prescribed forcing structures extract potential enstrophy from the mean flow at a rate too slow to sustain a transfer to subgrid scales. The constraint therefore defines a transition shear separating states in which turbulence is possible from those in which it is impossible. The transition shear, which depends on forcing structure, achieves an absolute minimum value when the forcing structures are optimal, in the sense of maximizing enstrophy production minus dissipation by large-scale eddies.

The results are illustrated with a quasigeostrophic model with eddy dissipation parameterized by spatially uniform potential vorticity damping. The transition shear associated with spatially localized random forcing and with reasonable eddy dissipation is close to the correct turbulence transition point determined by numerical simulation of the fully nonlinear system. In contrast, the transition shear corresponding to the optimal forcing functions is unrealistically small, suggesting that at weak shears these structures are weakly excited by nonlinear interactions. Nevertheless, the true forcing structures must project on the optimal forcing structures to sustain a turbulent cascade. Because of this property and their small number, the leading optimal forcing functions may be an attractive basis set for reducing the dimensionality of the parameterization problem.
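
The superposition property invoked above is generic to linearly forced stochastic models and can be checked directly: for dx/dt = Ax + Σᵢ fᵢηᵢ(t) with independent white noises ηᵢ, the stationary covariance solves a Lyapunov equation that is linear in the forcing covariance. A minimal sketch, with an arbitrary stable operator standing in for the truncated quasigeostrophic dynamics:

```python
# Minimal sketch (assumed stand-in operator, not the truncated QG model):
# for dx/dt = A x + sum_i f_i eta_i(t) with independent white noises eta_i,
# the stationary covariance C solves A C + C A^T + sum_i f_i f_i^T = 0,
# so covariances produced by individually forced structures superpose linearly.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 8
A = 0.5 * rng.standard_normal((n, n)) - 2.0 * np.eye(n)   # arbitrary stable operator
forcings = [rng.standard_normal(n) for _ in range(3)]      # independent forcing structures

def stationary_cov(A, Q):
    """Solve A C + C A^T = -Q for the stationary covariance C."""
    return solve_continuous_lyapunov(A, -Q)

C_each = [stationary_cov(A, np.outer(f, f)) for f in forcings]
C_all = stationary_cov(A, sum(np.outer(f, f) for f in forcings))
print(np.allclose(sum(C_each), C_all))   # True: covariances superpose linearly
```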

Timothy DelSole

Abstract

An optimal perturbation is an initial condition that optimizes some measure of amplitude growth over a prescribed time in a linear system. Previous studies have argued that optimal perturbations play an important role in turbulence. Two basic questions related to this theory are whether optimal perturbations necessarily grow in all turbulent background flows and whether the turbulent flow necessarily excites optimal perturbations at the rate required to account for the observed eddy variance. This paper shows that both questions can be answered in the affirmative for statistically steady turbulence. More precisely, it is shown that eddies in statistically stationary turbulence must project onto a class of amplifying perturbations called instantaneous optimals, which are defined as initial conditions that optimize the rate of change of energy associated with the dynamical system linearized about the time-mean flow. An analogous conclusion holds for potential enstrophy when the latter satisfies a similar conservation principle. It is shown that the growing instantaneous optimals imply the existence of growing finite-time singular vectors. Moreover, the average projection on the growing instantaneous optimals must be sufficient to balance the average projection on all other eddies. In contrast to most other types of optimal perturbations, the phase space spanned by the growing instantaneous optimals is independent of the norm used to measure the initial amplitude. This paper also proves that growing instantaneous optimals must exist and play a significant role in nonlinear vacillation phenomena. The argument put forward here follows essentially from statistical equilibrium and conservation of energy, and is independent of any closure theory of turbulence.
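
For energy E = ½xᵀx and dynamics dx/dt = Ax linearized about the time-mean flow, the instantaneous energy tendency is xᵀ[(A + Aᵀ)/2]x, so the instantaneous optimals defined above are eigenvectors of the symmetric part of the linear operator. A minimal sketch with a hypothetical operator (not any particular geophysical model):

```python
# Minimal sketch (hypothetical linear operator A; energy E = 0.5 * x^T x):
# for dx/dt = A x, dE/dt = x^T [(A + A^T)/2] x, so instantaneous optimals are
# eigenvectors of the symmetric part of A, and positive eigenvalues mark
# directions of instantaneous energy growth.
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))              # stand-in for the linearized dynamics

S = 0.5 * (A + A.T)                          # symmetric part governs dE/dt
rates, optimals = np.linalg.eigh(S)          # columns of `optimals` are instantaneous optimals
growing = optimals[:, rates > 0]             # growing instantaneous optimals
print("energy tendencies for unit-norm initial states:", rates)
print("dimension of the growing subspace:", growing.shape[1])
```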

Timothy DelSole

Abstract

This paper tests the hypothesis that optimal perturbations in quasigeostrophic turbulence are excited sufficiently strongly and frequently to account for the energy-containing eddies. Optimal perturbations are defined here as singular vectors of the propagator, for the energy norm, corresponding to the equations of motion linearized about the time-mean flow. The initial conditions are drawn from a numerical solution of the nonlinear equations associated with the linear propagator. Experiments confirm that energy is concentrated in the leading evolved singular vectors, and that the average energy in the initial singular vectors is within an order of magnitude of that required to explain the average energy in the evolved singular vectors. Furthermore, only a small number of evolved singular vectors (4 out of 4000) are needed to explain the dominant eddy structure when total energy exceeds a predefined threshold. The initial singular vectors explain only 10% of such events, but a similar discrepancy is found for the full propagator, suggesting that it arises primarily from errors in the propagator. In the limit of short lead times, energy conservation can be expressed in terms of suitable singular vectors to constrain the energy distribution of the singular vectors in statistically steady equilibrium. This and other connections between linear optimals and nonlinear dynamics suggest that the positive results found here should carry over to other systems, provided the propagator and initial states are chosen consistently with respect to the nonlinear system.
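
A hedged sketch of the singular-vector calculation described above, with a generic matrix-exponential propagator and an identity energy metric standing in for the paper's linearized quasigeostrophic propagator and energy norm:

```python
# Sketch (generic propagator; identity energy metric for simplicity): optimal
# perturbations over a lead time are singular vectors of the propagator G in
# the energy norm ||x||_E^2 = x^T E x, obtained from the SVD of E^{1/2} G E^{-1/2}.
import numpy as np
from scipy.linalg import expm, sqrtm

rng = np.random.default_rng(2)
n, lead = 6, 1.0
A = 0.5 * rng.standard_normal((n, n)) - np.eye(n)   # stand-in linearized operator
G = expm(lead * A)                                   # propagator over the lead time
E = np.eye(n)                                        # energy metric (identity here)

E_half = np.real(sqrtm(E))
U, s, Vt = np.linalg.svd(E_half @ G @ np.linalg.inv(E_half))
initial_svs = np.linalg.solve(E_half, Vt.T)          # initial singular vectors
evolved_svs = np.linalg.solve(E_half, U)             # evolved singular vectors
print("energy amplification factors:", s**2)
```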

Timothy DelSole

Abstract

A simple model for transient eddy momentum fluxes is constructed based on the hypothesis that planetary waves radiating from low-level baroclinic eddies act effectively as a random forcing of the upper troposphere. The response of the upper troposphere is modeled by the barotropic, nondivergent vorticity equation, linearized about a zonally symmetric flow. Rayleigh friction and scale-selective damping are included to crudely represent the effect of vertical diffusion and nonlinear equilibration mechanisms; the associated damping rates constitute the only tunable parameters of the model. A distinguishing feature of the model is that wave-activity theory is used to specify the statistics of the random forcing as a function of the background flow and low-level eddy heat fluxes. The resulting model simulates the annual cycle of momentum fluxes reasonably well, but tends to underestimate the eddy kinetic energy in midlatitudes. The results are interpreted in light of an analytic solution to the linear response of a barotropic model to random forcing.

Timothy DelSole

Abstract

Recent studies reveal that randomly forced linear models can produce realistic statistics for inhomogeneous turbulence. The random forcing and linear dissipation in these models parameterize the effect of nonlinear interactions. In the absence of a theory suggesting otherwise, many studies assume that the random forcing is homogeneous. In this paper, the assumption of homogeneity is shown to fail in systems with sufficiently localized jets. An alternative theory is proposed whereby the rate of variance production by the random forcing and dissipation is assumed to be proportional to the variance of the response at every point in space. In this way, the stochastic forcing produces a response that drives itself. Different theories can be formulated according to different metrics for measuring “variance.” This paper gives a methodology for obtaining the solution to such theories and the conditions that guarantee that the solution is unique. An explicit hypothesis for large-scale, rotating flows is put forward based on local potential enstrophy as a measure of eddy variance. This theory, together with conservation of energy, determines all the parameters of the stochastic model, except one, namely, the multiplicative constant specifying the overall magnitude of the eddies. Comparisons of this and more general theories with both nonlinear simulations and assimilated datasets are found to be encouraging.
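
A schematic of the fixed-point character of such a theory (an illustrative closure, not the paper's: the forcing covariance is taken diagonal, with local variance production proportional to the local response variance, and the undetermined multiplicative constant is normalized away):

```python
# Schematic fixed-point iteration (illustrative closure, not the paper's exact
# theory): the forcing covariance Q is diagonal, with local variance production
# proportional to the local variance of the response, so the stationary
# covariance obeys A C + C A^T + Q(C) = 0 with Q(C) proportional to
# diag(diag(C)); the overall amplitude is undetermined and is normalized away.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n = 8
A = 0.5 * rng.standard_normal((n, n)) - 2.0 * np.eye(n)   # stable stand-in operator

C = np.eye(n)                                  # arbitrary initial guess
for _ in range(200):
    Q = np.diag(np.diag(C))                    # forcing follows the local response variance
    C_new = solve_continuous_lyapunov(A, -Q)   # stationary covariance for this forcing
    C_new /= np.trace(C_new)                   # fix the undetermined overall amplitude
    if np.allclose(C_new, C, atol=1e-12):
        break
    C = C_new
print("self-consistent (normalized) local variances:", np.round(np.diag(C), 4))
```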

Timothy DelSole

Abstract

A technique is described for determining the set of patterns in a time-varying field whose corresponding time series remain correlated for the longest times. The basic idea is to obtain patterns that, when projected on a time-varying field, produce time series that optimize a measure of decorrelation time. The decorrelation time is measured by one of the integrals
T_1 = \int_{-\infty}^{\infty} \rho_\tau \, d\tau \qquad \text{or} \qquad T_2 = \int_{-\infty}^{\infty} \rho_\tau^2 \, d\tau,
where τ is the time lag and ρ_τ is the correlation function of the time series. These integrals arise naturally in sampling theory and power spectral analysis. Moreover, these integrals define the maximum lead time beyond which linear prediction models lose all forecast skill. Thus, an optimally persistent pattern is interesting because it optimizes a quantity that is of fundamental and practical importance. An orthogonal set of time series that optimize these integrals can be obtained from the lagged covariance matrix of the dataset. The corresponding patterns, called optimal persistence patterns (OPPs), may provide a useful basis set for statistical prediction models, because they may remain correlated for much longer periods than individual empirical orthogonal functions (EOFs). The main shortcoming of OPPs is that they are sensitive to sampling errors. To reduce the sensitivity, the upper limit of integration and the basis set used to define the pattern need to be as small as possible, yet large enough to resolve the space–time structure of the pattern.
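
For the first measure of decorrelation time, maximizing the summed lagged autocorrelations of the projected time series reduces to a generalized eigenvalue problem between the lag-integrated covariance matrix and the lag-zero covariance matrix. A minimal sketch of that calculation, under the assumed form of the integrals above (the paper works in a truncated EOF space, and the second measure is not a simple eigenproblem):

```python
# Minimal sketch (assumed form of the optimization for the first measure): the
# projection vector q maximizes (q^T M q) / (q^T C0 q), where C0 is the lag-zero
# covariance and M is the symmetrized sum of lagged covariances, i.e., a
# generalized eigenvalue problem. Illustrative only.
import numpy as np
from scipy.linalg import eigh

def optimal_persistence_patterns(X, max_lag=30):
    """X: (time, space) anomalies. Returns decorrelation-time estimates and
    projection vectors, ordered from most to least persistent."""
    X = X - X.mean(axis=0)
    nt = X.shape[0]
    C0 = X.T @ X / nt                              # lag-zero covariance
    M = C0.copy()                                  # tau = 0 contribution
    for tau in range(1, max_lag + 1):
        Ct = X[tau:].T @ X[:-tau] / (nt - tau)     # lag-tau covariance
        M += Ct + Ct.T                             # symmetrized; counts +/- lags
    vals, vecs = eigh(M, C0)                       # solve M q = T1 * C0 q
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

# Red-noise example: components with distinct AR(1) persistence.
rng = np.random.default_rng(4)
nt, nx = 2000, 5
coefs = np.array([0.95, 0.6, 0.3, 0.1, 0.0])
X = np.zeros((nt, nx))
for t in range(1, nt):
    X[t] = coefs * X[t - 1] + rng.standard_normal(nx)
T1, patterns = optimal_persistence_patterns(X)
print("estimated decorrelation times:", np.round(T1, 2))
```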

Examples of OPPs are presented for the Lorenz model and the daily anomaly 500-hPa geopotential height fields. In the case of the Lorenz model, the technique is shown to be far superior to other techniques at capturing persistent, oscillatory signals. As for geopotential height, the technique reveals that the longest decorrelation time, in the space spanned by the first few dozen EOFs, is 12–15 days. It is perhaps noteworthy that this time is virtually identical to the theoretical limit of atmospheric predictability determined in previous studies. This result suggests that the monthly anomaly in this state space, which is often used to study long-term climate variability, arises not from a perturbation that lasts for a month, but rather from a few “episodes” often lasting less than 2 weeks. Depending on the number of EOFs and on which measure of decorrelation time is considered, the leading OPP resembles the Arctic oscillation. The second OPP is associated with an apparent discontinuity around March 1977. The OPP that minimizes decorrelation time (the “trailing OPP”) is associated with synoptic eddies along storm tracks.

The technique not only finds persistent signals in stationary data, but also finds trends, discontinuities, and other low-frequency signals in nonstationary data. Indeed, for datasets containing both a random component and a nonstationary component, maximizing decorrelation time is shown to be equivalent to maximizing the signal-to-noise ratio of low-frequency variations. The technique is especially attractive in this regard because it is very efficient and requires no preconceived notion about the form of the nonstationary signal.

Timothy DelSole

Abstract

This paper gives an introduction to the connection between predictability and information theory, and derives new connections between these concepts. A system is said to be unpredictable if the forecast distribution, which gives the most complete description of the future state based on all available knowledge, is identical to the climatological distribution, which describes the state in the absence of time lag information. It follows that a necessary condition for predictability is for the forecast and climatological distributions to differ. Information theory provides a powerful framework for quantifying the difference between two distributions that agrees with intuition about predictability. Three information-theoretic measures have been proposed in the literature: predictive information, relative entropy, and mutual information. These metrics are discussed with the aim of clarifying their similarities and differences. All three metrics have attractive properties for defining predictability, including the fact that they are invariant with respect to nonsingular linear transformations, decrease monotonically (in a certain sense) in stationary Markov systems, and are easily decomposed into components that optimize them (in certain cases). Relative entropy and predictive information have the same average value, which in turn equals the mutual information. Optimization of mutual information leads naturally to canonical correlation analysis, when the variables are jointly normally distributed. Closed-form expressions of these metrics for finite-dimensional, stationary, Gaussian, Markov systems are derived. Relative entropy and predictive information differ most significantly in that the former depends on the “signal-to-noise ratio” of a single forecast distribution, whereas the latter does not. Part II of this paper discusses the extension of these concepts to imperfect forecast models.
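
The Gaussian closed forms referred to above are standard; a short sketch of the relative entropy between a forecast and a climatology and of Gaussian mutual information, which depends only on canonical correlations (the link to canonical correlation analysis):

```python
# Standard Gaussian closed forms for the measures discussed above (generic
# formulas, not copied from the paper): relative entropy of a forecast
# N(mu_f, S_f) from a climatology N(mu_c, S_c), and mutual information of
# jointly Gaussian (X, Y) from the blocks of their joint covariance.
import numpy as np

def relative_entropy_gaussian(mu_f, S_f, mu_c, S_c):
    """KL divergence of the forecast from the climatology, in nats."""
    k = len(mu_f)
    Sc_inv = np.linalg.inv(S_c)
    d = mu_f - mu_c
    return 0.5 * (np.trace(Sc_inv @ S_f) + d @ Sc_inv @ d - k
                  + np.log(np.linalg.det(S_c) / np.linalg.det(S_f)))

def mutual_information_gaussian(S_xx, S_yy, S_xy):
    """Mutual information of jointly Gaussian (X, Y), in nats."""
    S_full = np.block([[S_xx, S_xy], [S_xy.T, S_yy]])
    return 0.5 * np.log(np.linalg.det(S_xx) * np.linalg.det(S_yy)
                        / np.linalg.det(S_full))

# Scalar check: for correlation rho, mutual information is -0.5 * ln(1 - rho^2).
rho = 0.6
print(mutual_information_gaussian(np.array([[1.0]]), np.array([[1.0]]),
                                  np.array([[rho]])),
      -0.5 * np.log(1 - rho**2))
```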

Timothy DelSole

Abstract

This paper presents a framework for quantifying predictability based on the behavior of imperfect forecasts. The critical quantity in this framework is not the forecast distribution, as used in many other predictability studies, but the conditional distribution of the state given the forecasts, called the regression forecast distribution. The average predictability of the regression forecast distribution is given by a quantity called the mutual information. Standard inequalities in information theory show that this quantity is bounded above by the average predictability of the true system and by the average predictability of the forecast system. These bounds clarify the role of potential predictability, of which many incorrect statements can be found in the literature. Mutual information has further attractive properties: it is invariant with respect to nonlinear transformations of the data, cannot be improved by manipulating the forecast, and reduces to familiar measures of correlation skill when the forecast and verification are jointly normally distributed. The concept of potential predictable components is shown to define a lower-dimensional space that captures the full predictability of the regression forecast without loss of generality. The predictability of stationary, Gaussian, Markov systems is examined in detail. Some simple numerical examples suggest that imperfect forecasts are not always useful for jointly normally distributed systems since greater predictability can often be obtained directly from observations. Rather, the usefulness of imperfect forecasts appears to lie in the fact that they can identify potential predictable components and capture nonstationary and/or nonlinear behavior, which is difficult to capture by low-dimensional, empirical models estimated from short historical records.
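
A synthetic scalar illustration of the bound described above (hypothetical Gaussian data, not the paper's): the mutual information between verification and an imperfect forecast cannot exceed that between verification and the true predictable signal.

```python
# Hedged illustration (synthetic Gaussian example): predictability recovered by
# conditioning the verification on an imperfect forecast, measured by mutual
# information, cannot exceed the predictability of the true system.
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
signal = rng.standard_normal(n)                            # predictable part of the system
verification = signal + 0.8 * rng.standard_normal(n)       # true state = signal + noise
forecast = signal + 1.0 * rng.standard_normal(n)           # imperfect forecast of the signal

def gaussian_mi(a, b):
    """Mutual information (nats) of two scalar, jointly Gaussian series."""
    rho = np.corrcoef(a, b)[0, 1]
    return -0.5 * np.log(1.0 - rho**2)

mi_true = gaussian_mi(verification, signal)        # predictability of the true system
mi_forecast = gaussian_mi(verification, forecast)  # regression-forecast predictability
print(mi_forecast <= mi_true)                      # True: the bound holds
```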

Timothy DelSole

Abstract

This paper presents a framework based on Bayesian regression and constrained least squares methods for incorporating prior beliefs in a linear regression problem. Prior beliefs are essential in regression theory when the number of predictors is not a small fraction of the sample size, a situation that leads to overfitting—that is, to fitting variability due to sampling errors. Under suitable assumptions, both the Bayesian estimate and the constrained least squares solution reduce to standard ridge regression. New generalizations of ridge regression based on priors relevant to multimodel combinations also are presented. In all cases, the strength of the prior is measured by a parameter called the ridge parameter. A “two-deep” cross-validation procedure is used to select the optimal ridge parameter and estimate the prediction error.
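
A generic sketch of ridge regression with nested ("two-deep") cross-validation, in which an inner loop selects the ridge parameter and an outer loop estimates prediction error on data never used in that selection; the paper's multimodel priors and exact procedure are not reproduced here:

```python
# Generic sketch of ridge regression with a nested ("two-deep") cross-validation:
# the inner loop picks the ridge parameter, the outer loop estimates prediction
# error on data never used in the selection. Illustrative synthetic data.
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge estimate: minimize ||y - X b||^2 + lam * ||b||^2."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def cv_error(X, y, lam, k=5):
    """k-fold cross-validated mean squared error for a given ridge parameter."""
    folds = np.array_split(np.arange(len(y)), k)
    err = 0.0
    for hold in folds:
        train = np.setdiff1d(np.arange(len(y)), hold)
        b = ridge_fit(X[train], y[train], lam)
        err += np.mean((y[hold] - X[hold] @ b) ** 2)
    return err / k

rng = np.random.default_rng(6)
n, p = 60, 20                                      # sample size not much larger than predictors
X = rng.standard_normal((n, p))
y = X[:, 0] - 0.5 * X[:, 1] + rng.standard_normal(n)

lams = np.logspace(-2, 3, 20)
outer_err = []
for hold in np.array_split(np.arange(n), 5):       # outer loop: honest prediction error
    train = np.setdiff1d(np.arange(n), hold)
    best_lam = min(lams, key=lambda lam: cv_error(X[train], y[train], lam))  # inner loop
    b = ridge_fit(X[train], y[train], best_lam)
    outer_err.append(np.mean((y[hold] - X[hold] @ b) ** 2))
print("estimated prediction error:", np.mean(outer_err))
```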

The proposed regression estimates are tested on the Development of a European Multimodel Ensemble System for Seasonal to Interannual Prediction (DEMETER) hindcasts of seasonal mean 2-m temperature over land. Surprisingly, none of the regression models proposed here can consistently beat the skill of a simple multimodel mean, despite the fact that one of the regression models recovers the multimodel mean in a suitable limit. This discrepancy arises from the fact that methods employed to select the ridge parameter are themselves sensitive to sampling errors. It is plausible that incorporating the prior belief that regression parameters are “large scale” can reduce overfitting and result in improved performance relative to the multimodel mean. Despite this, results from the multimodel mean demonstrate that seasonal mean 2-m temperature is predictable for at least three months in several regions.

Timothy DelSole

Abstract

A two-layer quasigeostrophic model is used to investigate whether dissipation can induce absolute instability in otherwise convectively unstable or stable background states. It is shown that dissipation of either temperature or lower-layer potential vorticity can cause absolute instabilities over a wide range of parameter values and over a wide range of positive lower-layer velocities (for positive vertical shear). It is further shown that these induced absolute instabilities can be manifested as local instabilities with similar properties. Compared to the previously known absolute instabilities, the induced absolute instabilities are characterized by larger scales, weaker absolute growth rates, and substantially weaker vertical phase tilt (typical values for subtropical states are zonal wavenumber 1–3, an absolute growth rate corresponding to an e-folding time of 80–100 days, and a period of 7–10 days).

The analysis of absolute instabilities, including the case of multiple absolute instabilities, is reviewed in an appendix. Because the dispersion relation of the two-layer model can be written as a polynomial in both wavenumber and frequency, all possible saddle points and poles of the dispersion relation can be determined directly. An unusual feature of induced absolute instabilities is that the absolute growth rate can change discontinuously for small changes in the basic-state parameters. The occurrence of a discontinuity in the secondary instability is not limited to the two-layer model but is a general possibility in any system involving multiple absolute instabilities. Depending on the location of the discontinuity relative to the packet peak, a purely local analysis, as used in many numerical techniques, would extrapolate the secondary absolute instability to incorrect regions of parameter space or fail to detect the secondary absolute instability altogether. An efficient procedure for identifying absolute instabilities that accounts for these issues is developed and applied to the two-layer model.
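
A sketch of the saddle-point calculation underlying an absolute-instability analysis, using the linear complex Ginzburg-Landau (advection-diffusion-growth) equation as a stand-in for the two-layer dispersion relation, since its absolute growth rate, μ − U²/4, is known in closed form:

```python
# Sketch of a saddle-point (pinch-point) calculation for absolute instability,
# using the linear equation dA/dt + U dA/dx = mu*A + d2A/dx2, whose dispersion
# relation is omega(k) = U*k + i*mu - i*k**2, as a stand-in for the two-layer
# QG model. The saddle point satisfies d(omega)/dk = 0 in the complex k plane;
# the flow is absolutely unstable when Im(omega) > 0 there (analytically,
# Im(omega_saddle) = mu - U**2 / 4).
import numpy as np
from scipy.optimize import fsolve

U, mu = 1.0, 0.5

def omega(k):
    return U * k + 1j * mu - 1j * k**2

def domega_dk(k):
    return U - 2j * k

def group_velocity_zero(k_parts):
    kr, ki = k_parts
    d = domega_dk(kr + 1j * ki)
    return [d.real, d.imag]

k_saddle = complex(*fsolve(group_velocity_zero, x0=[0.0, -0.5]))
abs_growth = omega(k_saddle).imag
print("saddle point k:", k_saddle)
print("absolute growth rate:", abs_growth, "(analytic:", mu - U**2 / 4, ")")
```

The procedure developed in the paper must additionally track multiple saddle points and poles of the full dispersion relation; this single-saddle example does not attempt that.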
