• Ambaum, M. H. P., B. J. Hoskins, and D. B. Stephenson, 2001: Arctic Oscillation or North Atlantic Oscillation? J. Climate, 14, 3495–3507.
  • Arnold, L., 1992: Stochastic Differential Equations: Theory and Applications. Krieger, 228 pp.

  • Drazin, P. G., and W. H. Reid, 1981: Hydrodynamic Stability. Cambridge University Press, 527 pp.

  • Ehrendorfer, M., and J. J. Tribbia, 1997: Optimal prediction of forecast error covariances through singular vectors. J. Atmos. Sci., 54, 286–313.
  • Farrell, B. F., and P. J. Ioannou, 1994: A theory for the statistical equilibrium energy and heat flux produced by transient baroclinic waves. J. Atmos. Sci., 51, 2685–2698.
  • Farrell, B. F., and P. J. Ioannou, 1995: Stochastic dynamics of the midlatitude atmospheric jet. J. Atmos. Sci., 52, 1642–1656.

  • Farrell, B. F., and P. J. Ioannou, 1996: Generalized stability. Part I: Autonomous operators. J. Atmos. Sci., 53, 2025–2041.

  • Farrell, B. F., and P. J. Ioannou, 1998: Perturbation structure and spectra in turbulent channel flow. Theor. Comput. Fluid Dyn., 11, 215–227.

  • Farrell, B. F., and P. J. Ioannou, 1999: Perturbation growth and structure in time-dependent flows. J. Atmos. Sci., 56, 3622–3639.

  • Farrell, B. F., and P. J. Ioannou, 2000: Perturbation dynamics in atmospheric chemistry. J. Geophys. Res., 105 (D7), 9303–9320.

  • Farrell, B. F., and P. J. Ioannou, 2002: Perturbation growth and structure in uncertain flows. Part I. J. Atmos. Sci., 59, 2629–2646.

  • Horn, R. A., and C. R. Johnson, 1991: Topics in Matrix Analysis. Cambridge University Press, 607 pp.

  • Hughston, L. P., R. Jozsa, and W. K. Wootters, 1993: A complete classification of quantum ensembles having a given density matrix. Phys. Lett., 183A, 14–18.
  • Molteni, F., R. Buizza, T. N. Palmer, and T. Petroliagis, 1996: The ECMWF ensemble prediction system: Methodology and validation. Quart. J. Roy. Meteor. Soc., 122, 73–119.
  • Palmer, T. N., 2001: A nonlinear dynamical perspective on model error: A proposal for non-local stochastic-dynamic parameterization in weather and climate prediction models. Quart. J. Roy. Meteor. Soc., 127, 279–304.
  • Sardeshmukh, P., C. Penland, and M. Newman, 2001: Rossby waves in a stochastically fluctuating medium. Progress in Probability. P. Imkeller and J. S. von Storch, Eds., Vol. 49, Birkhauser Verlag, 369–384.

  • Schrödinger, E., 1936: Probability relations between separated systems. Proc. Cambridge Philos. Soc., 32, 446–452.

  • Van Huffel, S., and J. Vandewalle, 1991: The Total Least Squares Problem: Computational Aspects and Analysis. SIAM, 300 pp.

  • Van Kampen, N. G., 1992: Stochastic Processes in Physics and Chemistry. Elsevier, 465 pp.

Fig. 1. Schematic evolution of a sure initial condition ψ(0) in an uncertain system. After time t the evolved states ψ(t) lie in the region shown. Initially the covariance matrix 𝗖(0) = ψ(0)ψ†(0) is rank one, but at time t the covariance matrix is of rank greater than one. For example, if the final states were ψᵢ(t) (i = 1, … , 4) with equal probability, the covariance at time t, 𝗖(t) = (1/4) Σᵢ₌₁⁴ ψᵢ(t)ψᵢ†(t), would be rank four, representing a mixed state. By contrast, in certain systems the rank of the covariance matrix is invariant and a pure state evolves to a pure state.

Fig. 2. Second-moment exponent as a function of wavenumber in the Eady model with shear fluctuation amplitude ϵ = 1/3 and autocorrelation times tc = 6 and tc = 1. For comparison the bottom curve is the exponent in the absence of fluctuations. The meridional wavenumber is l = 0.

Fig. 3. Expected optimal energy growth as a function of optimizing time for the uncertain Eady model with velocity fluctuations of amplitude ϵ = 1/3 and autocorrelation times tc = 2 and tc = 6. Shown are optimal growth given by the equivalent white noise propagator and by the exact propagator. For comparison the optimal growth from the mean propagator with no fluctuations is also shown. The fluctuations generally increase perturbation growth and the equivalent white noise approximation overestimates the growth. The fluctuating wind has vertical structure u(z) = z². The wavenumbers are k = 3, l = 3. The coefficient of linear friction is r = 0.3.

Fig. 4. Structure of the optimal perturbation that leads to greatest expected energy at t = 4 in the uncertain Eady model. The amplitude of the fluctuations is ϵ = 1/3 and the autocorrelation time is tc = 6; other parameters are as in Fig. 3: (top) the optimal of the mean operator, which produces energy growth 1.14; (middle) the optimal of the equivalent white noise dynamics, which produces energy growth 3.9; (bottom) the optimal of the exact dynamics, which produces energy growth 1.4.

Fig. 5. Structure of the first EOF evolved from the sure optimal initial state shown in Fig. 4: (top) the sure evolved optimal of the mean dynamics; (middle) the first EOF of the covariance obtained using the equivalent white noise dynamics; the first EOF accounts for 96% of the evolved optimal covariance; (bottom) the first EOF of the covariance obtained using the exact dynamics. Note that despite the exaggerated growth factor obtained using the equivalent white noise dynamics, the structure of the evolved covariance is well approximated.

Fig. 6. Optimal expected energy growth as a function of wavenumber, k, for the uncertain Eady model with fluctuation amplitude ϵ = 1/3 and autocorrelation time tc = 6. Shown are the optimal growth at t = 4 obtained from the equivalent white noise propagator, the exact propagator, the propagator of Eq. (66), which would have been exact if 𝗔 and 𝗕 commuted, and the optimal growth from the mean propagator. The fluctuating wind has vertical structure u(z) = z². The wavenumbers are k = 3, l = 3. The coefficient of linear friction is r = 0.3.

Fig. 7. (left) Expected energy growth achieved by the optimal perturbations in four time units. Shown are the growth achieved by optimal perturbations obtained using the exact ensemble mean square dynamics (circles), the growth achieved by the optimal perturbations obtained using the equivalent white noise dynamics (crosses), and the growth achieved by the optimals of the mean dynamics in the absence of fluctuations (stars). (right) EOF decomposition of the covariance at t = 4 arising from evolution of the rank-one initial covariance of the top optimal perturbation. Shown are the variance percentage accounted for by the EOFs of the covariance evolved by the exact dynamics (circles), the variance accounted for by the EOFs of the equivalent white noise dynamics (crosses), and the variance accounted for by the EOFs of the mean dynamics (stars). The covariance evolved with the mean dynamics remains rank one and a single EOF accounts for 100% of its variance. The evolved covariance under uncertain dynamics is mixed and spanned by approximately two states. The amplitude of the fluctuations is ϵ = 1/3 and the autocorrelation time is tc = 10; the model and the other parameters are as in Fig. 3.

Fig. 8. Structure of the first EOF of maintained variance in the Eady model. (top) The first EOF produced by temporally and spatially white noise additive forcing of the mean operator; the maintained energy is 2.2, and the first EOF accounts for 13.9% of the variance. (middle) The first EOF produced in the equivalent white noise approximation by temporally and spatially white noise forcing of the operator associated with wind fluctuating about the mean; the fluctuating wind is u(z) = z², the rms amplitude of the fluctuations is ϵ = 1/3, and the autocorrelation time is tc = 6; the maintained energy is 2.7, and the first EOF accounts for 26% of the variance. (bottom) The exact first EOF for the operator associated with the same fluctuating wind as in the middle panel but with the assumption that the fluctuations are Gaussian with Kubo number K = 2; the maintained energy is 2.6, and the first EOF accounts for 23.8% of the variance; the wavenumbers are k = 3, l = 3; the coefficient of linear friction is r = 0.3.

Fig. 9. (top) Structure of the first stochastic optimal, which is responsible for producing 11.1% of the total variance when the mean operator of the Eady model is stochastically forced with temporally white additive noise with the spatial structure of the stochastic optimal. (bottom) The structure of the first stochastic optimal in the fluctuating Eady model in the equivalent white noise approximation; the fluctuating wind is u(z) = z², the rms amplitude of the fluctuations is ϵ = 1/3, and the autocorrelation time is tc = 6; this stochastic optimal is responsible for producing 20.6% of the total variance. The wavenumbers are k = 3, l = 3; the coefficient of linear friction is r = 0.3.

Fig. 10. (left) Variance maintained by the first 10 stochastic optimals of the uncertain Eady model: the variance maintained by the equivalent white noise approximation (crosses), and for reference the variance maintained by the mean operator without fluctuations (stars). (right) The percentage of the variance of the uncertain Eady model arising from the first 10 EOFs with stochastic forcing white in space and time: variance explained by the equivalent white noise approximation (crosses), variance explained by the mean operator (stars); the fluctuating wind is u(z) = z², the rms amplitude of the fluctuations is ϵ = 1/3 and the autocorrelation time is tc = 10; the wavenumbers are k = 3, l = 3; the coefficient of linear friction is r = 0.1.


Perturbation Growth and Structure in Uncertain Flows. Part II

  • 1 Department of Earth and Planetary Sciences, Harvard University, Cambridge, Massachusetts
  • 2 Department of Physics, National and Capodistrian University of Athens, Athens, Greece

Abstract

Perturbation growth in uncertain systems associated with fluid flow is examined concentrating on deriving, solving, and interpreting equations governing the ensemble mean covariance. Covariance evolution equations are obtained for fluctuating operators and illustrative physical examples are solved. Stability boundaries are obtained constructively in terms of the amplitude and structure of operator fluctuation required for existence of bounded second-moment statistics in an uncertain system. The forced stable uncertain system is identified as a primary physical realization of second-moment dynamics by using an ergodic assumption to make the physical connection between ensemble statistics of stable stochastically excited systems and observations of time mean quantities. Optimal excitation analysis plays a central role in generalized stability theory and concepts of optimal deterministic and stochastic excitation of certain systems are extended in this work to uncertain systems. Remarkably, the optimal excitation problem has a simple solution in uncertain systems: there is a pure structure producing the greatest expected ensemble perturbation growth when this structure is used as an initial condition, and a pure structure that is most effective in exciting variance when this structure is used to stochastically force the system distributed in time.

Optimal excitation analysis leads to an interpretation of the EOF structure of the covariance both for the case of optimal initial excitation and for the optimal stochastic excitation distributed in time that maintains the statistically steady state. Concepts of pure and mixed states are introduced for interpreting covariances and these ideas are used to illustrate fundamental limitations on inverting covariances for structure in stochastic systems in the event that only the covariance is known.

Corresponding author address: Dr. Brian F. Farrell, Division of Engineering and Applied Sciences, Harvard University, Pierce Hall, 29 Oxford St., Cambridge, MA 02138. Email: farrell@deas.harvard.edu


1. Introduction

Dynamical equations for evolution of ensemble mean fields under uncertain dynamics were obtained in Farrell and Ioannou (2002, hereafter Part I). Specifically, for the uncertain linear system
  dψ/dt = [𝗔 + ϵξ(t)𝗕]ψ,  (1)
an exact equation for the ensemble mean state 〈ψ(t)〉 initialized at t = 0 was obtained, where the bracket 〈·〉 denotes the ensemble mean over realizations of the O(1) random variable ξ(t). In (1), 𝗔 is the ensemble mean operator, ϵ is the amplitude of the operator fluctuations, and 𝗕 is the matrix of the fluctuation structure. The random variable ξ(t) is assumed to be stationary with zero mean, unit variance, and autocorrelation time tc = 1/ν, that is, 〈ξ(t + τ)ξ(t)〉 = e^(−ν|τ|). For general 𝗔 and 𝗕 the equation for the ensemble mean state initialized at t = 0 is
  d〈ψ〉/dt = [𝗔 + ϵ²𝗕 ∫₀ᵗ e^(−ντ) e^(𝗔τ) 𝗕 e^(−𝗔τ) dτ]〈ψ〉,  (2)
which for 𝗔 and 𝗕 commuting reduces to
  d〈ψ〉/dt = {𝗔 + (ϵ²/ν)[1 − e^(−νt)]𝗕²}〈ψ〉.  (3)
Dynamical equations (2) and (3) are exact for Gaussian ξ(t) and when the fluctuations are Gaussian we replace the general random variable ξ by its Gaussian counterpart η. It was shown in Part I of this paper that (2) and (3) remain accurate approximations for the physically important class of ξ that are Gaussian up to fluctuation amplitude Ξ0, with zero probability of values greater than Ξ0, provided that the Kubo number associated with the fluctuations, defined as K = ϵtc, is less than Ξ0. This is important because physical quantities are typically bounded in variation while the Gaussian is not, so that predictions depending on unbounded variation inherent in an assumed Gaussianity of the operator fluctuations are not physical.
Notable is the short autocorrelation time limit of (2) (Arnold 1992; Sardeshmukh et al. 2001):
  d〈ψ〉/dt = [𝗔 + (ϵ²/ν)𝗕²]〈ψ〉.  (4)
This equation is generally valid when the autocorrelation time of the fluctuations is short enough and the Kubo number K = ϵtc ≪ 1 (Van Kampen 1992). In particular, this equation governs the evolution of 〈ψ〉 for fluctuations that are temporally white (Arnold 1992). For nonwhite fluctuations, (4) may be regarded as an approximate evolution equation valid for small Kubo numbers that will be referred to as the equivalent white noise approximation. It should be noted that Eqs. (2) and (3) are valid only for evolution from t = 0 and in particular cannot be directly used to obtain the response to continuous forcing, while (4) may be so used.
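The commuting-case result (3) can be checked by direct Monte Carlo simulation. The sketch below (illustrative scalar parameters, not values from the paper) integrates dψ/dt = (a + ϵξ(t)b)ψ over an ensemble of Ornstein–Uhlenbeck realizations of ξ and compares the ensemble mean at time T with the solution of (3):

```python
import numpy as np

# Monte Carlo check of the commuting-case ensemble-mean equation (3)
# for the scalar system dpsi/dt = (a + eps*xi(t)*b) psi, with xi(t) an
# Ornstein-Uhlenbeck process (Gaussian, zero mean, unit variance,
# autocorrelation <xi(t+tau)xi(t)> = exp(-nu*|tau|)).
# Parameter values are illustrative, not from the paper.
a, b, eps, nu = -0.5, 1.0, 0.4, 2.0
T, dt, N = 3.0, 0.005, 50_000
rng = np.random.default_rng(0)

xi = rng.standard_normal(N)             # stationary start: unit variance
integral = np.zeros(N)                  # accumulates int_0^T xi dt'
rho = np.exp(-nu * dt)                  # exact OU one-step autocorrelation
for _ in range(int(T / dt)):
    integral += xi * dt
    xi = rho * xi + np.sqrt(1 - rho**2) * rng.standard_normal(N)

# Each realization solves exactly: psi(T) = exp(a T + eps b int_0^T xi)
psi_mean_mc = np.mean(np.exp(a * T + eps * b * integral))

# Solution of (3) for commuting A and B:
# <psi(T)> = exp(a T + eps^2 b^2 [T/nu - (1 - e^{-nu T})/nu^2]) psi(0)
psi_mean_theory = np.exp(a * T + eps**2 * b**2
                         * (T / nu - (1 - np.exp(-nu * T)) / nu**2))

print(psi_mean_mc, psi_mean_theory)
assert abs(psi_mean_mc - psi_mean_theory) / psi_mean_theory < 0.05
```

With these parameters the operator fluctuations visibly slow the mean decay relative to e^(aT), the effect captured by the 𝗕² term in (3).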

Having obtained equations for the ensemble mean field, 〈ψ〉, under uncertain dynamics in Part I, we wish now to obtain equations for the ensemble mean covariance, 〈𝗖〉 = 〈ψψ†〉, under similarly uncertain dynamics.

2. Covariance dynamics for operators that are certain but with uncertain initial conditions or uncertain additive forcing

We begin with a review of covariance dynamics of certain systems, which forms the background for analyzing covariance dynamics of uncertain systems and provides an opportunity to introduce the tensor product notation, which greatly facilitates the analysis (cf. Horn and Johnson 1991). For the reader's convenience the basic properties of the tensor product are reviewed in appendix A.

The instantaneous covariance matrix of the state ψ is 𝗖 = ψψ†, where † denotes hermitian transposition. The covariance matrix 𝗖 can be associated with the n² vector c = ψ* ⊗ ψ, where * denotes complex conjugation and ⊗ denotes the tensor product, with the convention that to each n × n matrix 𝗠, with elements mᵢⱼ, we associate the n² vector, m, with elements
  m_{i+(j−1)n} = m_{ij},  i, j = 1, … , n.  (5)
Reversing this process, the elements of an n2 vector are formed into an n × n matrix by breaking the vector into n equal parts and arranging these as the n columns of the matrix. With this convention, matrix equations can be cast as vector equations. For example, 𝗔𝗖 is written using tensor products as (𝗜 ⊗ 𝗔)c, where c is the vector associated with matrix 𝗖 as described above, 𝗜 is the identity matrix, and the tensor product 𝗜 ⊗ 𝗔 is an n2 × n2 matrix, the dimension being appropriate for transforming the n2 vector c. Similarly, the matrix product 𝗖𝗔 is written (𝗔T ⊗ 𝗜)c, where T denotes unconjugated transposition.
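These vec/tensor-product conventions are easy to verify numerically; a minimal numpy sketch with illustrative random matrices:

```python
import numpy as np

# Check of the tensor-product (vec) conventions of section 2, with
# column stacking: an n x n matrix M maps to the n^2 vector m formed
# from its columns.  Then AC <-> (I kron A) c, CA <-> (A^T kron I) c,
# and c = conj(psi) kron psi for C = psi psi^dagger.
rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)

vec = lambda M: M.flatten(order="F")         # stack columns
C = np.outer(psi, psi.conj())                # C = psi psi^dagger
c = vec(C)
I = np.eye(n)

assert np.allclose(c, np.kron(psi.conj(), psi))      # c = psi* (x) psi
assert np.allclose(vec(A @ C), np.kron(I, A) @ c)    # AC <-> (I (x) A) c
assert np.allclose(vec(C @ A), np.kron(A.T, I) @ c)  # CA <-> (A^T (x) I) c
```

The same conventions are used for all the superoperator constructions that follow.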
Consider a state vector, ψ, which for each realization of the forcing, f(t), satisfies the linear time-dependent equation
  dψ/dt = 𝗔(t)ψ + f(t),  (6)
in which the linear operator, 𝗔(t), is a certain (not random) matrix function of time, and ψ is an n-dimensional column vector.
We see from (6) using the properties of the tensor product that each realization of the vector covariance, c = ψ* ⊗ ψ, satisfies the equation
  dc/dt = [𝗜 ⊗ 𝗔(t) + 𝗔*(t) ⊗ 𝗜]c + ψ* ⊗ f + f* ⊗ ψ.  (7)
Consider forming the ensemble average of the vector covariance over forcing realizations, with the ensemble average indicated by [c]. Notice that the notation [·] denotes ensembling over forcing realizations rather than over operator realizations, which is indicated by 〈·〉. For delta-correlated stochastic forcing, [f(t)f†(s)] = 𝗤δ(t − s) with forcing structure matrix 𝗤 ≡ [ff†], it can be shown1 that the rhs term, representing correlation of the state and the forcing in (7), has ensemble average
  [ψ* ⊗ f + f* ⊗ ψ] = q,  (8)
where q is the covariance vector corresponding to covariance matrix 𝗤, so that we obtain from (7) the covariance evolution equation,
  d[c]/dt = [𝗜 ⊗ 𝗔(t) + 𝗔*(t) ⊗ 𝗜][c] + q,  (9)
which is the tensor form of the familiar Lyapunov equation (Farrell and Ioannou 1996, hereafter FI96),
  d[𝗖]/dt = 𝗔(t)[𝗖] + [𝗖]𝗔†(t) + 𝗤.  (10)
The tensor covariance equation (9) makes apparent that the evolution operator of the covariance matrix is the certain Lyapunov superoperator
  𝓛c = 𝗜 ⊗ 𝗔(t) + 𝗔*(t) ⊗ 𝗜.  (11)
The prefix “super” is introduced as a reminder that this operator acts on matrices associated with the state, albeit matrices in vector form, and not on state vectors themselves. The superoperator has dimension n2 × n2 if the state vector has dimension n. We use the term covariance to refer to the vector c and to the equivalent matrix 𝗖, as appropriate.
Consider evolution of the covariance c(0) = ψ(0)* ⊗ ψ(0) associated with a sure (not random) initial state ψ(0) and for simplicity take the certain (not random) operator 𝗔 to be autonomous. At time t, the covariance is
  c(t) = e^(𝓛c t) c(0).  (12)
If the initial conditions are uncertain with ensemble average initial covariance [c(0)], then the ensemble average covariance at time t is
  [c(t)] = e^(𝓛c t) [c(0)].  (13)
If 𝗔 is time independent and stable, the ensemble average covariance approaches asymptotically:
  [c]∞ = −𝓛c⁻¹ q.  (14)
In matrix notation, (14) is the familiar Lyapunov equation (cf. FI96),
  𝗔[𝗖]∞ + [𝗖]∞𝗔† + 𝗤 = 0.  (15)
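Equations (14) and (15) can be exercised directly: build the Lyapunov superoperator with Kronecker products, invert it against a forcing covariance, and substitute the result back into the matrix Lyapunov equation. The matrices below are illustrative:

```python
import numpy as np

# Steady-state Lyapunov equation (15) solved in tensor form (14):
# c_inf = -Lc^{-1} q with Lc = I kron A + conj(A) kron I.
# A is a random matrix shifted to be stable; Q = F F^T is a
# positive-definite forcing covariance.  All matrices are illustrative.
rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) - 4.0 * np.eye(n)
assert np.max(np.linalg.eigvals(A).real) < 0          # A is stable

F = rng.standard_normal((n, n))
Q = F @ F.T
vec = lambda M: M.flatten(order="F")
I = np.eye(n)

Lc = np.kron(I, A) + np.kron(A.conj(), I)             # Lyapunov superoperator
c_inf = -np.linalg.solve(Lc, vec(Q))
C_inf = c_inf.reshape((n, n), order="F")

# Verify the matrix Lyapunov equation A C + C A^dagger + Q = 0
residual = A @ C_inf + C_inf @ A.conj().T + Q
assert np.allclose(residual, 0, atol=1e-8)
assert np.allclose(C_inf, C_inf.conj().T)             # hermitian
assert np.all(np.linalg.eigvalsh(C_inf) > 0)          # positive definite
```

The superoperator inverse is practical only for modest state dimension n, since 𝓛c is n² × n²; for large systems specialized Lyapunov solvers are used instead.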

a. Pure and mixed states and the interpretation of EOF analysis

When the state of the system, ψ, is known exactly the system is said to be in a pure state. In this case the associated covariance c = ψ* ⊗ ψ has rank one. Conversely, covariances of rank one are associated with systems in a pure state, and this state can be determined within a phase by singular value decomposition (SVD) of the covariance matrix. Because physical states are real, the arbitrariness in the phase of the state is resolved, and rank one covariances can be inverted for a pure structure, within a sign; and association of a covariance with a state is unambiguous if the system is in a pure state.

However, covariances with rank exceeding one cannot be inverted for structure, and we say that the state of the system is mixed; the rank of the covariance identifies the number of states involved in producing the covariance. When the covariance of the system is mixed there does not exist an unambiguous preparation of the system with the given covariance, and a large set of structures ψ could equally well have been responsible for producing the covariance.

The familiar empirical orthogonal function (EOF) analysis into orthonormal structures is obtained from singular value decomposition of the covariance matrix.2 It is tempting to regard the resulting EOF basis as identifying the states that produced the covariance or at least to regard the EOFs and the singular values of the covariance matrix as having special significance among the possible ensembles of states that could have produced the covariance. Although this is valid for our examples, it is unfortunately not the case in general; for example, consider the 2 × 2 covariance with EOFs e₁ with eigenvalue α₁ and its orthogonal e₂ with eigenvalue α₂ and with α₁ ≠ α₂. The resulting covariance matrix is
  𝗖 = α₁e₁e₁† + α₂e₂e₂†,  (16)
or in tensor product notation,
  c = α₁e₁* ⊗ e₁ + α₂e₂* ⊗ e₂.  (17)
Consider now the quite different states:
  e₊ = √α₁ e₁ + √α₂ e₂,  (18a)
  e₋ = √α₁ e₁ − √α₂ e₂.  (18b)
It can be verified that an ensemble in which e+ and e appear with equal probability produces the same covariance matrix 𝗖,
  (1/2)(e₊* ⊗ e₊ + e₋* ⊗ e₋) = c,  (19)
demonstrating the ambiguity inherent in associating states with covariances. Note, however, that the assumed orthogonality of the original states e₁ and e₂ has not been preserved by the transformation forming e₊ and e₋, as is immediately verified by forming the dot product e₊†e₋ = α₁ − α₂ ≠ 0.

In general, EOFs are one basis out of an infinity of possible bases that give rise to the same covariance matrix. The EOFs are special in that they form the only3 orthogonal basis, and also in that the EOFs form the optimal basis for 𝗖 in the least squares sense.4 However, there is generally no a priori reason based solely on the covariance to suppose that the EOF basis in fact produced a given covariance matrix.5 Accepting this ambiguity in identifying the states producing a given 𝗖, one may inquire if there exist restrictions on the class of candidate bases for a given covariance.

We can construct an infinity of nonorthogonal states that produce the same covariance matrix 𝗖 by generalizing the simple example given earlier. Consider6 eᵢ to be the ith EOF with associated singular value αᵢ and define the orthogonal but not orthonormal set ψᵢ = √αᵢ eᵢ (no summation). Then the covariance 𝗖 can be written in vector form as
  c = Σᵢ ψᵢ* ⊗ ψᵢ.  (20)
Consider now a unitary matrix 𝗨 (𝗨𝗨† = 𝗜) that associates the set of states ψᵢ with a new set of states by ϕᵢ = Σⱼ Uᵢⱼψⱼ (note that the sum acts on the states themselves and not on their components, so that ϕᵢ* = Σⱼ U*ᵢⱼψⱼ*). Then the states ψᵢ and ϕᵢ form the same covariance 𝗖. Indeed,
  Σᵢ ϕᵢ* ⊗ ϕᵢ = Σⱼₖ (Σᵢ U*ᵢⱼUᵢₖ) ψⱼ* ⊗ ψₖ = Σⱼₖ δⱼₖ ψⱼ* ⊗ ψₖ = c.  (21)
This unitary matrix of coefficients is the maximum allowed freedom in the choice of the ensembles (Schrödinger 1936; Hughston et al. 1993). Also, it is easily seen that ψi form the only orthogonal basis because the states ϕi cannot be orthogonal as
  ϕᵢ†ϕⱼ = Σₖ αₖ U*ᵢₖUⱼₖ ≠ 0,  i ≠ j,  (22)
unless the ψi are orthonormal and 𝗖 is the identity.

We conclude that there is a unique orthogonal basis for 𝗖, that is, the EOFs, and another nonorthogonal candidate basis for every possible unitary transformation. While in general there is no a priori reason based on the covariance alone to conclude that the orthogonal basis produced a given 𝗖, if the additional assumption is made of Gaussianity of the pdf of states, based, for example, on observation or on the system being linear and forced by a spatially and temporally white noise process, then the EOF basis is required for representation of the Gaussian pdf in EOF coordinates. On the other hand, we may have observational evidence for systematic occurrence of preferred states associated with one of the possible nonorthogonal bases; however, this would imply a non-Gaussian pdf for the states.
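The unitary freedom in ensembles described above can be demonstrated numerically. The sketch below builds states ψᵢ = √αᵢ eᵢ from illustrative EOFs, mixes them with a random unitary as in (21), and checks that the covariance is unchanged while orthogonality is lost:

```python
import numpy as np

# Unitary freedom in ensembles producing a given covariance (Schrödinger
# 1936; Hughston et al. 1993): the states psi_i = sqrt(alpha_i) e_i and
# any unitarily related set phi_i = sum_j U_ij psi_j give the same C.
# The EOFs and eigenvalues here are illustrative.
rng = np.random.default_rng(3)
n = 4
E, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal EOFs e_i
alpha = np.array([4.0, 2.0, 1.0, 0.5])             # distinct eigenvalues
psi = E * np.sqrt(alpha)                           # column i: sqrt(a_i) e_i

C = sum(np.outer(psi[:, i], psi[:, i].conj()) for i in range(n))
assert np.allclose(C, E @ np.diag(alpha) @ E.T)

# A random unitary U mixes the states: phi_i = sum_j U_ij psi_j
U, _ = np.linalg.qr(rng.standard_normal((n, n))
                    + 1j * rng.standard_normal((n, n)))
phi = psi @ U.T                                    # column i is phi_i

C_phi = sum(np.outer(phi[:, i], phi[:, i].conj()) for i in range(n))
assert np.allclose(C_phi, C)                       # same covariance, (21)

# ... but the phi_i are no longer orthogonal, eq. (22)
overlaps = phi.conj().T @ phi
assert not np.allclose(overlaps, np.diag(np.diag(overlaps)))
```

Because the αᵢ are distinct, the mixed states ϕᵢ have nonzero overlaps, illustrating that only the EOF-derived set is orthogonal.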

Advancing the covariance matrix under certain dynamics does not alter the initial rank of the covariance. This follows from the uniqueness of solutions of certain differential equations. To each sure initial covariance c(0) = ψ(0)* ⊗ ψ(0) produced from the pure state ψ(0) there corresponds, at a later time t, the covariance c(t) = ψ(t)* ⊗ ψ(t), which is also rank one. Similarly, if the initial covariance has rank r ≤ n, it can be expanded in its EOF basis:
  c(0) = Σᵢ₌₁ʳ λᵢ eᵢ(0)* ⊗ eᵢ(0),  (23)
where ei(0) are the orthogonal eigenfunctions of c(0) and all λi > 0. At time t, c(t) becomes
  c(t) = Σᵢ₌₁ʳ λᵢ eᵢ(t)* ⊗ eᵢ(t),  (24)
where all the ei(t) = P(t)ei(0) are distinct, because of the uniqueness of the solutions, and as a result the rank of the covariance is preserved under certain dynamics.
It may prove useful for the development that follows to isolate the property of the certain superoperator (11),
  𝓛c = 𝗜 ⊗ 𝗔(t) + 𝗔*(t) ⊗ 𝗜,  (25)
that leads to the invariance of rank. Consider again the rank one initial covariance c(0) = ψ(0)* ⊗ ψ(0), which at time t is advanced to c(t) = 𝒫(t)c(0), where 𝒫(t) is the propagator associated with the certain superoperator 𝓛c. Because c(t) = ψ(t)* ⊗ ψ(t) we can express the propagator of the superoperator 𝒫(t) in terms of the propagator P(t), which advances ψ(0), as
  𝒫(t) = P(t)* ⊗ P(t).  (26)
This is true because
  𝒫(t)c(0) = [P(t)* ⊗ P(t)][ψ(0)* ⊗ ψ(0)] = [P(t)ψ(0)]* ⊗ [P(t)ψ(0)] = ψ(t)* ⊗ ψ(t) = c(t).  (27)
The property of the certain superoperator that makes possible the splitting (26) of the propagator of the covariance dynamics as the tensor product of the propagators of the dynamics P(t) is the commutation7 of 𝗜 ⊗ 𝗔 and 𝗔* ⊗ 𝗜. Under uncertain dynamics the superoperator cannot be split into commuting parts, and as a result the propagator 𝒫(t) is not the tensor product of the propagators as in (26), with the consequence that the rank of the covariance is not preserved. As a result, under uncertain dynamics a system that starts in a pure state will at a later time be in a mixed state.
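The splitting (26) and the resulting rank preservation can be illustrated numerically (random illustrative operator; the matrix exponential below is a plain Taylor series, adequate for this small example):

```python
import numpy as np

# Under certain dynamics the covariance propagator splits as the tensor
# product of the state propagators, eq. (26), so a rank-one (pure)
# initial covariance stays rank one.
def expm(M, terms=40):
    # Plain Taylor-series matrix exponential; fine for small ||M||.
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(4)
n, t = 4, 0.5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
psi0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

P = expm(A * t)                                 # state propagator
scriptP = np.kron(P.conj(), P)                  # covariance propagator (26)
c0 = np.kron(psi0.conj(), psi0)                 # pure initial covariance

c_t = scriptP @ c0
psi_t = P @ psi0
assert np.allclose(c_t, np.kron(psi_t.conj(), psi_t))   # eq. (27)

C_t = c_t.reshape((n, n), order="F")
assert np.linalg.matrix_rank(C_t) == 1          # rank is preserved
```

Replacing the certain propagator with realization-dependent propagators and averaging would produce a covariance of rank greater than one, the mixed state discussed above.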

3. Covariance dynamics of uncertain systems

Consider an uncertain mean flow with ensemble mean operator 𝗔, and fluctuation operator ϵη(t)𝗕, with η(t) a Gaussian zero mean, unit variance process with autocorrelation time tc. According to (9) with 𝗔(t) = 𝗔 + ϵη(t)𝗕 for each realization of the uncertain flow the covariance formed after averaging over the realizations of the (additive) forcing evolves as
  dc/dt = [𝓐 + ϵη(t)𝓑]c + q,  (28)
in which the symbol [·], denoting the ensemble average over (additive) forcing realizations, has been suppressed. The mean and fluctuation superoperators in (28) are
  𝓐 = 𝗜 ⊗ 𝗔 + 𝗔* ⊗ 𝗜,  𝓑 = 𝗜 ⊗ 𝗕 + 𝗕* ⊗ 𝗜.  (29)
The ensemble average evolution equation for the covariance, c, over realizations of η, indicated by 〈·〉, and valid for evolution from t = 0, is obtained by an argument similar to that leading to (2), with the result
  d〈c〉/dt = [𝓐 + ϵ²𝓑𝓓(t)]〈c〉 + q,  (30)
where
  𝓓(t) = ∫₀ᵗ e^(−ντ) e^(𝓐τ) 𝓑 e^(−𝓐τ) dτ.  (31)
Repeated application of the tensor product property [Eq. (A2)] gives
  𝓓(t) = 𝗜 ⊗ 𝗗(t) + 𝗗*(t) ⊗ 𝗜,  (32)
where
  𝗗(t) = ∫₀ᵗ e^(−ντ) e^(𝗔τ) 𝗕 e^(−𝗔τ) dτ,  (33)
and the ensemble average covariance evolution equation can be written
  d〈c〉/dt = 𝓛〈c〉 + q,  (34)
in which appears the generalized Lyapunov superoperator L:
  𝓛 = 𝓐 + ϵ²𝓑[𝗜 ⊗ 𝗗(t) + 𝗗*(t) ⊗ 𝗜].  (35)
As was previously demonstrated in the argument leading to (4), for short autocorrelation times (35) reduces to the familiar equivalent white noise superoperator:
  𝓛w = 𝓐 + (ϵ²/ν)𝓑².  (36)
In matrix notation the ensemble covariance dynamical equation corresponding to (34), which is valid for any fluctuation autocorrelation time, tc = 1/ν, and for evolution from t = 0 is
  d〈𝗖〉/dt = 𝗔〈𝗖〉 + 〈𝗖〉𝗔† + ϵ²[𝗕(𝗗(t)〈𝗖〉 + 〈𝗖〉𝗗†(t)) + (𝗗(t)〈𝗖〉 + 〈𝗖〉𝗗†(t))𝗕†] + 𝗤.  (37)
The short autocorrelation limit of (37) is (Arnold 1992; Sardeshmukh et al. 2001)
  d〈𝗖〉/dt = 𝗔〈𝗖〉 + 〈𝗖〉𝗔† + (ϵ²/ν)[𝗕²〈𝗖〉 + 2𝗕〈𝗖〉𝗕† + 〈𝗖〉𝗕†²] + 𝗤.  (38)
In the following the 〈·〉 symbol indicating ensemble average over operator realizations is also suppressed, so that when 𝗖 appears it is understood to be the ensemble average covariance with the ensemble average taken over the forcing realizations and/or operator realizations as appropriate.

a. Steady-state covariance maintained by stochastic forcing

Under the ergodic assumption the asymptotic ensemble average covariance, when it exists, equals the covariance obtained by taking a long time average of a single realization of the flow. It exists when the uncertain superoperator 𝓛 is asymptotically decaying, in which case the forced covariance is
  〈c〉∞ = lim(t→∞) ∫₀ᵗ 𝒫(t, s) q ds,  (39)
where P(t, s) is the propagator associated with the uncertain ensemble mean covariance dynamics.
In the short autocorrelation time limit if the generalized Lyapunov operator, Lw, is asymptotically stable and the ensemble mean operator, 𝗔, is time independent, then (39) converges to
  〈c〉∞ = −𝓛w⁻¹ q.  (40)
In equivalent matrix notation the equilibrium covariance satisfies the generalized Lyapunov equation for steady state 𝗖 (Arnold 1992):
  [𝗔 + (ϵ²/ν)𝗕²]𝗖∞ + 𝗖∞[𝗔 + (ϵ²/ν)𝗕²]† + 2(ϵ²/ν)𝗕𝗖∞𝗕† + 𝗤 = 0.  (41)
Note that the familiar Lyapunov equation (15) has been modified in two ways: the mean operator has been augmented by a term proportional to 𝗕², and the term 𝗕𝗖𝗕† appears.
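The generalized Lyapunov equation (41) can be checked by solving the white-noise superoperator form (40) with Kronecker products and substituting the result into (41); 𝗔, 𝗕, and 𝗤 below are illustrative random matrices:

```python
import numpy as np

# Steady covariance of the uncertain system in the white-noise limit,
# eqs. (40)/(41): c_inf = -Lw^{-1} q with Lw = sA + (eps^2/nu) sB^2,
# where sA and sB are the mean and fluctuation superoperators.
rng = np.random.default_rng(5)
n, eps, nu = 4, 0.2, 2.0
A = rng.standard_normal((n, n)) - 5.0 * np.eye(n)   # stable mean operator
B = rng.standard_normal((n, n))
F = rng.standard_normal((n, n))
Q = F @ F.T

I = np.eye(n)
vec = lambda M: M.flatten(order="F")
sA = np.kron(I, A) + np.kron(A.conj(), I)
sB = np.kron(I, B) + np.kron(B.conj(), I)
Lw = sA + (eps**2 / nu) * (sB @ sB)                 # equivalent white noise

c_inf = -np.linalg.solve(Lw, vec(Q))
C = c_inf.reshape((n, n), order="F")

# Matrix form (41): the mean operator is augmented by a B^2 term and a
# B C B^dagger term appears.
Aaug = A + (eps**2 / nu) * (B @ B)
residual = (Aaug @ C + C @ Aaug.conj().T
            + 2 * (eps**2 / nu) * B @ C @ B.conj().T + Q)
assert np.allclose(residual, 0, atol=1e-8)
assert np.allclose(C, C.T, atol=1e-8)               # symmetric covariance
```

Setting ϵ = 0 recovers the certain Lyapunov solution of (15), so the two extra terms isolate the effect of the operator fluctuations on the maintained covariance.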

b. Examples of covariance dynamics

The streamfunction for the barotropic flow with fluctuating wind example introduced in Part I is governed by
  ∂ψ/∂t = −(U₀ + ϵξ(t))∂ψ/∂x − rψ.  (42)
Consider a single wave with wavenumber, k, in which case the mean operator is 𝗔 = –ikU₀ – r, the fluctuation operator is 𝗕 = –ik, and 𝗕 commutes with 𝗔. In this case both superoperators 𝓑 = 𝗜 ⊗ 𝗕 + 𝗕* ⊗ 𝗜 and 𝓓 = 𝗜 ⊗ 𝗗(t) + 𝗗*(t) ⊗ 𝗜 vanish in (35), so the mean covariance evolves as in the certain problem, and the asymptotic covariance maintained by the fluctuating operator is identical to that maintained by the mean operator. From a physical perspective the reason is that wind fluctuations in this example change the phase of the waves but do not affect the wave structure or growth. As a result each realization has the same square amplitude and the moment growth rates are identical. From the point of view of the covariance equation, phase averaging in the ensemble mean equation accounted for by the 𝗕² term in (37) or (38) is exactly canceled by the 𝗕𝗖𝗕† term.
The effect of the fluctuations is clarified by considering the equation for the variance, which is trace(𝗖) = trace(ψψ†) = ψ†ψ:
  d(ψ†ψ)/dt = ψ†(𝗔 + 𝗔†)ψ + ϵξ(t)ψ†(𝗕 + 𝗕†)ψ.  (43)
If the operator 𝗕 + 𝗕 vanishes identically, as is the case for barotropic wind fluctuation, then the evolution of ψψ is not influenced by the fluctuations. Consider the ensemble average of (43) obtained by taking the trace of (37):
d⟨ψ†ψ⟩/dt = ⟨ψ†(𝗔 + 𝗔†)ψ⟩ + ϵ⟨η(t)ψ†(𝗕 + 𝗕†)ψ⟩,  (44)
which has the short autocorrelation limit
d⟨ψ†ψ⟩/dt = ⟨ψ†(𝗔 + 𝗔†)ψ⟩ + (ϵ²/ν)⟨ψ†(𝗕² + 𝗕†² + 2𝗕†𝗕)ψ⟩.  (45)
It is important to notice that the influence of operator fluctuations on the ensemble mean variance depends on the symmetrized operator 𝗕 + 𝗕† rather than on 𝗕 itself, indicating that the fluctuations contribute to second-moment perturbation growth through the hermitian part of the fluctuation operator. Through this term, fluctuations of sufficient amplitude generically render the covariance equation asymptotically unstable. For example, if fluctuation in the coefficient of Rayleigh friction is accounted for by 𝗕 = −1, then the ensemble average covariance equation in the short autocorrelation time limit becomes
d⟨𝗖⟩/dt = (−2r + 4ϵ²/ν)⟨𝗖⟩,  (46)
making the second moment unstable for fluctuations of rms magnitude ϵ > (rν/2)^{1/2}, with second-moment exponent λ₂ = −r + 2ϵ²/ν.
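This threshold can be checked directly in the scalar case, where the white-noise superoperator reduces to a number. The sketch below uses illustrative parameter values; the rate returned is the variance exponent, twice the amplitude exponent λ₂:

```python
import numpy as np

# Scalar sketch of the fluctuating Rayleigh friction example: A = -ik*U0 - r,
# B = -1.  In the white-noise limit the variance exponent is 2*(-r + 2*eps^2/nu),
# so the second moment destabilizes at eps = sqrt(r*nu/2).
k, U0, r, nu = 1.0, 1.0, 0.5, 1.0
A = np.array([[-1j * k * U0 - r]])
B = np.array([[-1.0 + 0j]])

def second_moment_rate(eps):
    # scalar white-noise superoperator: A + conj(A) + (eps^2/nu)*(B + conj(B))^2
    Lw = A + np.conj(A) + (eps**2 / nu) * (B + np.conj(B)) ** 2
    return Lw.real.item()

eps_c = np.sqrt(r * nu / 2)                      # predicted critical rms amplitude
assert abs(second_moment_rate(eps_c)) < 1e-12    # marginally stable at threshold
assert second_moment_rate(1.1 * eps_c) > 0       # unstable above
assert second_moment_rate(0.9 * eps_c) < 0       # stable below
```

Note that the advection term −ikU₀ cancels from the rate, as the phase fluctuation argument requires.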
Consider two waves, with wavenumbers k1 and k2, and fluctuation operator
𝗕 = diag(−ik₁, −ik₂).  (47)
In this example the fluctuation superoperators B and B² do not vanish, but the variance remains the same as that maintained by the mean operator, 𝗔, because 𝗕 + 𝗕† = 0, and the second-moment exponent is equal to the growth rate of the mean operator. However, the fluctuations may result in EOFs differing from those obtained in the absence of fluctuations.
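A minimal check of this two-wave statement, assuming the diagonal form 𝗕 = diag(−ik₁, −ik₂) for (47) and illustrative parameter values: the fluctuation superoperator is nonzero, yet the top eigenvalue of the uncertain superoperator matches that of the certain one.

```python
import numpy as np

# Two-wave sketch: A = diag(-ik1*U0 - r, -ik2*U0 - r), B = diag(-ik1, -ik2).
# B + B† = 0, so the second-moment exponent is unchanged by the fluctuations,
# even though the fluctuation superoperator Bs itself does not vanish.
k1, k2, U0, r, eps, nu = 1.0, 2.0, 1.0, 0.5, 0.4, 1.0
A = np.diag([-1j * k1 * U0 - r, -1j * k2 * U0 - r])
B = np.diag([-1j * k1, -1j * k2])
I = np.eye(2)

L0 = np.kron(np.conj(A), I) + np.kron(I, A)
Bs = np.kron(np.conj(B), I) + np.kron(I, B)
Lw = L0 + (eps**2 / nu) * Bs @ Bs

assert not np.allclose(Bs, 0)                        # superoperator does not vanish
assert np.isclose(np.linalg.eigvals(Lw).real.max(),
                  np.linalg.eigvals(L0).real.max())  # same second-moment exponent
```

The fluctuations here only lower the real parts of the cross-wave (i ≠ j) eigenvalues, which is how they can reshape the EOFs without changing the variance growth rate.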

c. Further properties of the Lyapunov superoperator

We wish to further analyze second-moment stability and gather tools needed to obtain optimal excitation structures for both deterministic and stochastic forcing of uncertain systems. First, we must collect some properties of the Lyapunov superoperator.

A general matrix could be advanced in time with any of the following: the certain Lyapunov superoperator Lc given by (11); the generalized Lyapunov superoperator, L, given by (35); or its white noise form, Lw, given by (36). The matrices of interest, however, are physically realizable covariance matrices that have the properties of being hermitian and positive definite. Care must be taken in the analysis that only physically realizable solutions to (37) or (38) are retained. Specifically, the individual eigenfunctions of the Lyapunov superoperator, L, are not necessarily either hermitian or positive definite, and physically realizable covariances must be composed of superpositions of eigenfunctions that do have these properties. For example, consider the optimal perturbation problem: determine the initial covariance matrix leading to maximum ensemble mean energy growth at a chosen time. This optimization is complicated by the fact that it must be done in the set of positive definite hermitian matrices, a convex subset of the set of complex matrices that does not form a subspace and as a result is not spanned by a linear basis.8

We require some basic properties of the superoperator L. First, L preserves the hermiticity of 𝗖. This is easily seen by noticing that the hermitian transpose 𝗖†(t) obeys the same dynamical equation as 𝗖(t), so the difference 𝗖d(t) = 𝗖(t) − 𝗖†(t) does as well; if initially 𝗖 is hermitian, that is, 𝗖d(0) = 0, then by necessity 𝗖d(t) = 0 for all times. Similarly, L preserves the antihermiticity of an antihermitian matrix. Second, L (or Lc) preserves the positivity of 𝗖. This property is easy to derive if we step back to the physical origin of the ensemble average equations. Consider a positive definite initial covariance matrix 𝗖(0). Decompose 𝗖(0) into its EOFs in the form
c(0) = Σᵢ λᵢ e*ᵢ ⊗ eᵢ,  (48)
where the λᵢ can be identified with the positive eigenvalues of 𝗖(0) and the eᵢ with the corresponding eigenvectors. Because a sum of positive multiples of positive definite matrices is positive definite, it is sufficient to prove that each e*ᵢ ⊗ eᵢ evolves to a positive definite matrix. But this must be so, for if 𝗣(t) denotes the propagator associated with a realization of the flow, the covariance at time t is c(t) = ⟨𝗣(t)*e*ᵢ ⊗ 𝗣(t)eᵢ⟩, which is positive definite.

Third, the eigenvalue of L with maximum real part is necessarily real, and as a result L satisfies the principle of exchange of stabilities (Drazin and Reid 1981). For if the maximally growing eigenvalue had an imaginary part, a positive definite initial covariance matrix would, with time, acquire negative diagonal entries, which is not possible.

A basic difference between the certain superoperator Lc and its generalized counterparts L is that while the former preserves the rank of the covariance matrix, the latter do not. Consider a sure initial state with a (necessarily) rank-one covariance. Each realization of the operator fluctuations yields a pure state, but under fluctuations the ensemble-averaged state becomes mixed, with covariance rank greater than one (see Fig. 1). This property of increase of the rank of the covariance can be traced to the 𝗕𝗖𝗕 term or its finite tc generalization in the ensemble average covariance equations (37)–(38).
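The rank increase, together with the preservation of hermiticity and positivity, can be illustrated numerically. The sketch below uses arbitrary non-commuting 𝗔 and 𝗕 (not the paper's operators) and a plain Taylor-series matrix exponential as an implementation shortcut:

```python
import numpy as np

# Sketch: evolve a pure (rank-one) initial covariance under the white-noise
# superoperator; it stays hermitian and positive semidefinite but becomes
# mixed (rank > 1) because the B C B† term injects variance in new directions.
n = 3
I = np.eye(n)
A = np.array([[-0.5, 0.3, 0.0],
              [0.0, -1.0, 0.3],
              [0.0, 0.0, -1.5]], dtype=complex)
B = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]], dtype=complex)   # does not commute with A
eps, nu = 0.5, 1.0

L0 = np.kron(np.conj(A), I) + np.kron(I, A)
Bs = np.kron(np.conj(B), I) + np.kron(I, B)
Lw = L0 + (eps**2 / nu) * Bs @ Bs

def expm_taylor(M, terms=60):
    # plain Taylor series for the matrix exponential (adequate at this scale)
    out, term = np.eye(len(M), dtype=complex), np.eye(len(M), dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

psi = np.array([0.0, 0.0, 1.0], dtype=complex)       # sure initial state
C0 = np.outer(psi, psi.conj())                       # rank-one covariance
c_t = expm_taylor(Lw * 1.0) @ C0.flatten(order="F")  # evolve one time unit
C_t = c_t.reshape((n, n), order="F")

assert np.allclose(C_t, C_t.conj().T)                # hermiticity preserved
w = np.linalg.eigvalsh(C_t)
assert w.min() > -1e-10                              # positivity preserved
assert np.sum(w > 1e-6 * w.max()) >= 2               # the state has become mixed
```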

d. Determining second-moment stability boundaries for an uncertain system

The mechanism underlying second-moment destabilization is transparent in the case of commuting 𝗔 and 𝗕. Although restrictive, commuting 𝗔 and 𝗕 does occur in the meteorological context: in the midlatitudes a time modulation of the jet strength is commonly observed, and if the mean jet is idealized as a velocity profile increasing linearly with height and the jet fluctuations are assumed to have the same form, then the operator 𝗔 + ϵ𝗕η(t) governing evolution of perturbations commutes with the mean operator, 𝗔. That 𝗔 and 𝗕 commute assures that the fluctuations do not change the sample (Lyapunov) stability of 𝗔 (Farrell and Ioannou 1999). However, perturbation energy, regarded as an ensemble average, is a second-moment quantity and may grow without bound.

For commuting 𝗔 and 𝗕 the generalized Lyapunov superoperator for all autocorrelation times is given by the asymptotic form of the operator, which is identical to the short autocorrelation time superoperator (36):
Lw = 𝗜 ⊗ 𝗔 + 𝗔* ⊗ 𝗜 + (ϵ²/ν)(𝗜 ⊗ 𝗕 + 𝗕* ⊗ 𝗜)².  (49)
Eigenanalysis of this operator is immediate; for if ϕᵢ are the eigenfunctions of 𝗔 and 𝗕 with eigenvalues λᵢ and μᵢ, respectively, the eigenfunctions of Lw are ϕ*ᵢ ⊗ ϕⱼ with eigenvalues
λᵢⱼ = λ*ᵢ + λⱼ + (ϵ²/ν)(μ*ᵢ + μⱼ)².  (50)
It follows that the maximally growing eigenfunction cmax is composed of the same eigenfunction from each of 𝗔 and 𝗕 and so has this common structure, that is, cmax = ϕ*α ⊗ ϕα. The greatest second-moment growth rate according to (50) is
λmax = 2 maxα {ℜ(λα) + (2ϵ²/ν)[ℜ(μα)]²}.  (51)
This demonstrates the generally valid property that the eigenvalue of L with greatest real part is itself real, and that for commuting 𝗔 and 𝗕 a stable mean operator 𝗔 with ℜ(λα) < 0 becomes second-moment unstable for
ϵ² > −νℜ(λα)/{2[ℜ(μα)]²}.  (52)
This shows that any stable mean operator 𝗔 is made second-moment unstable by sufficiently large fluctuations with structure matrix 𝗕 that commutes with 𝗔, requiring only that 𝗕 have an eigenvalue with nonvanishing real part. The structure that is most second-moment unstable is not necessarily the most unstable eigenfunction of 𝗔 but rather the eigenfunction of 𝗔 for which ℜ(λα) + (2ϵ²/ν)[ℜ(μα)]² is maximized.
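The commuting eigenvalue formula can be verified with diagonal (hence commuting) 𝗔 and 𝗕. The spectra below are arbitrary illustrative values, and the formula checked is the reconstruction λ*ᵢ + λⱼ + (ϵ²/ν)(μ*ᵢ + μⱼ)² of (50):

```python
import numpy as np

# Sketch: for diagonal A and B the superoperator Lw is diagonal, so its
# eigenvalues can be compared entry by entry with the commuting-case formula.
lam = np.array([-0.5 + 1.0j, -1.0 - 2.0j, -2.0 + 0.5j])  # illustrative spectrum of A
mu = np.array([0.3 - 1.0j, -0.2 + 0.5j, 0.1 + 2.0j])     # illustrative spectrum of B
A, B = np.diag(lam), np.diag(mu)
eps, nu = 0.4, 1.0
I = np.eye(3)

Bs = np.kron(np.conj(B), I) + np.kron(I, B)
Lw = np.kron(np.conj(A), I) + np.kron(I, A) + (eps**2 / nu) * Bs @ Bs

predicted = np.array([np.conj(li) + lj + (eps**2 / nu) * (np.conj(mi) + mj) ** 2
                      for li, mi in zip(lam, mu) for lj, mj in zip(lam, mu)])
eig = np.linalg.eigvals(Lw)
for p in predicted:
    assert np.min(np.abs(eig - p)) < 1e-8   # every predicted eigenvalue appears
```

Note that the i = j entries reproduce the growth rate 2{ℜ(λα) + (2ϵ²/ν)[ℜ(μα)]²} quoted in the text.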

As an example consider the second-moment stability of the inviscid Eady model with modulations of the shear described in Part I. The second-moment exponent as a function of zonal wavenumber, k, for the case l = 0, is shown in Fig. 2 for Gaussian shear fluctuations of rms magnitude ϵ = 1/3 and autocorrelation times tc = 1 and tc = 6. For comparison the growth rate of perturbations on the mean wind profile is also shown. The increase of the second-moment exponent occurs only in the unstable region of the operator. This is because for waves shorter than the short-wave cutoff the perturbation operator has an imaginary spectrum and from (50) commuting fluctuations cannot increase the growth rate of the certain operator. In the unstable region the fluctuating shear varies the growth rate, which produces an asymptotic second-moment exponent exceeding that of the mean operator. Because this is a commuting example, the growth rate of the second moment is the same for all autocorrelation times as that predicted by the equivalent white noise operator. The growth rates increase with autocorrelation time in accord with the increase of the equivalent white noise variance ϵ²tc.

When 𝗔 and 𝗕 do not commute the second-moment growth rates cannot be directly related to the growth rates of the constituent operators 𝗔 and 𝗕, and the asymptotic stability of the Lyapunov superoperator depends on the autocorrelation time of the fluctuations [cf. (33)]. Analysis of such cases requires explicit calculation of the stability of the generalized superoperator, but in realistic problems this calculation is intractable because the Lyapunov superoperator for a system of dimension n has dimension n² × n². We seek a method to efficiently estimate the critical fluctuation magnitude necessary to produce second-moment instability as well as the structure of the marginally stable covariance. Such a method is described below and demonstrated for the short autocorrelation form of the superoperator. With small adjustments it can be adapted to calculate the stability of the superoperator for any autocorrelation time provided that limt→∞ 𝗗(t) = 𝗗 exists so that the superoperator becomes asymptotically autonomous.

Denote by L0 the linear superoperator associated with the certain operator 𝗔,
L₀ = 𝗜 ⊗ 𝗔 + 𝗔* ⊗ 𝗜,  (53)
and assume that 𝗔 is asymptotically stable. Denote by L1 the linear superoperator associated with the uncertain fluctuation operator 𝗕,
L₁ = ½(𝗜 ⊗ 𝗕 + 𝗕* ⊗ 𝗜)²,  (54)
and compose from L0 and L1 the generalized Lyapunov superoperator
Lw = L₀ + α²L₁,  (55)
where α² ≡ 2ϵ²/ν. Because the eigenvalue with maximum real part of (55) is by necessity real, the generalized Lyapunov superoperator L at marginal stability must have a zero eigenvalue by the principle of exchange of stabilities. Exploiting this fact, the noise level, α²c, necessary for marginal stability can be determined by eigenanalysis of the superoperator Lu = −L₀⁻¹L₁. For if λmax is the maximum eigenvalue of Lu (necessarily real and positive), then the critical αc is given by α²c = 1/λmax. This follows because at marginal stability the eigenvalue with maximum real part of the superoperator,
Lw = L₀ + α²L₁ = L₀(𝗜 − α²Lu),  (56)
is zero, and consequently at marginal stability the eigenvalue of Lu with maximum real part is λmax = 1/α². The eigenvalues, λ, of Lu and its eigenfunctions, 𝗖, can be determined by solving the equation
−λ(𝗔𝗖 + 𝗖𝗔†) = 𝗕𝗖𝗕† + ½(𝗕²𝗖 + 𝗖𝗕†²).  (57)
The eigenfunctions of this equation consist of the positive definite covariances (the positivity is assured by the stability of 𝗔) with the property that the stochastic forcing of the mean operator [proportional to the lhs of Eq. (57)] required for statistical equilibrium is equal to the stochastic forcing produced by the equilibrium covariance under the influence of disturbance by fluctuations with structure 𝗕 [given by the rhs of Eq. (57)]. The marginally unstable eigenfunction can be found by solving this eigenproblem (it can readily be solved by the power method), with the advantage that this calculation does not require knowing the critical value of the fluctuation magnitude, ϵ = [ν/(2λmax)]^{1/2}.
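A small worked instance of this estimate can be sketched as follows. The identification L₁ = ½(𝗜 ⊗ 𝗕 + 𝗕* ⊗ 𝗜)² is an assumption consistent with Lw = L₀ + α²L₁ and α² = 2ϵ²/ν, and 𝗔 and 𝗕 are illustrative choices:

```python
import numpy as np

# Sketch of the marginal-stability estimate via Lu = -inv(L0) L1 for a
# small stable A and hermitian fluctuation structure B (illustrative choices).
n = 2
I = np.eye(n)
A = np.diag([-1.0, -2.0]).astype(complex)              # stable mean operator
B = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)  # hermitian fluctuation structure

L0 = np.kron(np.conj(A), I) + np.kron(I, A)
Bs = np.kron(np.conj(B), I) + np.kron(I, B)
L1 = 0.5 * Bs @ Bs

Lu = -np.linalg.solve(L0, L1)
lam_max = np.linalg.eigvals(Lu).real.max()             # top eigenvalue, real and positive
assert lam_max > 0

alpha2_c = 1.0 / lam_max                               # predicted critical noise level
top = lambda M: np.linalg.eigvals(M).real.max()
assert abs(top(L0 + alpha2_c * L1)) < 1e-8             # marginal: zero top eigenvalue
assert top(L0 + 0.5 * alpha2_c * L1) < 0               # stable below the critical level
```

For this particular 𝗔 and 𝗕 the top eigenvalue of Lu works out to 3/4, so α²c = 4/3, which the asserts confirm.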

e. Solving the generalized Lyapunov equation for the statistically steady covariance in an uncertain system

For fluctuation levels, ϵ, for which the uncertain Lyapunov operator is stable we can determine the stationary response covariance, 𝗖, maintained by additive stochastic forcing with forcing covariance 𝗤. This stationary response covariance satisfies the generalized Lyapunov equation (cf. 41):
[𝗔 + (ϵ²/ν)𝗕²]𝗖 + 𝗖[𝗔 + (ϵ²/ν)𝗕²]† + (2ϵ²/ν)𝗕𝗖𝗕† + 𝗤 = 0.  (58)
This equation can be written using the definitions of L0 and L1 introduced earlier as
(L₀ + α²L₁)c = −q.  (59)
We proceed to solve (59) by expanding 𝗖 in the perturbation series:
𝗖 = 𝗖₀ + α²𝗖₁ + α⁴𝗖₂ + ⋯  (60)
The terms in this series can be evaluated recursively by solving the Lyapunov equations:
L₀c₀ = −q,  L₀cₖ = −L₁cₖ₋₁,  k ≥ 1.  (61)
The solution of each of these Lyapunov equations is assured to yield physically realizable covariances, that is, hermitian and positive definite, because 𝗔 is assumed stable. Therefore, the stationary covariance obtained by summing the series, if the series converges, is also hermitian and positive definite. This stationary covariance is
c = −Σ_{k≥0} (α²Lu)ᵏ L₀⁻¹q = −(𝗜 − α²Lu)⁻¹L₀⁻¹q,  (62)
where, as previously, Lu = −L₀⁻¹L₁. The series is assured to converge if the fluctuation variance α² satisfies α²λmax < 1, where λmax is the maximal eigenvalue of Lu, which is real and positive because this system obeys the principle of exchange of stabilities. This expansion confirms that the critical fluctuation magnitude leading to marginal stability is α²c = 1/λmax and that for α < αc a stationary state is obtained under stochastic forcing.
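The recursion and its resummation can be sketched numerically with illustrative 𝗔, 𝗕, and a noise level below the convergence threshold:

```python
import numpy as np

# Sketch of the expansion (60)-(62): solve (L0 + alpha^2 L1) c = -q by the
# recursion L0 c0 = -q, L0 c_k = -L1 c_{k-1}, summing alpha^{2k} c_k terms,
# and compare the summed series with a direct solve.
n = 2
I = np.eye(n)
A = np.diag([-1.0, -2.0]).astype(complex)
B = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

L0 = np.kron(np.conj(A), I) + np.kron(I, A)
Bs = np.kron(np.conj(B), I) + np.kron(I, B)
L1 = 0.5 * Bs @ Bs
alpha2 = 0.5                                 # below the critical value for this system

q = np.eye(n).flatten(order="F")             # unit forcing covariance, vectorized
term = np.linalg.solve(L0, -q)               # c_0
c_series = term.copy()
for _ in range(80):                          # accumulate alpha^{2k} c_k
    term = alpha2 * np.linalg.solve(L0, -(L1 @ term))
    c_series = c_series + term

c_direct = np.linalg.solve(L0 + alpha2 * L1, -q)
assert np.allclose(c_series, c_direct)       # resummed series matches direct solve

C = c_series.reshape((n, n), order="F")
assert np.allclose(C, C.conj().T) and np.linalg.eigvalsh(C).min() > 0
```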

Although the analysis above exploited the assumption of delta-correlated fluctuations, the methods presented can be used to obtain similar results for more general forms of fluctuation.

4. Optimal perturbations and stochastic optimal forcing for uncertain systems

In order to obtain optimal perturbations under uncertain dynamics we must first specify a norm to measure the covariances. A natural norm is the variance, the trace of the covariance:
‖𝗖‖ ≡ trace(𝗖).  (63)
While the trace does not define a norm for general matrices, it does define a norm on the physically realizable positive definite hermitian matrices, for which trace(𝗖) ≥ 0 always, with equality only when 𝗖 is identically zero. This norm is chosen in part because for covariances produced by pure states it is identical to the square of the Euclidean norm of the state, which in energy coordinates is the perturbation energy. Because any sum of positive definite covariances is also positive definite, this norm is linear on sums of covariances, that is, ‖𝗖 + 𝗗‖ = ‖𝗖‖ + ‖𝗗‖, while for covariance differences it satisfies the triangle inequality, ‖𝗖 − 𝗗‖ ≤ ‖𝗖‖ + ‖𝗗‖.

Recall that an n²-vector c corresponds to the n × n covariance matrix 𝗖. The trace norm of 𝗖 is evaluated on the vector covariance c as ‖c‖ = Tc, where the trace superoperator T is defined as the 1 × n² matrix with entries T₁ᵢ = δᵢ,(k−1)n+k for k = 1, … , n. Clearly, the trace norm of the vector c is distinct from its Euclidean norm. It is important to note that this linear norm on the covariances is not related to an inner product, and as a result orthogonality between covariances is not defined.
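The trace superoperator is simple to construct explicitly; a sketch (0-indexed, whereas the text indexes from 1) with an arbitrary hermitian matrix:

```python
import numpy as np

# Sketch: T is the 1 x n^2 row vector that picks the diagonal entries of the
# covariance out of its vectorized (column-stacked) form.
n = 3
T = np.zeros((1, n * n))
for k in range(n):
    T[0, k * n + k] = 1.0        # positions of the diagonal of C within vec(C)

C = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 4.0]])
c = C.flatten(order="F")         # vec convention: stack columns
assert np.isclose((T @ c).item(), np.trace(C))   # T c recovers the variance
```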

A norm that defines an inner product on the covariances, and with it orthogonality, is the Frobenius norm. The Euclidean norm on vector covariances, ‖c‖F = (c†c)^{1/2}, corresponds to the Frobenius norm on covariance matrices,
‖𝗖‖F = [trace(𝗖†𝗖)]^{1/2} = (Σᵢⱼ |Cᵢⱼ|²)^{1/2}.  (64)
For covariance matrices of rank one, the associated vector covariance is c = ψ* ⊗ ψ for a pure state ψ, and in such cases the Frobenius norm of 𝗖 is equal to the trace norm of 𝗖, and the dot product between two rank-one covariance matrices is equal to the square of the magnitude of the dot product of the corresponding state vectors.

In the previous sections we have determined the superoperator that advances an initial covariance c(0) to the covariance t units of time later [c(t) = P(t)c(0)] and also the superoperator that produces the steady-state covariance c(∞) from the stochastic forcing covariance q [c(∞) = {lim_{t→∞} ∫₀ᵗ P(t, s) ds}q], whenever this limit is defined. In all cases we have a mapping from an initial covariance cᵢ to a final covariance cf (cf = Mcᵢ), and the optimization problem is to determine the initial cᵢ of unit magnitude that produces the cf of largest magnitude. If M is the propagator associated with the Lyapunov superoperator, we interpret the optimal as the initial covariance producing the largest final covariance cf. If M is the mapping from the forcing structure matrix q to the equilibrium-maintained covariance c(∞), then the optimal identifies the forcing structure matrix producing the largest maintained variance.

If covariances are measured in the Frobenius norm, ‖𝗖‖F, then the optimization problem is solved by singular value decomposition of the mapping from initial to final covariances, in which the mapping is factored as M = 𝗨Σ𝗩†. If the diagonal elements of Σ, σᵢ, are ordered in descending magnitude, then the first column of 𝗩 is the optimal covariance cᵢ (or the optimal forcing q for the stochastic optimal) in the Frobenius norm and the optimal amplification is the first singular value:
σ₁ = max_{‖𝗖ᵢ‖F = 1} ‖𝗖f‖F.  (65)
Note that this optimal covariance is not necessarily rank one so that the covariance need not be associated with a pure state and the optimal growth need not be realizable by a single integration of the model. Such a mixed optimal covariance can be physically prepared only by separate initializations of the system with the pure states that give rise to this covariance and subsequently forming the ensemble covariance of these simulations. Although obtaining the optimal covariance in the Frobenius norm is straightforward, operationally this norm is less physically relevant as a measure of the covariance than is the trace because it maximizes the square sum of all the elements of the covariance matrix, which is not commonly associated with a physical quantity, while the trace of the covariance matrix is the square sum of the diagonal elements and can be identified with the physical quantity variance.

It is useful to consider SVD of the composite operator formed by the mapping M and the trace superoperator T (TM, rather than the mapping M alone). Because TM projects n²-dimensional covariances to the reals it has only one nonzero singular value; we denote this singular value by σ and the associated singular vector by υ, so that TM = συ†. This SVD reveals that the action of TM is just taking the inner product with υ. The covariance matrix corresponding to υ, V, almost succeeds in identifying the optimal initial covariance: among initial covariances with unit Frobenius norm (‖𝗖ᵢ‖F = 1) it is the one producing maximum final variance, max(‖𝗖f‖). However, it does not identify the positive definite hermitian covariance of unit variance (‖𝗖ᵢ‖ = 1) leading to maximum final variance. Remarkably, this optimal covariance is the rank-one covariance formed from the first EOF of V, and the growth is σα₁, with α₁ the first eigenvalue of V. This result, which does not obtain unless V is both hermitian and positive definite, is proven in appendix B. In appendix C an alternative proof by construction of the optimal covariance is given that has the advantage of being suitable for computation of the optimals in large systems.

In certain dynamics optimal perturbations can be ordered according to growth, or stochastic optimals according to contribution to variance, but such an ordering cannot be performed with covariances, as orthogonality is not defined for the trace norm because the trace norm is not associated with an inner product. However, we can form an orthogonal set of optimal perturbations arising from pure state covariances and in this way identify a set of optimal perturbations for uncertain dynamics. The second pure state initial condition, orthogonal to the first, that produces the second largest growth is the second EOF of V, and its growth is σα₂. Proceeding in this way we obtain an orthonormal basis ordered according to variance amplification, analogous to that obtained for deterministic propagators by SVD of the deterministic propagator. However, note that we do not obtain in this way a second optimal covariance matrix, but only the second optimal pure initial state, because, as previously mentioned, there is no meaning to a second optimal covariance: the set of positive definite hermitian matrices does not form an inner product space in the trace norm.
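The construction of the rank-one optimal can be sketched end to end. Here 𝗔, 𝗕, and the optimization time are illustrative; W below is the unnormalized covariance associated with υ (so V = W/‖W‖F and σ = ‖W‖F, making the maximum growth λmax(W) = σα₁); and the propagator is a plain Taylor matrix exponential:

```python
import numpy as np

# Sketch of the rank-one optimal: reshape the row vector T M into the
# hermitian matrix W; the top EOF of W is the optimal pure initial state.
n = 3
I = np.eye(n)
A = np.array([[-0.5, 0.3, 0.0],
              [0.0, -1.0, 0.3],
              [0.0, 0.0, -1.5]], dtype=complex)
B = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]], dtype=complex)
eps, nu, t = 0.5, 1.0, 1.0

L0 = np.kron(np.conj(A), I) + np.kron(I, A)
Bs = np.kron(np.conj(B), I) + np.kron(I, B)
Lw = L0 + (eps**2 / nu) * Bs @ Bs

def expm_taylor(M, terms=60):
    out, term = np.eye(len(M), dtype=complex), np.eye(len(M), dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

M = expm_taylor(Lw * t)                         # covariance propagator
T = np.zeros((1, n * n)); T[0, ::n + 1] = 1.0   # trace superoperator
row = (T @ M).ravel()                           # final variance = row . vec(C0)

W = np.conj(row.reshape((n, n), order="F"))     # trace(C_f) = <W, C0>_Frobenius
assert np.allclose(W, W.conj().T)               # W is hermitian
evals, evecs = np.linalg.eigh(W)
C_opt = np.outer(evecs[:, -1], evecs[:, -1].conj())   # rank-one optimal, unit trace
best = (row @ C_opt.flatten(order="F")).real
assert np.isclose(best, evals[-1])              # growth equals top eigenvalue of W

# no unit-variance positive definite competitor does better
rng = np.random.default_rng(1)
for _ in range(25):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Ct = X @ X.conj().T
    Ct /= np.trace(Ct).real
    assert (row @ Ct.flatten(order="F")).real <= best + 1e-10
```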

Consider the statistically steady ensemble average covariance of a stochastically forced system, which, as discussed earlier, is equal under the assumption of ergodicity to the time average covariance of a single realization of the system as would be accumulated by a time-averaging instrument. We have seen that the structure of stochastic forcing producing the largest maintained variance is a pure state given by the first EOF of the covariance associated with TM, with M = lim_{t→∞} ∫₀ᵗ P(t, s) ds.

Consider the problem of determining ensemble forecast perturbations when the tangent linear forecast error system is uncertain, perhaps because of parameter uncertainty (Palmer 2001). One method for choosing ensemble members is to take optimal perturbations (Molteni et al. 1996; Ehrendorfer and Tribbia 1997). In the presence of uncertainty we have shown that the optimal producing greatest expected growth is a covariance of rank one. This implies that there is a single optimal perturbation for the uncertain forecast system that produces the greatest expected error growth; subsequent pure state optimals could be chosen to complement the basis set of optimal perturbations that compose the ensemble.

5. Second-moment growth and stochastically maintained variance: Optimals and stochastic optimals in the uncertain Eady problem

Consider the Eady problem presented in Part I, but for simplicity with wind fluctuations confined to the form u(z) = z² and rms amplitude ϵ = 1/3. Take autocorrelation times tc = 2 and tc = 6 in order to assess the role of finite correlations and the range of validity of the equivalent white noise approximation (36) to the exact uncertain Lyapunov superoperator (35). For simplicity take Rayleigh damping with damping timescale chosen to stabilize the model so that it reaches a statistically steady state under stochastic forcing.

Consider first determining the perturbation producing the largest expected variance (trace) of 𝗖(t) over all matrices 𝗖(0) with unit initial variance. The map that connects 𝗖(t) and 𝗖(0) is the uncertain propagator P(t), and the optimal initial covariance matrix we seek is the rank-one initial covariance formed by the first EOF of the covariance associated with the right singular vector, V, of TP(t), where T is the trace superoperator. The variance growth over t achieved by this optimal perturbation has magnitude σα₁, where σ is the singular value of TP(t) and α₁ is the largest eigenvalue of the covariance associated with the singular vector V. The remaining set of mutually orthogonal optimal perturbations each produces variance growth of σαᵢ, where the αᵢ are the remaining eigenvalues of V. While the optimal initial covariance matrix is of rank one, at later times the evolved optimal covariance matrix becomes mixed and its structure can be described using EOF analysis.

Optimal variance growth as a function of optimizing time for the Eady model is shown in Fig. 3. The optimal growth obtained using the exact covariance dynamics [Eq. (37)] is compared with that obtained using the equivalent white noise dynamics [Eq. (38)] that is formally valid for sufficiently short autocorrelation times, tc. A further comparison with the optimal growth attained by the mean zonal flow Eady operator in the absence of fluctuations is also shown. Note that fluctuations increase the expected variance growth. While the equivalent white noise approximation overestimates the growth potential, it identifies the correct structure both for the optimal perturbation and the evolved optimal covariance. This is shown for the t = 4 optimal and the first EOF of the evolved optimal covariance at the optimizing time in Figs. 4 and 5, respectively. Note that optimals in the fluctuating Eady model are concentrated near the upper boundary where fluctuations of the shear are largest.

The inaccuracy in the optimal growth estimate made by the white noise approximation is not due primarily to overestimation of the fluctuation operator magnitude, ϵ²B²/ν, but rather to assuming that this value is attained from the start. We know from the discussion of the ensemble mean dynamics [cf. Eq. (3)] that the ensemble mean correction requires a time of order tc to attain its asymptotic form. A more accurate approximation of optimal growth for short times is obtained if the ensemble mean fluctuation correction is allowed to build up over time according to
dc/dt = [𝗜 ⊗ 𝗔 + 𝗔* ⊗ 𝗜 + (ϵ²/ν)(1 − e^{−νt})(𝗜 ⊗ 𝗕 + 𝗕* ⊗ 𝗜)²]c,  (66)
which would be exact if A and B commuted. Remarkably, the optimal growth obtained from the propagator associated with (66) accurately tracks the optimal growth obtained from the propagator of the exact superoperator (35) (cf. Fig. 3). The accuracy of the growth calculated from (66) can also be seen in Fig. 6, which shows the expected optimal energy growth attained in four time units as a function of wavenumber.

While in an uncertain system we cannot determine a set of orthogonal covariances ordered according to growth potential, we can determine a set of orthogonal optimal perturbations ordered according to growth potential. The growth resulting from these optimal perturbations is shown for the t = 4 optimization in the left panel of Fig. 7.

While the optimal covariance has rank one, and therefore can be identified up to a phase with a pure optimal perturbation, the evolved covariance is mixed by the uncertain dynamics. The rank of the covariance is approximately equal to the number of EOFs of the covariance with appreciable variance. In the right panel of Fig. 7 the EOF spectrum of the evolved optimal covariance for the t = 4 optimal is shown at the optimizing time. Under uncertain dynamics the sure initial covariance becomes mixed, with approximately rank two.

We turn now to the steady-state variance maintained in the uncertain Eady problem under spatially and temporally white forcing. Although in this example operator fluctuations only marginally increase the variance, the variance structure shifts markedly to the upper level (Fig. 8). The origin of upper-level short waves in the atmosphere lacks persuasive theoretical explanation and we are pursuing this mechanism of stochastic destabilization.

We seek the optimal forcing covariance matrix with the property that when the associated structure is forced white in time the maximum variance is maintained; this structure is called the stochastic optimal. Finding the stochastic optimal requires, according to (40), obtaining the optimal for the map M = −Lw⁻¹, where Lw is the asymptotic superoperator of the covariance dynamics for short autocorrelation times. The structure of the first stochastic optimal for the exact operator and for the equivalent white noise approximation to this operator is shown in Fig. 9. The stochastic optimals in this uncertain Eady model have largest amplitude at upper levels where shear fluctuations are largest. The variance maintained by the stochastic optimals is shown in Fig. 10 (left panel), and the decomposition of the maintained covariance matrix into its EOF components is shown in Fig. 10 (right panel).

While in this example the stochastic optimals were obtained by singular value decomposition of L⁻¹, this is computationally expensive, and there is an alternative method for finding these optimals by eigenanalysis of the generalized back Lyapunov equation in direct analogy with the method of solution used for certain dynamics. This method and the back Lyapunov equation for uncertain dynamics are described in appendix C.

6. Discussion and conclusions

In Part I of this paper we examined ensemble mean stability of uncertain systems from the perspective of generalized stability theory (GST) and in Part II we addressed second-moment stability properties of uncertain systems from this perspective. We first expressed the equations governing covariance dynamics of certain operators in tensor product notation and then extended these to uncertain operators. We obtained stability boundaries for second-moment growth in uncertain systems in terms of the amplitude and structure of operator fluctuations. A physical implication of these results is that for a stochastically forced linear system to have finite maintained variance its parameter fluctuations must lie within these bounds.

Analysis of the stochastically forced uncertain system allows extension of previous results on the variance, fluxes, and structures in statistically steady turbulent flows (cf. Farrell and Ioannou 1994, 1995, 1998) to take account of uncertainty in the operator.

The time mean covariance of a stochastically forced uncertain system can be obtained from second-moment analysis if an ergodic assumption connecting the ensemble and time means of stable forced systems is made. From this perspective second-moment ensemble mean stability can be interpreted as a necessary condition for the existence of a bounded forced state in an uncertain system. This is because second-moment unstable systems, even if sample stable, support unbounded excursions in variance that prevent establishing a finite forced variance regime.

Optimal perturbations play a major role in GST and for uncertain systems we obtain a method for finding the initial condition leading to greatest expected variance growth at any chosen time as well as the stochastic forcing structures that maintain the greatest variance. Remarkably, one pure initial condition is found constructively that maximizes expected growth in an uncertain system and an analogous pure forcing structure is found that maximizes the expected variance.

One application of this work is a method for taking account of forecast error system uncertainty in choosing forecast ensembles by substituting uncertain optimals for certain optimals in the ensemble. Another application is to the problem of explaining the origin of upper-level short waves in the midlatitude jet, which are dynamically similar to the upper-level structures in the fluctuating jet example (Fig. 8).

Acknowledgments

This work was supported by NSF ATM-0123389 and by ONR N00014-99-1-0018.

REFERENCES

  • Ambaum, M. H. P., B. J. Hoskins, and D. B. Stephenson, 2001: Arctic Oscillation or North Atlantic Oscillation? J. Climate, 14, 3495–3507.

  • Arnold, L., 1992: Stochastic Differential Equations: Theory and Applications. Krieger, 228 pp.

  • Drazin, P. G., and W. H. Reid, 1981: Hydrodynamic Stability. Cambridge University Press, 527 pp.

  • Ehrendorfer, M., and J. J. Tribbia, 1997: Optimal prediction of forecast error covariances through singular vectors. J. Atmos. Sci., 54, 286–313.

  • Farrell, B. F., and P. J. Ioannou, 1994: A theory for the statistical equilibrium energy and heat flux produced by transient baroclinic waves. J. Atmos. Sci., 51, 2685–2698.

  • Farrell, B. F., and P. J. Ioannou, 1995: Stochastic dynamics of the midlatitude atmospheric jet. J. Atmos. Sci., 52, 1642–1656.

  • Farrell, B. F., and P. J. Ioannou, 1996: Generalized stability theory. Part I: Autonomous operators. J. Atmos. Sci., 53, 2025–2041.

  • Farrell, B. F., and P. J. Ioannou, 1998: Perturbation structure and spectra in turbulent channel flow. Theor. Comput. Fluid Dyn., 11, 215–227.

  • Farrell, B. F., and P. J. Ioannou, 1999: Perturbation growth and structure in time-dependent flows. J. Atmos. Sci., 56, 3622–3639.

  • Farrell, B. F., and P. J. Ioannou, 2000: Perturbation dynamics in atmospheric chemistry. J. Geophys. Res., 105 (D7), 9303–9320.

  • Farrell, B. F., and P. J. Ioannou, 2002: Perturbation growth and structure in uncertain flows. Part I. J. Atmos. Sci., 59, 2629–2646.

  • Horn, R. A., and C. R. Johnson, 1991: Topics in Matrix Analysis. Cambridge University Press, 607 pp.

  • Hughston, L. P., R. Jozsa, and W. K. Wootters, 1993: A complete classification of quantum ensembles having a given density matrix. Phys. Lett., 183A, 14–18.

  • Molteni, F., R. Buizza, T. N. Palmer, and T. Petroliagis, 1996: The ECMWF ensemble prediction system: Methodology and validation. Quart. J. Roy. Meteor. Soc., 122, 73–119.

  • Palmer, T. N., 2001: A nonlinear dynamical perspective on model error: A proposal for non-local stochastic-dynamic parameterization in weather and climate prediction models. Quart. J. Roy. Meteor. Soc., 127, 279–304.

  • Sardeshmukh, P., C. Penland, and M. Newman, 2001: Rossby waves in a stochastically fluctuating medium. Progress in Probability, P. Imkeller and J. S. von Storch, Eds., Vol. 49, Birkhauser Verlag, 369–384.

  • Schrödinger, E., 1936: Probability relations between separated systems. Proc. Cambridge Philos. Soc., 32, 446–452.

  • Van Huffel, S., and J. Vandewalle, 1991: The Total Least Squares Problem: Computational Aspects and Analysis. SIAM, 300 pp.

  • Van Kampen, N. G., 1992: Stochastic Processes in Physics and Chemistry. Elsevier, 465 pp.

APPENDIX A

Basic Properties of the Kronecker (Tensor) Product

The tensor or Kronecker product of the k × l matrix 𝗔 with the m × n matrix 𝗕 is the km × ln matrix 𝗔 ⊗ 𝗕 defined as follows:
\[
\mathsf{A} \otimes \mathsf{B} =
\begin{pmatrix}
a_{11}\mathsf{B} & a_{12}\mathsf{B} & \cdots & a_{1l}\mathsf{B} \\
a_{21}\mathsf{B} & a_{22}\mathsf{B} & \cdots & a_{2l}\mathsf{B} \\
\vdots & \vdots & \ddots & \vdots \\
a_{k1}\mathsf{B} & a_{k2}\mathsf{B} & \cdots & a_{kl}\mathsf{B}
\end{pmatrix}.
\tag{A1}
\]
Two properties of the tensor product will be used repeatedly:
\[
(\mathsf{A} \otimes \mathsf{B})(\mathsf{C} \otimes \mathsf{D}) = (\mathsf{A}\mathsf{C}) \otimes (\mathsf{B}\mathsf{D}),
\qquad
(\mathsf{A} \otimes \mathsf{B})^{\dagger} = \mathsf{A}^{\dagger} \otimes \mathsf{B}^{\dagger}.
\tag{A2}
\]
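Both identities are easily checked numerically. The sketch below uses NumPy's `np.kron`; the matrices and their sizes are arbitrary illustrative choices, selected only so that the products 𝗔𝗖 and 𝗕𝗗 are defined.

```python
# Numerical check of the two Kronecker-product identities used in the text:
# the mixed-product property (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), and the
# distribution of the conjugate transpose, (A ⊗ B)† = A† ⊗ B†.
import numpy as np

rng = np.random.default_rng(0)

def cmat(m, n):
    """Random complex m x n matrix (illustrative only)."""
    return rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

A, C = cmat(3, 4), cmat(4, 2)   # shapes chosen so that AC is defined
B, D = cmat(2, 5), cmat(5, 3)   # shapes chosen so that BD is defined

# Mixed-product property
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# Conjugate transpose distributes over the tensor product
assert np.allclose(np.kron(A, B).conj().T, np.kron(A.conj().T, B.conj().T))

print("Kronecker identities verified")
```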

APPENDIX B

Construction of the Optimal Covariance

Consider the singular value decomposition of the composition of the trace superoperator T with the mapping M: TM = 𝗨Σ𝗩†, in which 𝗩 is a single column. We will show that the first singular vector 𝗩 is the optimal initial covariance that results in maximum variance after mapping by M. This shows that the optimal is physically realizable and necessarily rank one.

First, we establish that 𝗩 is hermitian. If 𝗩 were not hermitian, it could be decomposed into its hermitian and antihermitian parts. Because the antihermitian part has zero trace under the mapping M, a nonhermitian 𝗩 would produce the same variance as its hermitian part but would have a larger Frobenius norm. Therefore, the optimal 𝗩 is hermitian.

Second, we establish that 𝗩 must be positive definite. Consider the eigendecomposition of 𝗩 in the orthonormal basis of its eigenvectors ei, ordered by decreasing magnitude of their (real) eigenvalues αi. In vector form this eigendecomposition can be written as
\[
\mathbf{v} = \sum_{k} \alpha_{k}\, \mathbf{e}_{k}^{*} \otimes \mathbf{e}_{k}.
\tag{B1}
\]
If 𝗩 were not positive definite, some of the αi would be negative, and 𝘃 could then be written as the difference of two positive definite matrices, 𝗩 = 𝗩₊ − 𝗩₋, by partitioning the summation over the positive and negative eigenvalues. The action of the linear dynamics does not affect this partitioning, because each positive definite hermitian matrix is mapped to a positive definite hermitian matrix. But then we reach a contradiction: 𝗩 is the covariance that produces the largest growth, yet if it were not positive definite, a higher energy would be attained by either 𝗩₊ or 𝗩₋ alone (because the diagonal elements of a positive definite matrix are by necessity positive). We conclude that the optimal initial covariance that maximizes the final covariance is by necessity hermitian and positive definite.

Third, we show that the optimal covariance is necessarily a rank-one covariance, that is, a covariance produced by a sure initial condition, and that the optimal is the leading EOF of V. Moreover, the EOFs can be shown to order all pure state covariances according to the variance that each produces in the final state.

That the optimal must be a pure state is clear from an argument based on superposition.B1 A general unit-variance realizable initial covariance 𝗖i (‖𝗖i‖ = 1) can be decomposed into its orthonormal EOFs fk, in the vector form ci = Σk γk ck, where ck = f*k ⊗ fk is the rank-one vector covariance associated with each EOF. Because trace(𝗖k) = 1, it follows that Σk γk = 1. The final covariance will be cf = Σk γk Mck, and its total variance will be the linear sum of the corresponding trace(Mck). Among the covariances formed by the EOFs, let cα = f*α ⊗ fα be the initial covariance of unit variance that leads to the largest final variance, trace(Mcα); then, because Σk γk = 1, at least as high a growth is obtained by cα as by 𝗖i itself.
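The superposition argument can be illustrated numerically. In the sketch below the propagator 𝗠 is an arbitrary random complex matrix (not the paper's Eady-model operator): the final variance trace(𝗠𝗖𝗠†) is linear in the initial covariance 𝗖, so over unit-trace covariances the maximum is attained by a rank-one (pure-state) covariance, namely the leading eigenvector of 𝗠†𝗠.

```python
# Final variance trace(M C M†) is linear in C; over unit-trace positive
# definite C the optimum is the pure state built from the leading
# eigenvector of M†M. M is an arbitrary illustrative propagator.
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def final_variance(C):
    return np.trace(M @ C @ M.conj().T).real

# A random mixed (unit-trace, positive definite) initial covariance
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C_mixed = X @ X.conj().T
C_mixed /= np.trace(C_mixed).real

# Linearity: variance of the mixture = mixture of the pure-state variances
gam, F = np.linalg.eigh(C_mixed)
pure = [final_variance(np.outer(F[:, k], F[:, k].conj())) for k in range(n)]
assert np.isclose(final_variance(C_mixed), np.dot(gam, pure))

# The optimum over all unit-trace covariances is rank one
w, V = np.linalg.eigh(M.conj().T @ M)
C_opt = np.outer(V[:, -1], V[:, -1].conj())
assert final_variance(C_opt) >= final_variance(C_mixed)
assert np.isclose(final_variance(C_opt), w[-1])
print("pure-state optimum verified")
```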

Having established that the optimal initial condition is rank one, consider a general rank-one realizable initial covariance produced by the sure state ψi = Σm βm em, where the em are the orthonormal EOFs of 𝗩 and the βm are undetermined coefficients satisfying Σm |βm|² = 1, so that the initial covariance,
\[
\mathbf{c}_{i} = \boldsymbol{\psi}_{i}^{*} \otimes \boldsymbol{\psi}_{i}
= \sum_{m,n} \beta_{m}^{*}\beta_{n}\, \mathbf{e}_{m}^{*} \otimes \mathbf{e}_{n},
\tag{B2}
\]
that is generated from it has unit trace. Consider the action of TM on ci. Using (B1) and the identity (𝗔 ⊗ 𝗕)† = 𝗔† ⊗ 𝗕†, the mapping TM can be written as
\[
T M = \sigma\, \mathbf{v}^{\dagger}
= \sigma \sum_{k} \alpha_{k} \left( \mathbf{e}_{k}^{*} \otimes \mathbf{e}_{k} \right)^{\dagger},
\tag{B3}
\]
because the EOF coefficients αk are real (and positive). Note that because of the orthonormality of the eigenvectors ek, for any k ≠ l the vectors e*k ⊗ el are perpendicular to 𝘃 in the Euclidean vector inner product [cf. Eq. (B1)]. We thus obtain
\[
T M\, \mathbf{c}_{i} = \sigma \sum_{k} \alpha_{k} |\beta_{k}|^{2} \le \sigma \alpha_{1},
\tag{B4}
\]
because Σk |βk|² = 1 and αk > 0 for all k. The maximum growth is attained by choosing βk = 0 for k > 1 and β1 = 1; that is, it is attained by the first EOF of 𝗩, and the variance growth is σα1.
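The vector (Kronecker) form of a covariance used in (B1) and (B2), and the orthogonality invoked above, can be verified directly. In the sketch below (arbitrary random state and basis; column-major `vec` convention assumed), the vector covariance of 𝗖 = ψψ† is kron(ψ*, ψ), and for an orthonormal basis {ek} the cross terms e*k ⊗ el with k ≠ l are perpendicular to a vector of the form (B1).

```python
# vec/Kronecker correspondence: kron(conj(psi), psi) is the column-major
# vec of the rank-one covariance psi psi†, and kron(conj(e_k), e_l) for
# k != l is orthogonal to v = sum_k alpha_k kron(conj(e_k), e_k).
import numpy as np

rng = np.random.default_rng(2)
n = 4
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)

C = np.outer(psi, psi.conj())                 # rank-one covariance psi psi†
v = np.kron(psi.conj(), psi)                  # its vector (Kronecker) form
assert np.allclose(v, C.flatten(order="F"))   # same object, vec'd column-major

# Orthonormal basis e_k from the QR factorization of a random complex matrix
E, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
alpha = rng.standard_normal(n)
vB = sum(alpha[k] * np.kron(E[:, k].conj(), E[:, k]) for k in range(n))

# kron(conj(e_k), e_l) with k != l is perpendicular to vB (Euclidean inner product)
assert abs(np.vdot(np.kron(E[:, 0].conj(), E[:, 1]), vB)) < 1e-10
print("vec/Kronecker correspondence verified")
```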

APPENDIX C

The Back Lyapunov Equation for Uncertain Dynamics and Construction of the Optimals

Consider the system
\[
\frac{d\boldsymbol{\psi}}{dt} = \mathsf{A}(t)\,\boldsymbol{\psi} + \mathbf{F}\,\xi(t),
\tag{C1}
\]
in which the asymptotically stable certain operator 𝗔(t) is stochastically forced by a temporally white scalar noise process ξ(t). The stochastic optimal is the forcing vector F of unit Euclidean norm that produces the greatest mean variance. First, notice that because ψ(t) = ∫₀ᵗ 𝗣(t, s)Fξ(s) ds, where 𝗣(t, s) is the propagator, the variance at time t is
\[
\left\langle \boldsymbol{\psi}^{\dagger}(t)\,\boldsymbol{\psi}(t) \right\rangle
= \mathbf{F}^{\dagger} \left[ \int_{0}^{t} \mathsf{P}^{\dagger}(t,s)\,\mathsf{P}(t,s)\, ds \right] \mathbf{F}.
\tag{C2}
\]
It is apparent from this expression that the eigenvector of the hermitian matrix
\[
\mathsf{S}(t) = \int_{0}^{t} \mathsf{P}^{\dagger}(t,s)\,\mathsf{P}(t,s)\, ds,
\tag{C3}
\]
with largest eigenvalue is the forcing structure that produces the greatest variance at time t. The other eigenvectors of 𝗦(t) complete the set of mutually orthogonal forcings that can be ordered according to the variance that each produces at time t.
The stochastic optimal is obtained by eigenanalysis of 𝗦 = lim_{t→∞} 𝗦(t). This limit can be easily obtained by noting that 𝗦(t) satisfies the equation
\[
\frac{d\mathsf{S}}{dt} = \mathsf{I} + \mathsf{A}^{\dagger}(t)\,\mathsf{S} + \mathsf{S}\,\mathsf{A}(t).
\tag{C4}
\]
If 𝗔 is autonomous and asymptotically stable, the steady state 𝗦 satisfies the back Lyapunov equation
\[
\mathsf{A}^{\dagger}\,\mathsf{S} + \mathsf{S}\,\mathsf{A} = -\,\mathsf{I}.
\tag{C5}
\]
From this equation, 𝗦 can be easily determined and the stochastic optimals obtained.
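For an autonomous stable 𝗔, (C5) is a standard Lyapunov solve. The sketch below uses an arbitrary illustrative stable matrix (not the Eady-model operator of the text); SciPy's `solve_continuous_lyapunov(a, q)` solves a𝗫 + 𝗫a^H = q, so passing a = 𝗔† yields 𝗔†𝗦 + 𝗦𝗔 = −𝗜, and the result is checked against the time integral (C3).

```python
# Solve the back Lyapunov equation A†S + S A = -I for a stable autonomous A,
# compare S with the infinite-time integral of P†(t)P(t), and read off the
# stochastic optimals as the eigenvectors of S.
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n)) - 5.0 * np.eye(n)   # shifted to ensure stability

S = solve_continuous_lyapunov(A.conj().T, -np.eye(n))
assert np.allclose(A.conj().T @ S + S @ A, -np.eye(n))

# S also equals the t -> infinity limit of (C3); check by trapezoidal quadrature
ts = np.linspace(0.0, 15.0, 3001)
dt = ts[1] - ts[0]
integrand = np.array([expm(A * t).conj().T @ expm(A * t) for t in ts])
S_quad = dt * (0.5 * (integrand[0] + integrand[-1]) + integrand[1:-1].sum(axis=0))
assert np.allclose(S, S_quad, atol=1e-4)

# Stochastic optimals: eigenvectors of S, ordered by maintained variance
w, F = np.linalg.eigh(S)
f_opt = F[:, -1]   # forcing structure maintaining the most variance
print("largest maintained variance per unit forcing:", w[-1])
```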
The advantage of (C5) is that it is often computationally easier to obtain the optimals of an operator than those of a superoperator. We wish to determine the appropriate back Lyapunov equations for an uncertain system. Because each realization satisfies the back Lyapunov equation
\[
\frac{d\mathsf{S}}{dt} = \mathsf{I} + \mathsf{A}^{\dagger}(t)\,\mathsf{S} + \mathsf{S}\,\mathsf{A}(t),
\tag{C6}
\]
we can obtain the ensemble average equation by the same method that was used to obtain the ensemble average covariance in section 3. The resulting steady-state back Lyapunov equation for the noncommuting case is in the short autocorrelation limit:
\[
0 = \mathsf{I}
+ \left( \mathsf{A}^{\dagger} + \epsilon^{2} t_{c}\, \mathsf{B}^{\dagger 2} \right) \langle \mathsf{S} \rangle
+ \langle \mathsf{S} \rangle \left( \mathsf{A} + \epsilon^{2} t_{c}\, \mathsf{B}^{2} \right)
+ 2\, \epsilon^{2} t_{c}\, \mathsf{B}^{\dagger} \langle \mathsf{S} \rangle\, \mathsf{B}.
\tag{C7}
\]

The steady state 𝗦 in high-dimensional systems can be obtained using (C7) through the series procedure described in section 3e. Eigenanalysis of 𝗦 then provides the stochastic optimals. This development shows constructively that the stochastic optimals are rank-one covariances.

Analogously and as an alternative to the procedure used in section 4 for obtaining the optimal initial condition that leads to the greatest expected perturbation growth at any time t, we can proceed in the following manner. At time t the perturbation square amplitude for each realization of the fluctuations is
\[
\boldsymbol{\psi}^{\dagger}(t)\,\boldsymbol{\psi}(t)
= \boldsymbol{\psi}^{\dagger}(0)\,\mathsf{P}^{\dagger}(t)\,\mathsf{P}(t)\,\boldsymbol{\psi}(0).
\tag{C8}
\]
It is apparent from this expression that the eigenvector of the hermitian matrix
\[
\mathsf{H}(t) = \mathsf{P}^{\dagger}(t)\,\mathsf{P}(t),
\tag{C9}
\]
with largest eigenvalue is the initial condition that leads to the greatest amplitude at time t. The other eigenvectors of 𝗛(t) complete the set of mutually orthogonal initial conditions ordered according to their growth at time t. To obtain the expected optimal perturbation we first form the ensemble average of the equation
\[
\frac{d\mathsf{H}}{dt} = \mathsf{A}^{\dagger}(t)\,\mathsf{H} + \mathsf{H}\,\mathsf{A}(t),
\tag{C10}
\]
which is satisfied by each realization. For example, the ensemble mean equation in the white noise approximation is
\[
\frac{d\langle \mathsf{H} \rangle}{dt}
= \left( \mathsf{A}^{\dagger} + \epsilon^{2} t_{c}\, \mathsf{B}^{\dagger 2} \right) \langle \mathsf{H} \rangle
+ \langle \mathsf{H} \rangle \left( \mathsf{A} + \epsilon^{2} t_{c}\, \mathsf{B}^{2} \right)
+ 2\, \epsilon^{2} t_{c}\, \mathsf{B}^{\dagger} \langle \mathsf{H} \rangle\, \mathsf{B},
\tag{C11}
\]
which determines 〈𝗛〉. Eigenanalysis of 〈𝗛〉 determines the pure optimal initial state that leads to the largest square amplitude growth at time t.
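For the certain (fluctuation-free) dynamics the same eigenanalysis is a familiar computation: the leading eigenvector of 𝗛(t) = 𝗣†(t)𝗣(t) is the optimal initial condition, and its eigenvalue is the square of the propagator's largest singular value. The sketch below uses an arbitrary illustrative operator, not the Eady-model operator of the text.

```python
# Finite-time optimal of certain dynamics: the leading eigenvector of
# H(t) = P†(t)P(t) is the initial condition of greatest growth at time t,
# and eig(H) reproduces the squared singular values of the propagator.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n)) - 1.0 * np.eye(n)   # arbitrary operator
t = 2.0

P = expm(A * t)            # propagator P(t) for autonomous A
H = P.conj().T @ P
w, V = np.linalg.eigh(H)

# Largest eigenvalue of H equals the square of the largest singular value of P
assert np.isclose(w[-1], np.linalg.svd(P, compute_uv=False)[0] ** 2)

# The leading eigenvector attains that growth; a random unit vector does not exceed it
psi0 = V[:, -1]
assert np.isclose(np.linalg.norm(P @ psi0) ** 2, w[-1])
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
assert np.linalg.norm(P @ x) ** 2 <= w[-1] + 1e-12
print("optimal growth at t = 2:", w[-1])
```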

Fig. 1.

Schematic evolution of a sure initial condition ψ(0) in an uncertain system. After time t the evolved states ψ(t) lie in the region shown. Initially the covariance matrix 𝗖(0) = ψ(0)ψ†(0) is rank one, but at time t the covariance matrix is of rank greater than one. For example, if the final states were ψi(t) (i = 1, … , 4) with equal probability, the covariance at time t [𝗖(t) = 1/4 Σ4i=1 ψi(t)ψ†i(t)] would be rank four, representing a mixed state. By contrast, in certain systems the rank of the covariance matrix is invariant and a pure state evolves to a pure state

Citation: Journal of the Atmospheric Sciences 59, 18; 10.1175/1520-0469(2002)059<2647:PGASIU>2.0.CO;2

Fig. 2.

Second-moment exponent as a function of wavenumber in the Eady model with shear fluctuation amplitude ϵ = 1/3 and autocorrelation times tc = 6 and tc = 1. For comparison the bottom curve is the exponent in the absence of fluctuations. The meridional wavenumber is l = 0.


Fig. 3.

Expected optimal energy growth as a function of optimizing time for the uncertain Eady model with velocity fluctuations of amplitude ϵ = 1/3 and autocorrelation times tc = 2 and tc = 6. Shown are optimal growth given by the equivalent white noise propagator and by the exact propagator. For comparison the optimal growth from the mean propagator with no fluctuations is also shown. The fluctuations generally increase perturbation growth and the equivalent white noise approximation overestimates the growth. The fluctuating wind has vertical structure u(z) = z2. The wavenumbers are k = 3, l = 3. The coefficient of linear friction is r = 0.3


Fig. 4.

Structure of the optimal perturbation that leads to greatest expected energy at t = 4 in the uncertain Eady model. The amplitude of the fluctuations is ϵ = 1/3 and the autocorrelation time is tc = 6; other parameters are as in Fig. 3: (top) the optimal of the mean operator, which produces energy growth 1.14; (middle) the optimal of the equivalent white noise dynamics, which produces energy growth 3.9; (bottom) the optimal of the exact dynamics, which produces energy growth 1.4


Fig. 5.

Structure of the first EOF evolved from the sure optimal initial state shown in Fig. 4: (top) the sure evolved optimal of the mean dynamics; (middle) the first EOF of the covariance obtained using the equivalent white noise dynamics; the first EOF accounts for 96% of the evolved optimal covariance; (bottom) the first EOF of the covariance obtained using the exact dynamics. Note that despite the exaggerated growth factor obtained using the equivalent white noise dynamics, the structure of the evolved covariance is well approximated


Fig. 6.

Optimal expected energy growth as a function of wavenumber, k, for the uncertain Eady model with fluctuation amplitude ϵ = 1/3 and autocorrelation time tc = 6. Shown are the optimal growth at t = 4 obtained from the equivalent white noise propagator, the exact propagator, the propagator of Eq. (66), which would have been exact if 𝗔 and 𝗕 commuted, and the optimal growth from the mean propagator. The fluctuating wind has vertical structure u(z) = z2. The wavenumbers are k = 3, l = 3. The coefficient of linear friction is r = 0.3


Fig. 7.

(left) Expected energy growth achieved by the optimal perturbations in four time units. Shown are the growth achieved by optimal perturbations obtained using the exact ensemble mean square dynamics (circles), the growth achieved by the optimal perturbations obtained using the equivalent white noise dynamics (crosses), and the growth achieved by the optimals of the mean dynamics in the absence of fluctuations (stars). (right) EOF decomposition of the covariance at t = 4 arising from evolution of the rank-one initial covariance of the top optimal perturbation. Shown are the variance percentage accounted for by the EOFs of the covariance evolved by the exact dynamics (circles), the variance accounted for by the EOFs of the equivalent white noise dynamics (crosses), and the variance accounted for by the EOFs of the mean dynamics (stars). The covariance evolved with the mean dynamics remains rank one and a single EOF accounts for 100% of its variance. The evolved covariance under uncertain dynamics is mixed and spanned by approximately two states. The amplitude of the fluctuations is ϵ = 1/3 and the autocorrelation time is tc = 10; the model and the other parameters are as in Fig. 3


Fig. 8.

Structure of the first EOF of maintained variance in the Eady model. (top) The first EOF produced by temporally and spatially white noise additive forcing of the mean operator; the maintained energy is 2.2, and the first EOF accounts for 13.9% of the variance. (middle) The first EOF produced in the equivalent white noise approximation by temporally and spatially white noise forcing of the operator associated wind fluctuating about the mean; the fluctuating wind is u(z) = z2, the rms amplitude of the fluctuations is ϵ = 1/3, and the autocorrelation time is tc = 6; the maintained energy is 2.7, and the first EOF accounts for 26% of the variance. (bottom) The exact first EOF for the operator associated with the same fluctuating wind as in the middle panel but with the assumption that the fluctuations are Gaussian with Kubo number K = 2; the maintained energy is 2.6, and the first EOF accounts for 23.8% of the variance; the wavenumbers are k = 3, l = 3; the coefficient of linear friction is r = 0.3


Fig. 9.

(top) Structure of the first stochastic optimal, which is responsible for producing 11.1% of the total variance when the mean operator of the Eady model is stochastically forced with temporally white additive noise with the spatial structure of the stochastic optimal. (bottom) The structure of the first stochastic optimal in the fluctuating Eady model in the equivalent white noise approximation; the fluctuating wind is u(z) = z2, the rms amplitude of the fluctuations is ϵ = 1/3, and the autocorrelation time is tc = 6; this stochastic optimal is responsible for producing 20.6% of the total variance. The wavenumbers are k = 3, l = 3; the coefficient of linear friction is r = 0.3.


Fig. 10.

(left) Variance maintained by the first 10 stochastic optimals of the uncertain Eady model: the variance maintained by the equivalent white noise approximation (crosses), and for reference the variance maintained by the mean operator without fluctuations (stars). (right) The percentage of the variance of the uncertain Eady model arising from the first 10 EOFs with stochastic forcing white in space and time: variance explained by the equivalent white noise approximation (crosses), variance explained by the mean operator (stars); the fluctuating wind is u(z) = z2, the rms amplitude of the fluctuations is ϵ = 1/3 and the autocorrelation time is tc = 10; the wavenumbers are k = 3, l = 3; the coefficient of linear friction is r = 0.1


1

If 𝗣(t, s) is the propagator of 𝗔(t), then ψ(t) = ∫₀ᵗ 𝗣(t, s)f(s) ds. The fact that 𝗣(t, t) = 𝗜 and ⟨f*(t) ⊗ f(s)⟩ = δ(t − s)q implies that ⟨f* ⊗ ψ⟩ = ∫₀ᵗ ⟨f*(t) ⊗ 𝗣(t, s)f(s)⟩ ds = (1/2)q, where q is the vector covariance associated with 𝗤 = ⟨f f†⟩. The factor of 1/2 in the above expression comes from the conventional property of the delta function: ∫₀ᵗ δ(t − s) ds = 1/2. Similarly, ⟨ψ* ⊗ f⟩ = (1/2)q, and (9) and (10) follow.

2

Singular values and vectors of the necessarily hermitian covariance matrices can be obtained by eigenanalysis.

3

Unless the covariance is the identity.

4

This is due to the Eckart–Young–Mirsky property of singular value decomposition (cf. Van Huffel and Vandewalle 1991).

5

For an interesting example of this see Ambaum et al. (2001).

6

Superscripts will be used here to distinguish specific vectors of the base from their components.

7

Because (𝗜 ⊗ 𝗔)(𝗔* ⊗ 𝗜) = 𝗔* ⊗ 𝗔 = (𝗔* ⊗ 𝗜)(𝗜 ⊗ 𝗔).

8

That the positive definite (hermitian) matrices do not form a subspace is clear on noting that negative multiples of a member of the set are not in the set. However, it follows immediately that given two positive definite covariances 𝗖1 and 𝗖2 we can always construct another covariance as a convex linear combination of the two. Indeed 𝗖 = λ𝗖1 + (1 − λ)𝗖2, with 1 ≥ λ ≥ 0, is hermitian, and because for any state ψ, ψ†𝗖ψ = λ(ψ†𝗖1ψ) + (1 − λ)(ψ†𝗖2ψ) > 0, 𝗖 is also positive definite given that 𝗖1,2 are. (A subset of a vector space is said to be convex if the set contains the straight line segment connecting any two points in the set.)

9

That the maximum real part of (50) occurs for i = j = α for some α follows because for any complex numbers a, b, c, and d it can be shown by simple algebraic manipulation that ℜ{a + b* + (c + d*)²} ≤ max{2ℜ(a) + 4ℜ(c)², 2ℜ(b) + 4ℜ(d)²}. Recall also that the second-moment growth rate is equal to the maximum eigenvalue of (49) divided by two.

B1

There is an alternative way to see that the optimal covariance is by necessity rank one. As discussed earlier the positive definite covariances form a convex subset, that is, any convex combination of 𝗖1 and 𝗖2, 𝗖 = λ𝗖1 + (1 – λ)𝗖2 with 1 ≥ λ ≥ 0, produces a positive definite hermitian covariance. The pure states are the “vertices” of this subset in the sense that they cannot be written as convex combinations of covariances. The measure we have introduced is a linear measure on the covariances and consequently the optimum is attained at a vertex, that is, for a pure state covariance (cf. Farrell and Ioannou 2000).
