On the Foundation and Different Interpretations of Ensemble Sensitivity

Le Duc, Institute of Engineering Innovation, The University of Tokyo, Tokyo, Japan, and Meteorological Research Institute, Tsukuba, Japan

Takuya Kawabata, Meteorological Research Institute, Tsukuba, Japan

Daisuke Hotta, Meteorological Research Institute, Tsukuba, Japan


Abstract

In sensitivity analysis, ensemble sensitivity is defined as the regression coefficients resulting from a simple linear regression of changes of a response function on initial perturbations. One interpretation of ensemble sensitivity considers it a simplified version of regression-based adjoint sensitivity, called univariate ensemble sensitivity, whose derivation involves the so-called diagonal approximation. This approximation, which replaces the analysis error covariance matrix by a diagonal matrix with the same diagonal, helps to avoid inversion of the analysis error covariance but at the same time causes confusion in the understanding and practical application of ensemble sensitivity. Some authors have challenged this controversial interpretation by showing that univariate ensemble sensitivity is multivariate in nature, which raises the need for a firm foundation for ensemble sensitivity. In this study, we resolve the confusion by establishing a robust foundation for ensemble sensitivity that does not rely on the controversial diagonality assumption. As in some previous studies, we adopt an impact-based definition of ensemble sensitivity that takes into account the probability distribution of analysis perturbations. The mathematical results show that standardized ensemble sensitivity simultaneously carries three important quantities: 1) the standardized change of the forecast response given a one standard deviation change of an individual state variable, 2) the correlation between the forecast response and an individual state variable, and 3) the most sensitive analysis perturbation. The theory guarantees the validity of ensemble sensitivity, demonstrates its multivariate nature, and explains why ensemble sensitivity is effective in practice.

© 2023 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Le Duc, leduc@sogo.t.u-tokyo.ac.jp


1. Introduction

In numerical weather prediction (NWP), the need often arises to evaluate the sensitivity of forecasts with respect to changes in initial conditions. A natural measure of such sensitivity is the gradient of some aspect of interest of the forecasts with respect to the initial conditions. This gradient-based sensitivity is called adjoint sensitivity and can be quantified by running the adjoint models associated with the numerical models from which the forecasts are produced (Errico and Vukicevic 1992; Langland et al. 1995). Since adjoint models are essential in any four-dimensional variational (4DVAR) data assimilation system, adjoint sensitivity can be estimated relatively easily if we have access to a 4DVAR system, by utilizing the driving model and its adjoint code therein.

However, adjoint models are not always available in practice due to their high maintenance costs when NWP models are frequently updated with more sophisticated components. In such cases, adjoint sensitivity can still be estimated from ensembles of analyses and forecasts (Ancell and Hakim 2007; Hakim and Torn 2008), which are always available in any ensemble prediction system. This ensemble-based estimation of adjoint sensitivity can be reinterpreted as taking multiple linear regression coefficients between forecasts and initial conditions as a measure of sensitivity. Regression coefficients in this case are determined by two covariances: analysis error covariances A and cross covariances between forecasts and initial conditions. In practice, these covariances are approximated by sample covariances calculated from analysis and forecast perturbations.

The regression-based estimation of adjoint sensitivity requires inversion of A, which is impractical since A is a very large matrix in NWP. In its particular application to observation targeting, Ancell and Hakim (2007) showed that the inverse of A need not be calculated explicitly. This is because the inverse of A that appears in the regression-based estimation of adjoint sensitivity cancels out with A that comes from the definition of the impact of targeted observations. Along with its derivation, they introduced a new quantity for sensitivity, called ensemble sensitivity, or more precisely, univariate ensemble sensitivity, which helps to simplify the mathematical expression. Ensemble sensitivity shares the same form as the regression-based expression of adjoint sensitivity but with A replaced by a diagonal matrix D, which is derived from A by setting all off-diagonal elements of A to zeros. With this diagonal matrix, ensemble sensitivity can be reinterpreted as simple linear regression coefficients between forecasts and initial conditions. Note that, as demonstrated by Ancell and Hakim (2007), despite its designation as univariate sensitivity, its nature is in fact multivariate in the sense that a univariate ensemble sensitivity of a response function with respect to a single element of the initial state involves adjoint sensitivities to all elements of the initial state and the covariance relationship of this single element with all elements of the initial state.

The use of ensemble sensitivity has expanded beyond its original derivation as a convenient tool or an intermediate step in quantifying the impact of targeted observations, and it has since been used independently in many sensitivity studies (Torn and Hakim 2008; Garcies and Homar 2009; Torn 2010; Chang et al. 2013; Hanley et al. 2013; Ito and Wu 2013; Bednarczyk and Ancell 2015; Brown and Hakim 2015; Hill et al. 2016; Yokota et al. 2016). The extensive use of ensemble sensitivity in sensitivity analysis can be attributed to its simplicity of formulation, whereby the otherwise expensive inversion of A is no longer required. However, ensemble sensitivity is not the same as adjoint sensitivity, and its relevance in the context of sensitivity analysis needs justification. Owing to the resemblance of ensemble sensitivity to simple linear regression coefficients, one justification could be given by invoking the so-called diagonal approximation, which assumes that A can be approximated by a diagonal matrix with the same diagonal elements as those of A (Hacker and Lei 2015; Limpert and Houston 2018). Under this assumption, adjoint sensitivity reduces to ensemble sensitivity. This justification is clearly inadequate in most situations in NWP, where analysis errors at adjacent grid points are known to be highly correlated. Ancell and Hakim (2007) and later Ancell and Coleman (2022) challenged this controversial interpretation by showing that ensemble sensitivity is in fact a product of regression coefficients of individual state variables and adjoint sensitivity, and is thus multivariate in nature. In other words, the diagonal matrix D is not an approximation but simply part of the definition of ensemble sensitivity. However, this interpretation does not explain the reason underlying the crucial step of replacing A by its diagonal, which remains the source of confusion in understanding the nature of ensemble sensitivity.

Along this vein, in order to dispense with the controversial diagonality assumption, Hacker and Lei (2015) returned to the interpretation of the ensemble-based estimation of adjoint sensitivity as multiple linear regression coefficients. They avoided the inversion of A by reformulating the underlying regression problem as an underdetermined least squares problem, and identified the minimum-norm solution of this problem with ensemble sensitivity. An advantage of this approach is that we only need to invert a small matrix in ensemble space. The ensemble sensitivity resulting from this approach is called multivariate ensemble sensitivity, to distinguish it from the univariate ensemble sensitivity introduced by Ancell and Hakim (2007).

It is expected that multivariate ensemble sensitivity should differ from univariate ensemble sensitivity, since the former does not rely on the diagonal approximation, as claimed and demonstrated by Hacker and Lei (2015) and Ren et al. (2019). Curiously, however, in their numerical experiments, Hacker and Lei (2015) found that the changes of forecasts yielded by univariate and multivariate ensemble sensitivity in response to one standard deviation changes of initial conditions are identical when localization is not applied (see Fig. 1a in their paper). The authors did not give an adequate explanation for this coincidence. Note that localization is an ad hoc technique to mitigate the impact of sampling errors on sample covariances. Thus, in the limiting case when we can access a large number of ensemble members so that localization can be turned off, those results imply an interesting dilemma: despite the difference in the foundations of univariate and multivariate ensemble sensitivity, they have the same effect on changes of forecasts. In other words, we turn to multivariate ensemble sensitivity to avoid the diagonality assumption, and we end up with univariate ensemble sensitivity again. How can we explain the effectiveness of univariate ensemble sensitivity given its controversial diagonality assumption?

In this paper, we show that it is not accidental that univariate and multivariate ensemble sensitivity yield the same changes of forecasts. Underlying this dilemma is a theory that establishes a robust foundation for ensemble sensitivity. In short, ensemble sensitivity is equivalent to an effective adjoint sensitivity that yields the same forecast response as adjoint sensitivity without the need to calculate perturbations at correlated points. This is justified by the proof that the same forecast response is obtained for any analysis error covariance. By showing that ensemble sensitivity is a rigorous concept, we can resolve the confusion regarding its use in practice. Furthermore, we prove that the ensemble sensitivities to all elements of the initial state form the most sensitive initial perturbation, which maximizes changes of forecasts among all initial perturbations of the same magnitude, a fact that, to the best of the authors' knowledge, has hitherto gone unrecognized in the literature.

This paper is organized as follows. Section 2 prepares the mathematical background for developing a robust foundation of ensemble sensitivity. In section 3, we revisit the formerly proposed definition of sensitivity as regression coefficients and seek a new definition that also takes into account the probability distribution of initial perturbations. Section 4 shows how ensemble sensitivity follows naturally from the established framework. Another important role of the proposed ensemble sensitivity is examined in section 5, in which ensemble sensitivity is proved to form the most sensitive analysis perturbation. Finally, section 6 summarizes the main findings of this study.

2. Background on ensemble sensitivity

Given a model M that propagates the model state x_0 at t = 0 to x_t = M[x_0] at time t, and a scalar forecast metric J(x_t) as a function of x_t, the forecast response function with respect to x_0 is given by the composite function J(M[x_0]). When x_0 varies from x_0 to x_0 + Δx_0, the change in the forecast response, ΔJ, can be estimated from the first-order Taylor approximation of J(M[x_0 + Δx_0]):

$$\Delta J = \left[\frac{\partial J}{\partial \mathbf{x}_t}\right]\left[\frac{\partial M}{\partial \mathbf{x}_0}\right]\Delta\mathbf{x}_0 = \left[\frac{\partial J}{\partial \mathbf{x}_0}\right]\Delta\mathbf{x}_0, \tag{1}$$

where [∂J/∂x_t] is the Jacobian of J(x_t), which is a row vector, [∂M/∂x_0] is the Jacobian of M[x_0], which is called the tangent linear model of M, and Δx_0 denotes the analysis perturbation. It is implicitly assumed that these Jacobians are evaluated at their corresponding reference points or trajectories. The elements of the column vector [∂J/∂x_0]^T in (1) are called adjoint sensitivity, and can be obtained from the sensitivity at time t, [∂J/∂x_t]^T, by running the adjoint model [∂M/∂x_0]^T backward in time:

$$\left[\frac{\partial J}{\partial \mathbf{x}_0}\right]^T = \left[\frac{\partial M}{\partial \mathbf{x}_0}\right]^T\left[\frac{\partial J}{\partial \mathbf{x}_t}\right]^T. \tag{2}$$

If we right-multiply both sides of (1) by Δx_0^T and then take the expectation, we obtain

$$E[\Delta J\,\Delta\mathbf{x}_0^T] = \left[\frac{\partial J}{\partial \mathbf{x}_0}\right]E[\Delta\mathbf{x}_0\,\Delta\mathbf{x}_0^T] = \left[\frac{\partial J}{\partial \mathbf{x}_0}\right]\mathbf{A}, \tag{3}$$

where E[⋅] denotes the expectation operator and A = E[Δx_0 Δx_0^T] is the analysis error covariance. From (3), it is easy to derive an expression for the adjoint sensitivity [∂J/∂x_0]^T without using the adjoint model:

$$\left[\frac{\partial J}{\partial \mathbf{x}_0}\right]^T = E[\Delta\mathbf{x}_0\,\Delta\mathbf{x}_0^T]^{-1}E[\Delta\mathbf{x}_0\,\Delta J] = \mathbf{A}^{-1}\operatorname{cov}(\Delta\mathbf{x}_0, \Delta J), \tag{4}$$

where cov(⋅) denotes the covariance operator. If we put all analysis perturbations into the columns of an n × K matrix X, where n and K denote, respectively, the size of Δx_0 and the number of ensemble members, and collect the corresponding response perturbations into a 1 × K matrix Y, then [∂J/∂x_0]^T can be estimated by approximating the covariances in (4) with their sample counterparts:

$$\left[\frac{\partial J}{\partial \mathbf{x}_0}\right]^T = (\mathbf{X}\mathbf{X}^T)^{-1}\mathbf{X}\mathbf{Y}^T. \tag{5}$$

Thus, (5) shows that adjoint sensitivity can be viewed as the regression coefficients obtained when we perform a multiple linear regression of ΔJ on Δx_0.
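As a concrete check of (5), the following minimal NumPy sketch recovers a known gradient by multiple linear regression; the dimensions, seed, and perfectly linear synthetic response are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 5, 200                      # small state, and K > n so that X X^T is invertible

# Illustrative synthetic ensemble: analysis perturbations X and a response
# that depends exactly linearly on them.
X = rng.standard_normal((n, K))
g_true = rng.standard_normal(n)    # "true" adjoint sensitivity [dJ/dx0]^T
Y = g_true @ X                     # response perturbations, shape (K,)

# Eq. (5): adjoint sensitivity as multiple linear regression coefficients,
# (X X^T)^{-1} X Y^T.
g_est = np.linalg.solve(X @ X.T, X @ Y)
print(np.allclose(g_est, g_true))  # the regression recovers the gradient
```

With K > n and a linear response, the regression reproduces the gradient exactly up to round-off; the rank-deficient case K < n, which is the relevant one in NWP, is addressed next.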

Due to the limited number of ensemble members, XX^T is usually a low-rank matrix, and consequently its inverse does not exist in practice. Ancell and Hakim (2007) proposed to circumvent this issue by replacing the sample covariance XX^T in (5) with a diagonal matrix D whose diagonal is that of XX^T, i.e., the analysis sample variances. The sensitivity that results from this procedure was termed ensemble sensitivity by Ancell and Hakim (2007). With XX^T replaced by D, the matrix to be inverted in (5) becomes full rank, and the inversion is trivial. Ensemble sensitivity has been shown to be fundamentally different from adjoint sensitivity. However, its nature is usually justified by the so-called diagonal approximation (that A can be assumed diagonal), which obscures the interpretation of the ensemble sensitivity thus defined. We will discuss this issue in more detail in section 4.
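Replacing XX^T by D amounts to performing n independent simple linear regressions, one per state element, which the sketch below illustrates; the synthetic ensemble and its sizes are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 500, 40                       # n >> K: X X^T is rank deficient, as in NWP
X = rng.standard_normal((n, K))
X -= X.mean(axis=1, keepdims=True)   # center the analysis perturbations
Y = rng.standard_normal(K)
Y -= Y.mean()

# Ensemble sensitivity: D^{-1} X Y^T with D = diag(X X^T); no large-matrix
# inversion is needed.
D = np.sum(X**2, axis=1)             # diagonal of X X^T
s_ens = (X @ Y) / D

# Identical to n independent simple linear regressions of dJ on dx0_i.
s_check = np.array([np.polyfit(X[i], Y, 1)[0] for i in range(n)])
print(np.allclose(s_ens, s_check))
```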

An alternative, presumably less ad hoc way to alleviate this issue is to introduce Tikhonov regularization into the regression problem, thereby turning it into a ridge regression problem whose regression coefficients are given by

$$\left[\frac{\partial J}{\partial \mathbf{x}_0}\right]^T = (\mathbf{X}\mathbf{X}^T + \varepsilon\mathbf{I})^{-1}\mathbf{X}\mathbf{Y}^T, \tag{6}$$

where ε is a small parameter called the ridge parameter. Letting ε go to zero, we have

$$\left[\frac{\partial J}{\partial \mathbf{x}_0}\right]^T = (\mathbf{X}\mathbf{X}^T)^{\dagger}\mathbf{X}\mathbf{Y}^T = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{\dagger}\mathbf{Y}^T, \tag{7}$$

where the dagger (†) denotes the Moore–Penrose inverse, and the identity (XX^T)†X = X(X^TX)† is a property of the Moore–Penrose inverse.

Thus, (7) shows that we can calculate adjoint sensitivity using either the primal form (XX^T)†XY^T or the dual form X(X^TX)†Y^T. The advantage of the dual form is that we only have to invert the small K × K matrix X^TX, which is much easier than inverting the very large n × n matrix XX^T in the primal form. However, the side effect is that localization is difficult to introduce into the dual form. Hacker and Lei (2015) proposed this form for estimating adjoint sensitivity using ensembles of analyses and forecasts. It is clear from (7) that the solution obtained in Hacker and Lei (2015) is simply the regression coefficients represented in their dual form. Therefore, the ensemble-based adjoint sensitivity that they examined is identical to that considered in Ancell and Hakim (2007). In the following sections, we mainly work with the primal form in the mathematical treatment.

3. Standardized ensemble sensitivity

To avoid the potential confusion inherent in the diagonality assumption, we will define ensemble sensitivity through the forecast response, considering both the linear transformation [∂J/∂x_0] and the perturbations Δx_0. In contrast, the regression-based version of adjoint sensitivity given in (7) identifies sensitivity only with the linear transformation between Δx_0 and ΔJ. Thus, while the former approaches sensitivity from the viewpoint of predictability, the latter approaches it from the viewpoint of dynamical systems. The former approach underlines the important role of the distribution of initial perturbations besides the linear transformation, since Δx_0 is not arbitrary and has to follow a certain probability distribution, which in our problem is assumed to be a multivariate normal distribution with mean 0 and covariance A.

Without considering the effect of the probability distribution of Δx_0 on ΔJ, useful information may be missed if [∂J/∂x_0] alone is used as a measure of sensitivity. To illustrate this problem, we assume that A is a diagonal matrix for simplicity. With this assumption, the adjoint sensitivity of J with respect to the ith element of x_0 can be calculated explicitly as

$$s_{ai} = \left[\frac{\partial J}{\partial \mathbf{x}_0}\right]_i^T = \frac{\operatorname{cov}(\Delta x_{0i}, \Delta J)}{\operatorname{var}(\Delta x_{0i})} = \operatorname{cor}(\Delta x_{0i}, \Delta J)\times\frac{\operatorname{std}(\Delta J)}{\operatorname{std}(\Delta x_{0i})}, \tag{8}$$

where Δx_0i is the analysis perturbation of the ith element, and std(⋅), var(⋅), and cor(⋅) denote the standard deviation, variance, and correlation operators, respectively. Clearly, from (8) the magnitude of an adjoint sensitivity s_ai is determined by two factors: cor(Δx_0i, ΔJ) and std(Δx_0i). Knowing only s_ai, we have no information on the values of cor(Δx_0i, ΔJ) and std(Δx_0i). Thus, a large s_ai can result from either a small or a large correlation cor(Δx_0i, ΔJ), depending on the prescribed magnitude of Δx_0i. In the case of a small correlation, we arrive at an interesting situation: although ΔJ is not sensitive to Δx_0i, prescribed perturbations still cause large changes in ΔJ. Similarly, an element with a strong correlation with ΔJ, i.e., |cor(Δx_0i, ΔJ)| ≈ 1, can be associated with a small s_ai if std(Δx_0i) is large enough. In this case, we arrive at another interesting situation: although ΔJ is determined by Δx_0i, prescribed perturbations only cause small changes in ΔJ. To deal with such potential cases, Garcies and Homar (2009) chose to exclude elements with too small correlations from their sensitivity analysis by setting a lower bound for cor(Δx_0i, ΔJ), while Limpert and Houston (2018) proposed to use cor(Δx_0i, ΔJ) as a measure of sensitivity in place of s_ai.

The use of cor(Δx_0i, ΔJ) as a measure of sensitivity is indeed the regression-based definition of sensitivity in sensitivity analysis (Saltelli et al. 2008). According to this definition, the sensitivity of ΔJ with respect to the variable Δx_0i is defined as the corresponding standardized regression coefficient when we perform a multiple linear regression between ΔJ and Δx_0:

$$s_i = \left[\frac{\partial J}{\partial \mathbf{x}_0}\right]_i^T\times\frac{\operatorname{std}(\Delta x_{0i})}{\operatorname{std}(\Delta J)} = \operatorname{cor}(\Delta x_{0i}, \Delta J). \tag{9}$$

In other words, we make [∂J/∂x_0]_i^T dimensionless by normalizing ΔJ and Δx_0i by their standard deviations, which also accounts for the typical magnitude of the impact of Δx_0i on ΔJ. It is worth emphasizing that the definition (9) also assumes that A has a diagonal form, i.e., that all Δx_0i are independent.
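For a diagonal A, relation (9) can be illustrated numerically with a large synthetic sample; the variances, gradient, and sample size below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
K = 200_000                          # large sample so sample statistics are accurate
stds = np.array([0.5, 2.0, 1.0])     # std(dx0_i) for a diagonal A
X = stds[:, None] * rng.standard_normal((3, K))
g = np.array([1.0, -0.5, 2.0])       # adjoint sensitivity [dJ/dx0]^T
dJ = g @ X                           # linear forecast response

# Eq. (9): standardized regression coefficient s_i = g_i std(dx0_i)/std(dJ) ...
s = g * stds / dJ.std()
# ... equals the correlation cor(dx0_i, dJ) in the diagonal case.
corr = np.array([np.corrcoef(X[i], dJ)[0, 1] for i in range(3)])
print(np.allclose(s, corr, atol=1e-2))
```

Note how the second element, despite carrying a smaller gradient than the third, has the larger standardized sensitivity because its prescribed spread std(Δx_02) = 2 is the largest.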
To extend the definition (9) to the general case where A can be any symmetric positive definite matrix, we first reinterpret the sensitivity in (9) as the change of the forecast response, normalized by its standard deviation, when the individual analysis perturbation Δx_0i equals its standard deviation std(Δx_0i):

$$s_i = \frac{1}{\operatorname{std}(\Delta J)}\Delta J[\operatorname{std}(\Delta x_{0i})] = \frac{1}{\operatorname{std}(\Delta J)}\frac{\operatorname{cov}(\Delta x_{0i}, \Delta J)}{\operatorname{var}(\Delta x_{0i})}\operatorname{std}(\Delta x_{0i}). \tag{10}$$

In fact, this view of sensitivity was followed in several studies (Torn 2010; Hacker and Lei 2015; Ren et al. 2019). Then, we define the standardized ensemble sensitivity s_i using a formula similar to (10):

$$s_i = \frac{1}{\operatorname{std}(\Delta J)}\Delta J(\Delta\mathbf{x}_{0i}) = \frac{1}{\operatorname{std}(\Delta J)}\Delta\mathbf{x}_{0i}^T\left[\frac{\partial J}{\partial \mathbf{x}_0}\right]^T, \tag{11}$$

where Δ**x**_0i is the analysis perturbation in which the ith element Δx_0i is equal to std(Δx_0i), and the adjoint sensitivity [∂J/∂x_0]^T is plugged into (11) using the transpose of (1). Note that there is a subtle difference between (10) and (11): whereas the bold symbol Δ**x**_0i in (11) is a vector, the italic symbol Δx_0i in (10) is a scalar. It is easy to verify that (10) is a special case of (11) obtained by setting Δ**x**_0i = [0, …, 0, std(Δx_0i), 0, …, 0]^T. A difficulty in the general case is that when Δx_0i = std(Δx_0i), the other elements Δx_0j (j ≠ i) are not necessarily zero, due to the correlations imposed by A. This means that, unlike the diagonal case, Δ**x**_0i cannot be taken as [0, …, 0, std(Δx_0i), 0, …, 0]^T in general.

4. A unified theory for ensemble sensitivity

There exist many probability distributions of Δx_0 that have mean 0 and covariance A. To find a closed form for Δ**x**_0i in (11), we assume that Δx_0 follows a multivariate normal distribution N(0, A). Our problem is how to determine Δx_0j (j ≠ i) given Δx_0i = std(Δx_0i). Ren et al. (2019) proposed that Δx_0j can be estimated from Δx_0i by performing simple linear regressions of Δx_0j on Δx_0i. Here, we show a more systematic way to assign appropriate values to Δx_0j (j ≠ i) once Δx_0i is known. First, note that we take Δ**x**_0i = [0, …, 0, std(Δx_0i), 0, …, 0]^T when A is diagonal, i.e., all Δx_0j (j ≠ i) are set to zero. This shows that in the absence of correlations between Δx_0j and Δx_0i, we take the expectations of Δx_0j, which are zero, as their typical values. By extension, this suggests that in the presence of correlations between Δx_0j and Δx_0i, the conditional expectations of Δx_0j, i.e., E[Δx_0j | Δx_0i = std(Δx_0i)], should be taken as their typical values.

More precisely, denoting by Δx_0I the vector consisting of all Δx_0j (j ≠ i), excluding Δx_0i, our task is to calculate the expectation of the conditional probability distribution p[Δx_0I | Δx_0i = std(Δx_0i)]. It is well known that if two variables are jointly Gaussian, the conditional distribution of one variable conditioned on the other is also Gaussian (see chapter 2 of Bishop 2006 for the proof). Since the joint distribution of Δx_0I and Δx_0i is the Gaussian distribution N(0, A), the conditional expectation of Δx_0I has the following form:

$$E[\Delta\mathbf{x}_{0I}\mid\Delta x_{0i}] = E[\Delta\mathbf{x}_{0I}] + \operatorname{cov}(\Delta\mathbf{x}_{0I}, \Delta x_{0i})\operatorname{var}^{-1}(\Delta x_{0i})(\Delta x_{0i} - E[\Delta x_{0i}]) = \operatorname{cov}(\Delta\mathbf{x}_{0I}, \Delta x_{0i})\operatorname{var}^{-1}(\Delta x_{0i})\,\Delta x_{0i}. \tag{12}$$

Plugging Δx_0i = std(Δx_0i) into (12), we have

$$E[\Delta\mathbf{x}_{0I}\mid\Delta x_{0i}] = \operatorname{cov}(\Delta\mathbf{x}_{0I}, \Delta x_{0i})\operatorname{var}^{-1}(\Delta x_{0i})\operatorname{std}(\Delta x_{0i}) = \boldsymbol{\Sigma}_I\operatorname{cor}(\Delta\mathbf{x}_{0I}, \Delta x_{0i}), \tag{13}$$

where Σ_I is the diagonal matrix with std(Δx_0j) (j ≠ i) along its diagonal. Finally, combining (13) with Δx_0i = std(Δx_0i), we obtain

$$\Delta\mathbf{x}_{0i} = \boldsymbol{\Sigma}\operatorname{cor}(\Delta\mathbf{x}_0, \Delta x_{0i}), \tag{14}$$

where Σ is the diagonal matrix containing the standard deviations of the elements of Δx_0 along its diagonal.
With the adjoint sensitivity [∂J/∂x_0]^T given in (4) and Δ**x**_0i given in (14), it is easy to calculate s_i in (11):

$$s_i = \frac{1}{\operatorname{std}(\Delta J)}\Delta\mathbf{x}_{0i}^T\left[\frac{\partial J}{\partial \mathbf{x}_0}\right]^T = \frac{1}{\operatorname{std}(\Delta J)}\operatorname{cor}(\Delta x_{0i}, \Delta\mathbf{x}_0)\,\boldsymbol{\Sigma}\mathbf{A}^{-1}\operatorname{cov}(\Delta\mathbf{x}_0, \Delta J). \tag{15}$$

Noticing that cor(Δx_0i, Δx_0) is the ith row vector of the correlation matrix C = cor(Δx_0, Δx_0), (15) can be rewritten in a more compact form by stacking all s_i into a vector s:

$$\mathbf{s} = \frac{1}{\operatorname{std}(\Delta J)}\mathbf{C}\boldsymbol{\Sigma}\mathbf{A}^{-1}\operatorname{cov}(\Delta\mathbf{x}_0, \Delta J) = \mathbf{C}\boldsymbol{\Sigma}\mathbf{A}^{-1}\boldsymbol{\Sigma}\operatorname{cor}(\Delta\mathbf{x}_0, \Delta J) = \operatorname{cor}(\Delta\mathbf{x}_0, \Delta J). \tag{16}$$

To obtain the final result, we have used the two identities A = ΣCΣ and cov(Δx_0, ΔJ) = Σ cor(Δx_0, ΔJ) std(ΔJ). Equation (16) shows that in the general case we get the same result s_i = cor(Δx_0i, ΔJ) as in the diagonal case (9).
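The collapse of (15) to (16) for an arbitrary symmetric positive definite A can also be verified directly. The sketch below builds a random nondiagonal A and compares both sides; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                 # a generic (nondiagonal) SPD analysis covariance
Sigma = np.diag(np.sqrt(np.diag(A)))        # diagonal matrix of standard deviations
C = np.linalg.inv(Sigma) @ A @ np.linalg.inv(Sigma)   # correlation matrix, A = Sigma C Sigma

g = rng.standard_normal(n)                  # adjoint sensitivity [dJ/dx0]^T
cov_x_dJ = A @ g                            # cov(dx0, dJ) implied by eq. (3)
std_dJ = np.sqrt(g @ A @ g)                 # std(dJ) under dx0 ~ N(0, A)

# Eq. (15)/(16): s = C Sigma A^{-1} cov(dx0, dJ) / std(dJ) ...
s = C @ Sigma @ np.linalg.solve(A, cov_x_dJ) / std_dJ
# ... collapses to the plain correlation vector cor(dx0, dJ), for any such A.
cor_x_dJ = np.linalg.inv(Sigma) @ cov_x_dJ / std_dJ
print(np.allclose(s, cor_x_dJ))
```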
Given the above fact that s is the same for any symmetric positive definite matrix A, we are now ready to clarify the scientific meaning of ensemble sensitivity. First, we define the effective adjoint sensitivity [∂J/∂x_0]_e^T as the linear transformation that, when applied to the univariate perturbation [0, …, 0, std(Δx_0i), 0, …, 0]^T, yields the same forecast response as adjoint sensitivity applied to the multivariate perturbation (14):

$$\Delta J(\Delta\mathbf{x}_{0i}) = \left[\frac{\partial J}{\partial \mathbf{x}_0}\right]\Delta\mathbf{x}_{0i} = \left[\frac{\partial J}{\partial \mathbf{x}_0}\right]_e\begin{bmatrix}0\\\vdots\\\operatorname{std}(\Delta x_{0i})\\\vdots\\0\end{bmatrix} = \left[\frac{\partial J}{\partial \mathbf{x}_0}\right]_{ei}\operatorname{std}(\Delta x_{0i}), \tag{17}$$

where [∂J/∂x_0]_ei is the ith element of [∂J/∂x_0]_e^T. In other words, we absorb all correlations from the other elements into [∂J/∂x_0]_e^T so that the calculation of forecast responses proceeds as in the univariate case. The left-hand side of (17) has already been calculated in (15); therefore, we have

$$s_i\operatorname{std}(\Delta J) = \left[\frac{\partial J}{\partial \mathbf{x}_0}\right]\Delta\mathbf{x}_{0i} = \left[\frac{\partial J}{\partial \mathbf{x}_0}\right]_{ei}\operatorname{std}(\Delta x_{0i}), \tag{18}$$

leading to

$$\left[\frac{\partial J}{\partial \mathbf{x}_0}\right]_e^T = \operatorname{std}(\Delta J)\,\boldsymbol{\Sigma}^{-1}\mathbf{s} = \mathbf{D}^{-1}\operatorname{cov}(\Delta\mathbf{x}_0, \Delta J), \tag{19}$$

where we have used (16) to get the final result. This effective adjoint sensitivity is exactly the ensemble sensitivity introduced by Ancell and Hakim (2007), but now with a clear meaning: this sensitivity effectively enables us to calculate forecast responses from any individual perturbation without considering correlation effects from adjacent points, since such correlations are already accounted for by ensemble sensitivity. Note that the form (19) only holds when analysis perturbations follow a multivariate normal distribution.
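Relation (17), equating adjoint sensitivity acting on the multivariate perturbation (14) with the effective (ensemble) sensitivity (19) acting on the corresponding univariate perturbation, can be checked numerically; the random covariance and gradient below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)        # analysis error covariance
sig = np.sqrt(np.diag(A))          # standard deviations std(dx0_i)
g = rng.standard_normal(n)         # adjoint sensitivity [dJ/dx0]^T

# Ensemble (effective adjoint) sensitivity, eq. (19): D^{-1} cov(dx0, dJ).
g_eff = (A @ g) / np.diag(A)

i = 2                              # an arbitrary element
# Multivariate perturbation (14): conditional mean given dx0_i = std(dx0_i),
# which equals Sigma cor(dx0, dx0_i) = A[:, i] / sig[i].
dx_multi = A[:, i] / sig[i]
dJ_adjoint = g @ dx_multi          # adjoint sensitivity on the full perturbation
dJ_effective = g_eff[i] * sig[i]   # ensemble sensitivity on the univariate perturbation
print(np.allclose(dJ_adjoint, dJ_effective))   # eq. (17) holds
```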
We now explore the relations between the three sensitivities [∂J/∂x_0]^T, [∂J/∂x_0]_e^T, and s. These relations are easy to grasp if we normalize x_0 using the mean analysis x_a and Σ:

$$\tilde{\mathbf{x}}_0 = \boldsymbol{\Sigma}^{-1}(\mathbf{x}_0 - \mathbf{x}_a), \tag{20}$$

to transform [∂J/∂x_0]^T and [∂J/∂x_0]_e^T into the normalized sensitivities [∂J/∂x̃_0]^T = Σ[∂J/∂x_0]^T and [∂J/∂x̃_0]_e^T = Σ[∂J/∂x_0]_e^T. From (4) and (16), it is easy to deduce the following identity:

$$\left[\frac{\partial J}{\partial \tilde{\mathbf{x}}_0}\right]_e^T = \operatorname{std}(\Delta J)\,\mathbf{s} = \boldsymbol{\Sigma}^{-1}\mathbf{A}\left[\frac{\partial J}{\partial \mathbf{x}_0}\right]^T = \mathbf{C}\left[\frac{\partial J}{\partial \tilde{\mathbf{x}}_0}\right]^T. \tag{21}$$

Equation (21) clearly reveals the multivariate nature of ensemble sensitivity hinted at by Ancell and Hakim (2007): the normalized adjoint sensitivity is mapped to the normalized ensemble sensitivity through the correlation matrix C. Thus, while [∂J/∂x̃_0]^T describes significant sensitivity of ΔJ with respect to some elements Δx̃_0i, reflecting the model dynamics only, C spreads this information over all elements of x̃_0 to account for the correlations between them in a multivariate way.

As we have seen above, ensemble sensitivity follows naturally from the definition of standardized ensemble sensitivity (11) and the multivariate normal distribution of Δx_0. The controversial diagonality assumption is irrelevant to the construction of ensemble sensitivity. This diagonality assumption has been the source of confusion in understanding and applying ensemble sensitivity. First, given the inadequacy of neglecting spatial and interelement correlations in analysis perturbations, ensemble sensitivity has often been considered a rough approximation of adjoint sensitivity, and the validity of its use has been justified only by its simplicity. Second, since the diagonality assumption is equivalent to assuming that all elements Δx_0i are independent, ensemble sensitivity has often been regarded as univariate in nature, which has led to a common conception that sensitivity as represented by ensemble sensitivity is overestimated because significant contributions from other elements are ignored. The mathematical arguments in this section show that ensemble sensitivity is multivariate in nature and is not just an approximation to adjoint sensitivity, but rather a distinct measure of sensitivity with a robust mathematical foundation.

In practice, both the vector s (standardized ensemble sensitivity) and std(ΔJ)s (normalized ensemble sensitivity) can be used as measures of sensitivity. As shown by (16), the s_i are simply correlations between the forecast response and initial conditions. As a result, it is relatively easy to estimate s_i, and localization can even be imposed on s. Furthermore, because the s_i are dimensionless, unlike the std(ΔJ)s_i, which carry units, they have the advantage of allowing us to compare sensitivities of different forecast metrics. However, since the std(ΔJ)s_i give changes of the forecast response when Δx_0i = std(Δx_0i), if the forecast metrics are of the same type so that all std(ΔJ)s_i have the same unit, e.g., rainfall over different areas, it is better to use std(ΔJ)s_i. This is because two forecast metrics with the same correlations s_i = cor(Δx_0i, ΔJ) will show different responses if their standard deviations std(ΔJ) are different.

Equation (16) points out a surprising fact: standardized ensemble sensitivities are simply the cross-correlation coefficients cor(Δx_0, ΔJ), regardless of the form of A. This clearly explains why Hacker and Lei (2015) observed similar estimates of changes in forecast responses with univariate and multivariate ensemble sensitivity when localization was not applied in their numerical experiments. Note that the final Δ**x**_0i in their study can be shown to take the same form as Δ**x**_0i in (14), although they performed simple linear regressions of Δx_0j on Δx_0i to estimate Δx_0j. In contrast, when localization was applied, they observed significant differences in the estimates of ΔJ between univariate and multivariate ensemble sensitivity. In the mathematical arguments that led to (16), we relied on the true covariances and not their approximations, i.e., the sample covariances. Consequently, even if localization is applied, the estimated changes of forecasts should be similar between multivariate and univariate ensemble sensitivity as long as localization reduces the differences between the sample and true covariances. Therefore, the differences in ΔJ that they observed are likely artifacts of the way localization was applied in their study. These authors relied on the dual form Y(X^TX)†X^T to estimate adjoint sensitivity, and it is difficult to introduce localization into this form, as discussed in section 2. As a result, they could only apply localization to the analysis perturbations Δ**x**_0i. This inconsistency between the calculation of the regression coefficients and that of the analysis perturbations might explain these apparent differences.

5. Ensemble sensitivity as the most sensitive perturbation

In this section, we show the relevance of ensemble sensitivity to another important problem related to sensitivity. Instead of estimating how each state element will change the forecast response, we now examine which analysis perturbation, among all possible ones with the same magnitude, will yield the largest change in the forecast response. Due to the collaborative effect among different elements, this most sensitive analysis perturbation is not expected to coincide with any individual analysis perturbation. As we shall see, intriguingly, ensemble sensitivity is closely involved in this optimization problem.

Our optimization problem is to find the analysis perturbation Δx0 that maximizes the norm of ΔJ subject to the constraint ‖Δx0‖ = ρ, where ρ is an arbitrary positive number. Since ΔJ is a scalar, its norm is its absolute value, which can be calculated from (1) and (4):
|ΔJ| = |[∂J/∂x0] Δx0| = |covᵀ(Δx0, ΔJ) A⁻¹ Δx0|.  (22)
Using the two identities A = ΣCΣ and cov(Δx0, ΔJ) = std(ΔJ) Σs, where s = cor(Δx0, ΔJ), we rewrite |ΔJ| as a function of the normalized analysis perturbation Δx̃0 = Σ⁻¹Δx0:
|ΔJ| = std(ΔJ) |sᵀ C⁻¹ Δx̃0| = std(ΔJ) |⟨s, Δx̃0⟩_{C⁻¹}|,  (23)
where ⟨·, ·⟩_{C⁻¹} denotes the inner product induced by C⁻¹: ⟨x, y⟩_{C⁻¹} = xᵀ C⁻¹ y. Since Δx0 is drawn from the Gaussian distribution N(0, A), it is natural to use the following Mahalanobis norm for Δx0:
‖Δx0‖_{A⁻¹} = √(Δx0ᵀ A⁻¹ Δx0) = ρ,  (24)
which can be rewritten as
‖Δx0‖_{A⁻¹} = √(Δx̃0ᵀ C⁻¹ Δx̃0) = √⟨Δx̃0, Δx̃0⟩_{C⁻¹} = ‖Δx̃0‖_{C⁻¹} = ρ,  (25)
where ‖·‖_{C⁻¹} denotes the Mahalanobis norm associated with the matrix C⁻¹.
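The change of variables behind (24) and (25) can be sanity-checked numerically. The sketch below uses an arbitrary synthetic SPD covariance (an assumption for illustration, not a quantity from the paper) and verifies both the factorization A = ΣCΣ and the norm identity ‖Δx0‖_{A⁻¹} = ‖Δx̃0‖_{C⁻¹}:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
L = rng.standard_normal((n, n))
A = L @ L.T + n * np.eye(n)          # toy SPD analysis error covariance

sigma = np.sqrt(np.diag(A))
Sigma = np.diag(sigma)
C = A / np.outer(sigma, sigma)       # correlation matrix

assert np.allclose(A, Sigma @ C @ Sigma)   # the identity A = Sigma C Sigma

dx0 = rng.standard_normal(n)         # an arbitrary analysis perturbation
dx0_tilde = dx0 / sigma              # normalized perturbation Sigma^-1 dx0

# ||dx0||^2_{A^-1} equals ||dx0_tilde||^2_{C^-1}
lhs = dx0 @ np.linalg.solve(A, dx0)
rhs = dx0_tilde @ np.linalg.solve(C, dx0_tilde)
assert np.isclose(lhs, rhs)
```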
Maximizing |ΔJ| with respect to Δx0 becomes easy with the help of (23) and (25). Applying the Cauchy–Schwarz inequality to (23), we have
|⟨s, Δx̃0⟩_{C⁻¹}| ≤ ‖s‖_{C⁻¹} × ‖Δx̃0‖_{C⁻¹} = ρ ‖s‖_{C⁻¹}.  (26)
Therefore, |⟨s, Δx̃0⟩_{C⁻¹}| is maximized when Δx̃0 and s are collinear:
Δx̃0* = λs,  (27)
where λ is an undetermined coefficient, which can be deduced by plugging (27) into (25):
‖Δx̃0*‖_{C⁻¹} = √⟨Δx̃0*, Δx̃0*⟩_{C⁻¹} = |λ| × ‖s‖_{C⁻¹} = ρ ⟹ λ = ±ρ/‖s‖_{C⁻¹}.  (28)
Substituting (27) and (28) into (23), we obtain the maximum value of |ΔJ|:
|ΔJ|max = ρ std(ΔJ) ‖s‖_{C⁻¹},  (29)
at the following normalized analysis perturbations:
Δx̃0* = ±ρ s/‖s‖_{C⁻¹}.  (30)
If we choose ρ = ‖s‖_{C⁻¹}, i.e., considering all normalized analysis perturbations with the same norm as s, the maximum points are exactly ±s. In this case, the most sensitive analysis perturbations are
Δx0* = ±Σs.  (31)
In other words, among all analysis perturbations with the same magnitude, the change of the forecast response is maximized along the direction of the vector of standardized ensemble sensitivities. Thus, while each standardized ensemble sensitivity indicates the sensitivity of the forecast to an individual state variable, as a whole these sensitivities form a vector along which we see the maximum change of the forecast.
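The bound (26) and the maximizer (30) can be probed with a Monte Carlo sketch. The setup below is hypothetical (arbitrary A and gradient g, with a residual variance r2 added to var(ΔJ) so that ‖s‖_{C⁻¹} < 1): random normalized perturbations on the constraint surface never exceed the analytic maximum ρ std(ΔJ) ‖s‖_{C⁻¹}, which is attained at Δx̃0 = ±ρ s/‖s‖_{C⁻¹}.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
L = rng.standard_normal((n, n))
A = L @ L.T + n * np.eye(n)      # toy SPD analysis error covariance
g = rng.standard_normal(n)       # regression-based adjoint sensitivity
r2 = 0.5                         # residual variance of DJ not explained by Dx0

sigma = np.sqrt(np.diag(A))
Sigma = np.diag(sigma)
Cinv = np.linalg.inv(A / np.outer(sigma, sigma))

std_J = np.sqrt(g @ A @ g + r2)          # std(DJ), residual included
s = (A @ g) / (sigma * std_J)            # standardized ensemble sensitivities
norm_s = np.sqrt(s @ Cinv @ s)           # ||s||_{C^-1}

rho = 2.0
dJ_max = rho * std_J * norm_s            # analytic maximum, Eq. (29)

def abs_dJ(x_tilde):
    return abs(g @ (Sigma @ x_tilde))    # |DJ| for a normalized perturbation

best = 0.0
for _ in range(20000):                   # random points on ||x~||_{C^-1} = rho
    x = rng.standard_normal(n)
    x *= rho / np.sqrt(x @ Cinv @ x)
    best = max(best, abs_dJ(x))

x_opt = rho * s / norm_s                 # the maximizer from Eq. (30)
assert np.isclose(abs_dJ(x_opt), dJ_max) # the bound is attained
assert best <= dJ_max + 1e-9             # and never exceeded
```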

It is natural to continue with a question about the second most sensitive perturbation by taking the optimization problem further. Thus, we seek the analysis perturbation Δx0 that maximizes the norm of ΔJ subject to the two constraints ‖Δx0‖_{A⁻¹} = ρ and ⟨Σs, Δx0⟩_{A⁻¹} = 0. Compared with the previous optimization problem, an additional constraint has been introduced that limits Δx0 to the orthogonal complement of the first most sensitive perturbation Σs, where orthogonality is defined with respect to the inner product ⟨·, ·⟩_{A⁻¹}. Noticing (22) and (23), this constraint can also be rewritten as ⟨s, Δx̃0⟩_{C⁻¹} = 0. However, (23) has already shown that ΔJ = std(ΔJ)⟨s, Δx̃0⟩_{C⁻¹}. This entails that all vectors in the orthogonal complement of s yield ΔJ = 0, i.e., they have no impact on the forecast response. In other words, only vectors in the one-dimensional subspace spanned by Σs can effectively cause changes in the forecast metric.

This has an important implication for the question of which initial perturbations Δx0 can lead to a given change in the forecast metric. Supposing that ΔJ = α × std(ΔJ) for a given scalar α, we seek Δx0 in the form Δx0 = ΣΔx̃0 = Σ(βs + t), where t is a vector in the orthogonal complement of s and orthogonality is defined with respect to the inner product ⟨·, ·⟩_{C⁻¹}. It is easy to deduce that
α × std(ΔJ) = ΔJ = std(ΔJ) ⟨s, βs + t⟩_{C⁻¹} = β × std(ΔJ) ‖s‖²_{C⁻¹} ⟹ β = α/‖s‖²_{C⁻¹}.  (32)
To answer the same question, Torn and Hakim (2009) considered each scalar perturbation Δx0i as a function of ΔJ, and regressed Δx0i on ΔJ:
Δx0i = [cov(Δx0i, ΔJ)/var(ΔJ)] ΔJ = cor(Δx0i, ΔJ) × [std(Δx0i)/std(ΔJ)] ΔJ.  (33)
With ΔJ = α × std(ΔJ), we obtain Δx0i = α × cor(Δx0i, ΔJ) × std(Δx0i), which can be rewritten in vector form as Δx0 = αΣs. This differs from the solution derived above, Δx0 = (α/‖s‖²_{C⁻¹})Σs, by a factor of ‖s‖²_{C⁻¹}. How can we explain the difference between the two solutions?
Although Torn and Hakim (2009) wrote a simple linear regression for each state element Δx0i, they in effect performed a multivariate linear regression using the following model:
Δx0 = ΔJ w + ε,  (34)
where w is a vector containing the regression coefficients, and ε denotes a Gaussian model error. The regression coefficients do not depend on the model error covariance associated with ε and have the following form [see chapter 3 in Bishop (2006)]:
w = [1/var(ΔJ)] cov(Δx0, ΔJ) = [1/std(ΔJ)] Σs,  (35)
which gives regression coefficients identical to (33). We check the validity of the model (34) using the fact, proved in (32), that all initial perturbations Δx0 = (α/‖s‖²_{C⁻¹})Σs + Σt, where t is any vector in the orthogonal complement of s, cause the same change ΔJ = α × std(ΔJ). Taking the conditional expectation of these Δx0 given this ΔJ and noticing (34), we have
E[Δx0 | ΔJ = α × std(ΔJ)] = (α/‖s‖²_{C⁻¹}) Σs = α × std(ΔJ) w ⟹ w = [1/(std(ΔJ) ‖s‖²_{C⁻¹})] Σs.  (36)
This means that w in (34) cannot be treated as a free vector to be estimated from the ensemble dataset of Δx0 and ΔJ as in (35). Instead, w is fixed at the value given in (36). The model (34) should therefore be rewritten as follows:
Δx0 = ΔJ [1/(std(ΔJ) ‖s‖²_{C⁻¹})] Σs + Σt.  (37)
Then the correct solution Δx0 = (α/‖s‖²_{C⁻¹})Σs when ΔJ = α × std(ΔJ) follows naturally from this model.
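The one-parameter family (37) and the missing 1/‖s‖²_{C⁻¹} factor in the plain regression solution can both be checked numerically; again, the covariance, gradient, and residual variance below are arbitrary toy choices, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
L = rng.standard_normal((n, n))
A = L @ L.T + n * np.eye(n)      # toy SPD analysis error covariance
g = rng.standard_normal(n)       # regression-based adjoint sensitivity
r2 = 0.5                         # residual variance of DJ

sigma = np.sqrt(np.diag(A))
Sigma = np.diag(sigma)
Cinv = np.linalg.inv(A / np.outer(sigma, sigma))
std_J = np.sqrt(g @ A @ g + r2)
s = (A @ g) / (sigma * std_J)    # standardized ensemble sensitivities
ns2 = s @ Cinv @ s               # ||s||^2_{C^-1}

alpha = 1.5
target = alpha * std_J           # the prescribed change DJ = alpha * std(DJ)

for _ in range(100):
    v = rng.standard_normal(n)
    t = v - (s @ Cinv @ v) / ns2 * s                 # t orthogonal to s under <.,.>_{C^-1}
    dx0 = (alpha / ns2) * (Sigma @ s) + Sigma @ t    # a member of the family (37)
    assert np.isclose(g @ dx0, target)               # every member yields the same DJ

# The plain regression solution alpha * Sigma s misses the 1/||s||^2 factor:
assert np.isclose(g @ (alpha * Sigma @ s), target * ns2)
```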

6. Discussion and conclusions

In ensemble sensitivity analysis, adjoint sensitivity is estimated from the regression coefficients obtained by regressing changes in a forecast response on initial perturbations. Ancell and Hakim (2007) pointed out the existence of a new kind of sensitivity in the ensemble context that they called ensemble sensitivity, which has since proved to be a very useful sensitivity measure in practice. Despite this success, ensemble sensitivity leaves some confusion in its interpretation because its derivation involves an ad hoc operation that replaces the analysis error covariance A by a diagonal matrix D whose diagonal elements are taken from those of A. This corresponds to the so-called diagonal approximation, which assumes that all elements of the initial perturbation are uncorrelated, although the authors of ensemble sensitivity have challenged such a controversial interpretation by showing the multivariate nature of ensemble sensitivity. A few studies have tried to clear up this confusion by eliminating reliance on the diagonal approximation, leading to yet another measure of sensitivity called multivariate ensemble sensitivity, while leaving the old one with the name univariate ensemble sensitivity. Intriguingly, the two variants of ensemble sensitivity are found to give equivalent results when given an initial perturbation each element of which has the magnitude of one standard deviation. The purpose of this study is to establish a mathematically rigorous framework that resolves the following three questions:

  1. Why is univariate ensemble sensitivity so effective in practice despite its (apparent) reliance on the diagonal approximation?

  2. Why do multivariate and univariate ensemble sensitivity result in equivalent sensitivity estimates?

  3. What are potential applications of ensemble sensitivity in finding optimal initial conditions that affect the forecast metric?

The key to resolving the three questions above is to distinguish between two kinds of sensitivity: (i) the gradient [∂J/∂x0] of a forecast response J(xt) with respect to an initial perturbation Δx0, and (ii) a linear estimate of the change ΔJ in the forecast response given some typical values of an initial perturbation Δx0. Sensitivity in the sense of (i) provides the steepest-ascent direction of the forecast response and is suitable when we approach the sensitivity problem from a dynamical-systems perspective, where our primary concern is the structure of the most sensitive initial perturbations. This approach, however, is not suitable when we approach the sensitivity problem from a statistical perspective: the elements of the initial perturbation are correlated, so the initial perturbation Δx0 cannot take an arbitrary shape, a fact that is totally ignored by sensitivity (i). Comparison between sensitivities with respect to different meteorological variables is also inconvenient with sensitivity (i) because the gradient is dimensional, which means that sensitivities with respect to different variables have different physical dimensions and are therefore incomparable. Furthermore, and more importantly, sensitivity (i) may indicate high sensitivity for variables that have small standard deviations when, in fact, they are weakly correlated with the forecast response, and vice versa.

To overcome these deficits and inconveniences, we approach ensemble sensitivity in the sense of (ii) above, which accounts for contributions to ΔJ not only from the gradient of the forecast response but also from the initial perturbation Δx0. We have developed a mathematical framework to precisely define such a sensitivity measure, which we term standardized ensemble sensitivity, and showed that it leads to the formerly proposed ensemble sensitivity without making any ad hoc assumption like the diagonal approximation, making it a sound and appealing choice as a sensitivity measure. The mathematical results thus provide rigorous answers to the three questions raised above. First, the same change of the forecast response is obtained regardless of whether A is diagonal or not, which shows the equivalence of univariate and multivariate ensemble sensitivities. Second, the simple form of univariate ensemble sensitivity results from the fact that A⁻¹ in the regression coefficients cancels out A in the individual analysis perturbations; attributing this simple form to the diagonal approximation is a misinterpretation. Finally, as an interesting by-product, the theory reveals that the standardized sensitivity vector coincides with the most sensitive analysis perturbation in the sense of maximizing changes in the forecast response under the constraint of a fixed norm.

It is worth emphasizing that the theory identifies standardized ensemble sensitivity, rather than ensemble sensitivity, as the important and rigorous concept. This is because the vector of standardized ensemble sensitivities carries three important quantities at the same time: 1) the standardized changes of the forecast response under one-standard-deviation changes of individual state variables; 2) the correlations between the forecast response and individual state variables; and 3) the most sensitive analysis perturbation, which, for a quadratic response function, coincides with the leading ensemble singular vector. Together, these theoretical aspects provide a comprehensive picture of ensemble sensitivity.

The theoretical framework developed in this study clarifies the difference between the conventional, gradient-based sensitivity measure and the newly proposed impact-based sensitivity measure. While the former approaches the sensitivity problem from a dynamical systems perspective, the latter is better suited for assessment of sensitivity from a probabilistic perspective of predictability and causality. Future practitioners of ensemble sensitivity analysis are advised to clarify which aspect of sensitivity they intend to investigate before initiating analysis. We hope the theoretical elucidation given in this manuscript will be helpful in this regard.

Acknowledgments.

This work was supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) through the Program for Promoting Researches on the Supercomputer Fugaku JPMXP1020351142 “Large Ensemble Atmospheric and Environmental Prediction for Disaster Prevention and Mitigation” (hp200128, hp210166, hp220167), Foundation of River and basin Integrated Communications (FRICS), and JST Moonshot R&D project (Grant JPMJMS2281). Constructive comments from Dr. Brian Ancell have allowed us to significantly improve the manuscript by correctly reframing the discussion. Comments from two other anonymous reviewers are also acknowledged.

Data availability statement.

No datasets were generated or analyzed during the current study.

REFERENCES

  • Ancell, B. C., and G. J. Hakim, 2007: Comparing adjoint- and ensemble-sensitivity analysis with applications to observation targeting. Mon. Wea. Rev., 135, 4117–4134, https://doi.org/10.1175/2007MWR1904.1.
  • Ancell, B. C., and A. A. Coleman, 2022: New perspectives on ensemble sensitivity analysis with applications to a climatology of severe convection. Bull. Amer. Meteor. Soc., 103, E511–E530, https://doi.org/10.1175/BAMS-D-20-0321.1.
  • Bednarczyk, C. N., and B. C. Ancell, 2015: Ensemble sensitivity analysis applied to a southern plains convective event. Mon. Wea. Rev., 143, 230–249, https://doi.org/10.1175/MWR-D-13-00321.1.
  • Bishop, C. M., 2006: Pattern Recognition and Machine Learning. Springer, 738 pp.
  • Brown, B. R., and G. J. Hakim, 2015: Sensitivity of intensifying Atlantic hurricanes to vortex structure. Quart. J. Roy. Meteor. Soc., 141, 2538–2551, https://doi.org/10.1002/qj.2540.
  • Chang, E. K. M., M. Zheng, and K. Raeder, 2013: Medium-range ensemble sensitivity analysis of two extreme Pacific extratropical cyclones. Mon. Wea. Rev., 141, 211–231, https://doi.org/10.1175/MWR-D-11-00304.1.
  • Errico, R. M., and T. Vukicevic, 1992: Sensitivity analysis using an adjoint of the PSU–NCAR mesoscale model. Mon. Wea. Rev., 120, 1644–1660, https://doi.org/10.1175/1520-0493(1992)120<1644:SAUAAO>2.0.CO;2.
  • Garcies, L., and V. Homar, 2009: Ensemble sensitivities of the real atmosphere: Application to Mediterranean intense cyclones. Tellus, 61A, 394–406, https://doi.org/10.1111/j.1600-0870.2009.00392.x.
  • Hacker, J. P., and L. Lei, 2015: Multivariate ensemble sensitivity with localization. Mon. Wea. Rev., 143, 2013–2027, https://doi.org/10.1175/MWR-D-14-00309.1.
  • Hakim, G. J., and R. D. Torn, 2008: Ensemble synoptic analysis. Synoptic-Dynamic Meteorology and Weather Analysis and Forecasting: A Tribute to Fred Sanders, Meteor. Monogr., No. 33, Amer. Meteor. Soc., 147–162, https://doi.org/10.1175/0065-9401-33.55.147.
  • Hanley, K. E., D. J. Kirshbaum, N. M. Roberts, and G. Leoncini, 2013: Sensitivities of a squall line over central Europe in a convective-scale ensemble. Mon. Wea. Rev., 141, 112–133, https://doi.org/10.1175/MWR-D-12-00013.1.
  • Hill, A. J., C. C. Weiss, and B. C. Ancell, 2016: Ensemble sensitivity analysis for mesoscale forecasts of dryline convection initiation. Mon. Wea. Rev., 144, 4161–4182, https://doi.org/10.1175/MWR-D-15-0338.1.
  • Ito, K., and C.-C. Wu, 2013: Typhoon-position-oriented sensitivity analysis. Part I: Theory and verification. J. Atmos. Sci., 70, 2525–2546, https://doi.org/10.1175/JAS-D-12-0301.1.
  • Langland, R. H., R. L. Elsberry, and R. M. Errico, 1995: Evaluation of physical processes in an idealized extratropical cyclone using adjoint sensitivity. Quart. J. Roy. Meteor. Soc., 121, 1349–1386, https://doi.org/10.1002/qj.49712152608.
  • Limpert, G. L., and A. L. Houston, 2018: Ensemble sensitivity analysis for targeted observations of supercell thunderstorms. Mon. Wea. Rev., 146, 1705–1721, https://doi.org/10.1175/MWR-D-17-0029.1.
  • Ren, S., L. Lei, Z.-M. Tan, and Y. Zhang, 2019: Multivariate ensemble sensitivity analysis for Super Typhoon Haiyan (2013). Mon. Wea. Rev., 147, 3467–3480, https://doi.org/10.1175/MWR-D-19-0074.1.
  • Saltelli, A., M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, M. Saisana, and S. Tarantola, 2008: Global Sensitivity Analysis: The Primer. John Wiley and Sons, 304 pp.
  • Torn, R. D., 2010: Ensemble-based sensitivity analysis applied to African easterly waves. Wea. Forecasting, 25, 61–78, https://doi.org/10.1175/2009WAF2222255.1.
  • Torn, R. D., and G. J. Hakim, 2008: Ensemble-based sensitivity analysis. Mon. Wea. Rev., 136, 663–677, https://doi.org/10.1175/2007MWR2132.1.
  • Torn, R. D., and G. J. Hakim, 2009: Initial-condition sensitivity of western Pacific extratropical transitions determined using ensemble-based sensitivity analysis. Mon. Wea. Rev., 137, 3388–3406, https://doi.org/10.1175/2009MWR2879.1.
  • Yokota, S., H. Seko, M. Kunii, H. Yamauchi, and H. Niino, 2016: The tornadic supercell on the Kanto Plain on 6 May 2012: Polarimetric radar and surface data assimilation with EnKF and ensemble-based sensitivity analysis. Mon. Wea. Rev., 144, 3133–3157, https://doi.org/10.1175/MWR-D-15-0365.1.