Optimal Detection of Regional Trends Using Global Data

Stephen S. Leroy and James G. Anderson

Department of Earth and Planetary Sciences, Harvard School of Engineering and Applied Science, Harvard University, Cambridge, Massachusetts

Abstract

A complete accounting of model uncertainty in the optimal detection of climate signals requires normalization of the signals produced by climate models; however, there is not yet a well-defined rule for the normalization. This study seeks to discover such a rule. The authors find that, to arrive at the equations of optimal detection from a general application of Bayesian statistics to the problem of climate change, it is necessary to assume that 1) the prior probability density function (PDF) for climate change is separable into independent PDFs for sensitivity and the signals’ spatiotemporal patterns; 2) postfit residuals are due to internal variability and are normally distributed; 3) the prior PDF for sensitivity is uninformative; and 4) a continuum of climate models used to estimate model uncertainty gives a normally distributed PDF for the spatiotemporal patterns for the climate signals. This study also finds that the rule for normalization of the signals’ patterns is a simple division of model-simulated climate change in any observable quantity or set of quantities by a change in a single quantity of interest such as regionally averaged temperature or precipitation. With this normalization, optimal detection yields the most probable estimates of the underlying changes in the region of interest due to external forcings. Data outside the region of interest add information that effectively suppresses the interannual fluctuations associated with internal climate variability.

Corresponding author address: Stephen Leroy, Anderson Group, 12 Oxford St., Cambridge, MA 02138. Email: leroy@huarp.harvard.edu


1. Introduction

Optimal detection, otherwise called linear multipattern regression, has been the method of choice for attributing global change to specific causes, natural and anthropogenic. It is the preferred method for addressing problems of attribution of climate change in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). The method is described qualitatively in box 12.1 of the Third Assessment Report (Houghton et al. 2001) and has been presented rigorously elsewhere (Bell 1986; Hasselmann 1993; North et al. 1995; Hasselmann 1997). In the method, a spatiotemporal pattern for a climate signal is predicted by a climate model by subtracting a control run of the climate model from a run subjected to a forcing that was not included in the control. Care is taken to minimize the influence of natural variability in the signal pattern. Then the signal pattern—in both space and time—is multiplied by a scalar to fit a time series of data under the condition that the postfit residuals are consistent with the statistics of naturally occurring, interannual fluctuations of the climate system, as prescribed by a long control run of a climate model (Allen and Tett 1999). The final result is the multiplying scalar along with the uncertainty in its determination that, when taken together, yield a confidence level that a forced signal has been detected.

There are many contributors to uncertainty in optimal detection. Allen and Stott (2003) give a thorough presentation of these contributors. One of them arises from the dependence of the spatiotemporal patterns of externally forced signals on the climate model used to simulate them. The term is first presented in Eq. (2.10) of Bell (1986), is rigorously derived in Allen and Stott (2003), and was first applied in Huntingford et al. (2006). The uncertainty it introduces takes the form of an uncertainty covariance matrix in the spatiotemporal patterns of the signal, as generated by an ensemble of different runs of a climate model or multiple climate models. Intuitively, uncertainty because of the differences in signal pattern should not arise from models that produce the same pattern but with different overall amplitude; thus, the signals produced by different models must first be normalized before a signal pattern uncertainty covariance matrix is estimated. The derivations in Bell (1986) and Allen and Stott (2003) provide no prescription for the normalization of simulated signal patterns before their uncertainty covariance is computed, so Huntingford et al. (2006) implement a sensible but ultimately ad hoc normalization in their computations.

Because the results of optimal detection differ depending on the conditions of its application, and because optimal detection itself can be derived from the rules of conditional probability (Leroy 1998), the problem of normalization of signal patterns may be addressed by a rigorous application of Bayes’s theorem to the problem of optimal detection, using an ensemble of climate model runs to prescribe signal patterns. The same rigorous application may also ascribe physical meanings to the scalars used to multiply the signal patterns to obtain a best fit to data. In this paper, we present such a treatment for optimal detection, and along the way we point out the implicit assumptions that must be made to obtain the equations of optimal detection from a more general application of Bayes’s theorem to the problem of climate signal detection.

The second section of this paper derives the equations of optimal detection for an ensemble of climate model runs, using the rules of conditional probability to determine the appropriate method of normalizing the signal patterns produced by different model runs. The findings will lead to a more general interpretation of optimal detection than simple attribution to external influences. One additional interpretation is that global datasets can be used to find the underlying climate trends of noisy regional time series. The third section presents two illustrative applications of optimal detection to regional trends, made feasible by two distinct normalizations of the signal patterns as produced by an ensemble of climate model runs. The results will demonstrate the nontriviality of the issue of signal pattern normalization. Finally, the fourth section contains a summary of the preceding sections and a discussion of the implications of this theoretical work.

2. Normalizing signal patterns

The equations of optimal detection, or linear multipattern regression, can be derived from Bayes’s theorem given specific assumptions. The formulation of the uncertainty in the posterior of optimal detection due to uncertainty in the signal pattern given by Bell (1986) can also be derived from Bayes’s theorem given further assumptions. First, the filtering equation of optimal detection of a linear trend is
Δαmp = 𝗙^T Δd.   (1)
The matrix 𝗙 contains the optimal fingerprints, with each column associated with a single signal. The vector Δd is a measurement of climate change in a dataset. It can take the form of the average of one period of time subtracted from the average of a later period of time, or it can take the form of a trend in time computed by linear regression of a time series of data. The coefficients Δα are the multipliers of the signals thought to emerge in the data as a response of climate to external forcing otherwise obscured by natural variability. A continuum of values of the coefficients is possible; Δαmp denotes the “most probable” values.
Optimal detection is based on a simple linear model of the data,
Δd = 𝗦 Δα + dn,   (2)
the signal patterns being the columns of 𝗦 and the linear coefficients the elements of Δα. The postfit residual is dn, a single realization of a random process described by a Gaussian distribution with covariance Σn. There are no demands on the dimensionality of the data vector or on the data types; in fact, it is possible to mix data types in the construction of d. The optimal fingerprints are
𝗙 = Σn^−1 𝗦 (𝗦^T Σn^−1 𝗦)^−1,   (3)
and the posterior uncertainty covariance is
Σα = (𝗦^T Σn^−1 𝗦)^−1.   (4)
In the case of models being perfect prescribers of signal patterns, the residuals dn are simply the fluctuations of internal “natural” variability. These are the equations of optimal detection. See Allen and Stott (2003) for an in-depth explanation and exploration of optimal detection.
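As a concrete illustration of Eqs. (1)–(4), the following minimal sketch (in Python with NumPy) filters a data vector with the optimal fingerprints. The array names, shapes, and toy numbers are illustrative assumptions, not quantities from this paper.

```python
# Minimal sketch of Eqs. (1)-(4): optimal detection with a perfectly known
# signal matrix S and internal-variability covariance Sigma_n.
import numpy as np

def optimal_detection(delta_d, S, Sigma_n):
    """Return the most probable amplitudes (Eq. 1), the optimal
    fingerprints (Eq. 3), and the posterior covariance (Eq. 4)."""
    Sigma_n_inv = np.linalg.inv(Sigma_n)
    # Eq. (4): posterior covariance of the amplitudes.
    Sigma_alpha = np.linalg.inv(S.T @ Sigma_n_inv @ S)
    # Eq. (3): optimal fingerprints, one column per signal.
    F = Sigma_n_inv @ S @ Sigma_alpha
    # Eq. (1): most probable amplitudes obtained by filtering the data.
    delta_alpha_mp = F.T @ delta_d
    return delta_alpha_mp, F, Sigma_alpha

# Toy usage: 100 grid points, 2 signals, white internal variability.
rng = np.random.default_rng(0)
S = rng.normal(size=(100, 2))
Sigma_n = 0.25 * np.eye(100)
delta_d = S @ np.array([0.8, 0.3]) + rng.multivariate_normal(np.zeros(100), Sigma_n)
alpha_mp, F, Sigma_alpha = optimal_detection(delta_d, S, Sigma_n)
```
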
The form of Bayes’s theorem relevant to deriving the equations of optimal detection is
P(Δg | Δd, Mμ) = P(Δd | Δg) P(Δg | Mμ) / P(Δd | Mμ).   (5)
It states that the probability density function for change Δg in an observation vector due solely to a specific external forcing is directly proportional to the product of the Bayesian “likelihood” function and the Bayesian prior function. In this case, the likelihood function is the first term in the numerator of Eq. (5) and is the probability density function (PDF) of obtaining data Δd if the true climate change is Δg. The prior is the second term in the numerator of Eq. (5) and is the PDF for obtaining climate change Δg using model Mμ. See Sivia (2006) for a tutorial on Bayesian data analysis. Model Mμ can be used to determine the secular trend of climate given a specified external forcing. Implicit in this approach is that a secular trend or perturbation from a climate equilibrium state exists in response to an external forcing, distinct from other fluctuations of a climate system in equilibrium, and is not directly observable. This is arguably the central assumption of climate change research. The internal fluctuations on interannual time scales obscure the underlying trends on decadal time scales, and so it is advantageous to suppress the internal variability when evaluating the numerator of Eq. (5). When diagnosing climate models, this is best done by averaging together multiple forced runs of the same climate model.
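Where the forced response of a single model is needed, the averaging step just described might look like the following sketch; the array names and layout are illustrative assumptions.

```python
# Sketch of the step described above: estimate a model's forced signal while
# suppressing internal variability by averaging several forced runs of the
# same model and removing the control-run climatology.
import numpy as np

def forced_signal(forced_runs, control_run):
    """forced_runs: (n_runs, n_years, n_gridpoints) forced simulations;
    control_run:   (n_years_ctl, n_gridpoints) unforced control run.
    Returns an (n_years, n_gridpoints) estimate of the forced change."""
    ensemble_mean = forced_runs.mean(axis=0)   # averaging damps internal variability
    climatology = control_run.mean(axis=0)     # equilibrium state from the control
    return ensemble_mean - climatology
```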

The following subsections state the assumptions needed to obtain the equations of optimal detection from Bayesian inference (sections 2a–2d) and then state how normalization of signals is accomplished (section 2e).

a. Assumption of a separable prior

To arrive at the equations for optimal detection, climate change is written as a linear combination of k components of climate change, k ≥ 1, and those changes are linear in well-defined scalars αi:
Δg = Σi (∂g/∂αi) Δαi = (∂g/∂α) Δα.   (6)
The components of climate change can be linear trends associated with anthropogenic climate change, the sinusoidal variability resulting from the solar cycle, or any other signal that represents a change from a long-term average. Without loss of generality, Eq. (5) becomes
P(∂g/∂α, Δα | Δd, Mμ) = P(Δd | ∂g/∂α, Δα) P(∂g/∂α, Δα | Mμ) / P(Δd | Mμ).   (7)
The ith column of the Jacobian ∂g/∂α is ∂g/∂αi. The first assumption of optimal detection is that the prior is separable; that is, the prior in Eq. (7) becomes
P(∂g/∂α, Δα | Mμ) = P(∂g/∂α | Mμ) P(Δα).   (8)
To arrive at the equations of optimal detection, one must assume that the spatiotemporal patterns with which climate signals emerge are independent of the sensitivity of climate models.
The components of climate change ∂g/∂α are precisely known properties of the climate model Mμ. Thus, the contingency of ∂g/∂α upon model Mμ implies that the corresponding prior function—the first term on the right of Eq. (8)—is singular at ∂g/∂α = (∂g/∂α)Mμ. Assuming the prior is separable, the Bayesian formulation for climate change detection and attribution becomes
P(∂g/∂α, Δα | Δd, Mμ) = P(Δd | ∂g/∂α, Δα) δ[∂g/∂α − (∂g/∂α)Mμ] P(Δα) / P(Δd | Mμ).   (9)
The δ[···] is a Dirac delta function. That the Bayesian prior is separable is the first assumption.

b. Assumption of normally distributed residuals

The second assumption is that the only departures of the data from the underlying trend of climate change are those of internal variability of the climate and that those departures are normally distributed: dn ~ N(0, Σn). Read this notation as follows: the random fluctuations of natural variability dn have a Gaussian distribution with zero mean (〈dn〉 = 0) and covariance 〈dn dnT〉 = Σn. Applying the model for the data, Eq. (2), the likelihood function is also normally distributed:
P(Δd | 𝗦, Δα) ∝ exp[−(1/2)(Δd − 𝗦Δα)^T Σn^−1 (Δd − 𝗦Δα)].   (10)
For the sake of brevity, define 𝗦 ≡ (∂g/∂α)Mμ with si = (∂g/∂αi)Mμ the ith column of matrix 𝗦. The assumption of normally distributed residuals is explicit elsewhere in the literature on optimal detection and is the second assumption here.

c. Assumption of uninformed sensitivity

The third assumption is that nothing is known about the sensitivity—transient, equilibrium, or otherwise—of the climate without data. The prior PDF for the signal strengths Δα is an uninformative or flat one:
P(Δα) = constant.   (11)
All intuition concerning possible climate change is abandoned before analyzing the data. These three assumptions are sufficient to derive the equations of optimal detection. To do so, insert Eqs. (10) and (11) into Eq. (8) and find the maximum and the width of the posterior distribution function in Δα. The solutions are Eqs. (1), (3), and (4). We have not yet accounted for uncertainty in signal patterns.

At this stage, under the condition of just one contingent model Mμ, optimal detection only depends on how one separates the climate change signal into multiple components and not on how one defines the scalars αi. For the problem of detecting anthropogenic warming in global surface air temperature, for instance, the climate signal can be separated into a component due to anthropogenic forcing, another due to volcanic aerosol, another due to sulfate aerosol, and another due to the solar cycle. The scalar assigned to each can be nondimensional, indicative of the global average surface air temperature change or some other quantity. Application of the equations of optimal detection to obtain a most probable Δα and its error covariance Σα will differ because of different definitions of α, but the confidence levels of detection will remain the same. This can be checked simply by rescaling each αi by a factor qi (α′i = qiαi), substituting α′i for αi in the equations of optimal detection, and computing confidence levels of detection.
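That invariance can be verified numerically. The sketch below uses arbitrary toy numbers (none from the paper): it rescales the amplitudes, adjusts the signal columns accordingly, and confirms that the amplitude-to-uncertainty ratios, and hence the detection confidence, do not change.

```python
# Numeric check of the scaling argument above (a sketch with made-up numbers):
# rescaling alpha_i by q_i rescales the signal columns by 1/q_i, changes
# Delta-alpha and Sigma_alpha, but leaves the amplitude divided by its
# standard error unchanged.
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 2
S = rng.normal(size=(n, k))
Sigma_n = np.eye(n)
d = S @ np.array([1.0, 0.5]) + rng.normal(size=n)

def fit(S, Sigma_n, d):
    Sigma_alpha = np.linalg.inv(S.T @ np.linalg.solve(Sigma_n, S))
    alpha = Sigma_alpha @ S.T @ np.linalg.solve(Sigma_n, d)
    return alpha, Sigma_alpha

q = np.array([3.0, 0.2])            # arbitrary rescaling alpha_i' = q_i * alpha_i
alpha1, C1 = fit(S, Sigma_n, d)     # original definition of alpha
alpha2, C2 = fit(S / q, Sigma_n, d) # columns rescaled so S' alpha' = S alpha
z1 = alpha1 / np.sqrt(np.diag(C1))
z2 = alpha2 / np.sqrt(np.diag(C2))
assert np.allclose(z1, z2)          # confidence levels unchanged
```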

d. Assumption of a continuum of models

In the context of an ensemble of climate models ℳ, the posterior for the scalar changes Δα is the weighted sum of the posterior distribution function for each model,
P(Δα | Δd, ℳ) ∝ Σμ P(Δα | Δd, Mμ) P(Δd, Mμ),   (12)
where the ensemble of climate models ℳ is composed of independent climate models Mμ. The weights P(Δd, Mμ) in this equation are new; each is the joint probability of the data and the model Mμ. Some models’ prescriptions of signal pattern will be more consistent with the data than others, and those models will be preferentially weighted. By the law of multiplication for conditional probabilities, this weight is related to the denominator on the right of Eq. (7):
P(Δd, Mμ) = P(Δd | Mμ) P(Mμ).   (13)
Nothing is assumed about the relative quality of the models, nor is any model given an advantage before data analysis begins, so a flat prior for models, P(Mμ) = 1, is appropriate. Putting together Eqs. (9), (11), (12), and (13) yields the conditional probability for underlying trends of the climate given data and an ensemble of models:
P(Δα | Δd, ℳ) ∝ Σμ P(Δd | 𝗦μ, Δα).   (14)
It is the sum of all the posteriors described by Eq. (9) for all of the models in ensemble ℳ.
The final assumption is that there is a continuum of models that can be used to describe the PDF of signal patterns, and that the PDF for 𝗦 is Gaussian. The PDF has mean 𝗦̄ and error covariance Σ𝗦:
P(𝗦 | ℳ) ∝ exp[−(1/2)(𝗦 − 𝗦̄)^T Σ𝗦^−1 (𝗦 − 𝗦̄)].   (15)
Consequently, the summation in Eq. (14) becomes
P(Δα | Δd, ℳ) ∝ ∫ P(Δd | 𝗦, Δα) P(𝗦 | ℳ) d𝗦.   (16)
In this PDF, all of the elements in the space of 𝗦 are considered. Executing this convolution using the PDF for the data given by Eq. (10) yields a posterior PDF for Δα with the most probable values given by
Δαmp = 𝗙mp^T Δd,   (17a)
where
𝗙mp = Σr^−1 𝗦̄ (𝗦̄^T Σr^−1 𝗦̄)^−1,   (17b)
Σr = Σn + Σs,   (17c)
Σs = [(δ𝗦 Δα)(δ𝗦 Δα)^T].   (17d)
The square brackets […] in Eq. (17d) indicate an ensemble average over ℳ, and δ𝗦 is the departure of the fingerprints 𝗦μ for a given model Mμ from the intermodel mean 𝗦̄. The expression for Σs in Eq. (17d) can be derived from Eq. (16) using a nontrivial coordinate transformation, which calls for the computation of the marginal PDF of the subspace of Σ𝗦 described by 𝗦Δα. The rest of the space of Σ𝗦 can be treated as “nuisance” parameters and is easily integrated over, as in Sivia (2006).

Equations (17a)–(17d) are the equations of optimal detection, with an accounting for uncertainty in the spatiotemporal patterns of climate signals derived in the context of an ensemble of climate models using Bayesian formalism. For comparison, see Eq. (2.10) in Bell (1986), section 3 of Allen and Stott (2003), and Eqs. (1)–(3) of Huntingford et al. (2006), which precisely give Eq. (17c).
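A minimal sketch of Eqs. (17a)–(17d) for a single signal is given below. It assumes an ensemble of already-normalized fingerprints (one per model), a fixed scaling Δα for the pattern-uncertainty term of Eq. (17d) that could be iterated in practice, and an invertible Σr; all names and shapes are illustrative.

```python
# Sketch of Eqs. (17a)-(17d) for one signal: the ensemble of normalized
# fingerprints supplies both the mean fingerprint and the covariance Sigma_s
# that augments internal variability.
import numpy as np

def detect_with_pattern_uncertainty(delta_d, s_models, Sigma_n, delta_alpha=1.0):
    """s_models: (n_models, n_data) normalized fingerprints, one row per model."""
    s_bar = s_models.mean(axis=0)                            # intermodel mean fingerprint
    ds = s_models - s_bar                                    # departures delta-S from the mean
    Sigma_s = delta_alpha**2 * (ds.T @ ds) / len(s_models)   # Eq. (17d), ensemble average
    Sigma_r = Sigma_n + Sigma_s                              # Eq. (17c)
    w = np.linalg.solve(Sigma_r, s_bar)                      # Sigma_r^-1 s_bar (Sigma_r assumed invertible)
    f_mp = w / (s_bar @ w)                                   # Eq. (17b) for a single signal
    return f_mp @ delta_d, f_mp                              # Eq. (17a) and the optimal fingerprint
```

If Σr is singular or nearly so, the truncated eigenvector construction used in section 3 replaces the direct solve.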

e. Normalization of signal shapes

It is now possible to define how to normalize signal patterns in the process of accounting for model uncertainty. The normalization is defined according to Eq. (6). The underlying climate change for any time series of data can be written as a linear combination of multiple signals, each of which can be normalized by arbitrary and completely general scalars that are strongly related to the existence of climate trends. The obvious application is to regional detection and attribution. For example, the normalization scalars Δα can be defined as regional surface air temperature changes that result from different external forcings. This places no demand on the data vector d, and so the data field is completely general. For example, the data field can be anything from in situ temperature data to calibrated radiances obtained remotely. It may include the region of interest. Optimal detection with an accounting for model uncertainty turns out to be a method to extract information from arbitrary and mixed-type datasets to infer the underlying climate trends for any noisy quantity in the climate system.
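For instance, the normalization just described can be carried out model by model as in the following sketch; the variable names, the array layout, and the use of a simple rather than area-weighted regional mean are all illustrative assumptions.

```python
# Sketch of the normalization rule: each model's simulated change in the full
# data field is divided by that model's simulated change in the single
# quantity of interest (here a regional-mean temperature trend).
import numpy as np

def normalize_signals(dg_dt_models, region_mask):
    """dg_dt_models: (n_models, n_gridpoints) simulated trends in the data field.
    region_mask:     (n_gridpoints,) boolean mask of the target region.
    Returns an (n_models, n_gridpoints) array of normalized fingerprints."""
    fingerprints = []
    for dg_dt in dg_dt_models:
        d_alpha_dt = dg_dt[region_mask].mean()   # the model's own regional trend
        fingerprints.append(dg_dt / d_alpha_dt)  # pattern per unit regional change, cf. Eq. (6)
    return np.array(fingerprints)
```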

Hereafter, we call a normalized signal pattern a climate fingerprint. That there is a PDF of fingerprints with finite width, as defined by a continuum of models, is a consequence of the uncertain physics in climate models, which leads to uncertain connections between a climate response (free of natural variability) and observable data types. The more certain the physical relationships are between a climate trend in a particular variable and an observed data field, the smaller the width of the PDF in fingerprints will be, and optimal detection will consider internal variability as the dominant source of residuals. Likewise, the less certain the physical relationships are between a climate trend in a particular variable and an observed data field, the greater the width of the PDF in fingerprints will be, and optimal detection will consider fingerprint uncertainty as the dominant source of residuals.

We call the columns of the matrix 𝗦 the climate fingerprints and the columns of the matrix 𝗙mp the “optimal fingerprints.” The optimal fingerprints can be thought of as linear spatiotemporal filters trained by an ensemble of climate models to infer underlying climate trends. They are the contravariant vectors of the climate fingerprints because 𝗙mp^T 𝗦 = 𝗜, the identity matrix 𝗜 having the number of signals to be sought as its rank.

3. Illustrative examples

Here, Eqs. (17a)–(17d) are applied to the same data field but with two different target regions to illustrate how the method described in the preceding section works. We use maps of Northern Hemisphere surface air temperature trends to infer the trend associated with anthropogenic forcing in the regions of the central United States and Northern Europe.

We use the model output of the World Climate Research Programme’s (WCRP) Coupled Model Intercomparison Project, phase 3 (CMIP3) multimodel dataset. It provides 55 realizations of transient climate response to the Special Report on Emissions Scenarios (SRES) A1B forcing from 24 different climate models. With just a single external forcing, the signal is a single vector, as is the optimal fingerprint (𝗙mp = fmp). It also provides multiple preindustrial control runs. We define a scalar change in the form of a climate trend:
dTmp,region/dt = fmp^T (dd/dt).   (18)
The optimal fingerprint fmp is determined using the outputs of all the CMIP3 models but one, and the output of that one is used as a stand-in for data to test the analysis method. This method is sometimes referred to as the perfect model test. The lone element of the scalar change Δα is effectively the regional average temperature trend dTmp,region/dt, and the data change Δd is the long-term trend in the data field dd/dt. The major operation of Eq. (18) is linear. Because the trend in the data field dd/dt is the time derivative of a time series of data d(t), the underlying trend in regional surface air temperature can be written also as a time series Tmp,region(t):
dTmp,region(t)/dt = fmp^T dd(t)/dt = d[fmp^T d(t)]/dt.   (19)
Thus, it is appropriate to define Tmp,region(t) ≡ fmp^T d(t), true to within an additive constant. This is not the same as the time series of regional average surface air temperature Tregion(t), but it is the inferred most probable estimate of the regional surface air temperature trend associated with SRES A1B forcing; that is, the climate response in Tregion without natural variability.
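Operationally, this definition amounts to projecting each year of the data field onto the optimal fingerprint; a minimal sketch with illustrative names follows.

```python
# Sketch of the definition above: applying the optimal fingerprint to each
# year of the data field yields the filtered series T_mp,region(t), known
# only up to an additive constant.
import numpy as np

def filtered_regional_series(d_t, f_mp):
    """d_t:  (n_years, n_gridpoints) time series of the data field.
    f_mp: (n_gridpoints,) optimal fingerprint.
    Returns the (n_years,) series T_mp,region(t)."""
    return d_t @ f_mp
```
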
For the central United States, we invent a scenario of a 10-yr time series of data. The fingerprint s for each model is computed by dividing the 40-yr trend in Northern Hemisphere surface air temperature by the 40-yr trend in the regionally restricted central U.S. surface air temperature [cf. Eq. (6)]. We use 40-yr trends instead of 10-yr trends to reduce the error in s due to the interannual variability internal to each climate model. A mean s̄ is computed by averaging together the s over the ensemble of CMIP3 models ℳ. The covariance of the fingerprints, Σs, is subsequently computed according to Eq. (17d). Internal variability is computed from a long preindustrial control run of a climate model taken from CMIP3 and represents the range of trends in dd/dt that can be realized by internal variability without any anthropogenic climate forcing over 10 yr. If the year-to-year internal variability of Northern Hemisphere annual average temperature is Σn, then the internal variability of a 10-yr trend in Northern Hemisphere annual average temperature Σdn/dt is related to Σn approximately by
Σdn/dt ≈ (12 τvar / Δt^3) Σn,  with Δt = 10 yr,   (20)
taken from Eqs. (6)–(10) of Leroy et al. (2008b). The time constant τvar is the persistence time of the major modes of variability of Northern Hemisphere surface air temperature. We approximate it as 1.4 yr, generally reflecting the variability associated with the El Niño–Southern Oscillation (ENSO). In our examples, Σr = Σs + Σdn/dt in Eq. (17c).
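As an alternative to the approximate scaling of Eq. (20), the covariance of 10-yr trends due to internal variability can be estimated directly from a control run by fitting trends to successive 10-yr segments. The sketch below assumes an annual-mean control-run array of illustrative layout.

```python
# Direct estimate of the covariance of 10-yr trends arising from internal
# variability: fit a linear trend to every non-overlapping 10-yr segment of
# a long control run and take the covariance of those trends.
import numpy as np

def trend_covariance_from_control(control, nyears=10):
    """control: (n_years, n_gridpoints) annual means from a preindustrial run.
    Returns the (n_gridpoints, n_gridpoints) covariance of nyears-long trends."""
    t = np.arange(nyears, dtype=float)
    n_seg = control.shape[0] // nyears
    trends = []
    for i in range(n_seg):
        seg = control[i * nyears:(i + 1) * nyears]   # (nyears, n_gridpoints)
        slope = np.polyfit(t, seg, 1)[0]             # trend at every grid point
        trends.append(slope)
    trends = np.array(trends)                        # (n_seg, n_gridpoints)
    return np.cov(trends, rowvar=False)
```

With only a few centuries of control run, the number of segments is far smaller than the number of grid points, so an estimate of Σdn/dt obtained this way is rank deficient; this is one reason the singularity of Σr, discussed below, requires a truncation.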

The diagonal elements of Σr take the form of expected squared postfit residuals. Figure 1 shows the diagonal elements of Σs and Σdn/dt for finding underlying trends of climate change in central U.S. surface air temperature using 10 yr of Northern Hemisphere surface air temperature data. Internal variability is greatest in the Arctic, with a noticeable contribution from the Pacific Ocean due to ENSO variability. Fingerprint uncertainty is also largest in the Arctic, conveying the general lack of utility of Arctic temperature trends in providing information on lower-latitude temperature trends. Throughout, the contribution of internal variability to Σr far outweighs the contribution of signal pattern uncertainty.

The singularity of the matrix Σr requires special care. The equation for the optimal fingerprint fmp can be written in the following inner product form:
fmp = ( Σν λν^−1 ⟨s̄, eν⟩ eν ) / ( Σν λν^−1 ⟨s̄, eν⟩² ),   (21)
where eν and λν are the νth eigenvector and eigenvalue of matrix Σr, and 〈… , …〉 is an inner product defined by the same rule used to compute the eigenvectors and eigenvalues, 〈Σr, eν〉 = λνeν. The summations must be truncated at some finite m. The relationship between the fingerprints and optimal fingerprints becomes 〈fmp, s̄〉 = 1.
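A sketch of this truncated construction follows; it uses the ordinary dot product as the inner product, and the truncation level m is an assumption.

```python
# Sketch of the truncated eigenvector expansion for the optimal fingerprint:
# keep only the leading m eigenvectors of Sigma_r so that its (near-)singular
# directions are excluded.
import numpy as np

def optimal_fingerprint_truncated(s_bar, Sigma_r, m):
    lam, E = np.linalg.eigh(Sigma_r)          # eigenvalues in ascending order
    lam, E = lam[::-1], E[:, ::-1]            # reorder: leading modes first
    lam_m, E_m = lam[:m], E[:, :m]
    proj = E_m.T @ s_bar                      # <s_bar, e_nu> for nu = 1..m
    numer = E_m @ (proj / lam_m)              # sum_nu lam_nu^-1 <s_bar, e_nu> e_nu
    denom = np.sum(proj**2 / lam_m)           # sum_nu lam_nu^-1 <s_bar, e_nu>^2
    f_mp = numer / denom
    assert np.isclose(f_mp @ s_bar, 1.0)      # <f_mp, s_bar> = 1 by construction
    return f_mp

def averaged_optimal_fingerprint(s_bar, Sigma_r, m_lo=20, m_hi=40):
    """Average the fingerprints over a range of truncations, as done for
    Figs. 2 and 3 (the range is taken from the text; the averaging itself
    is a simple mean)."""
    fps = [optimal_fingerprint_truncated(s_bar, Sigma_r, m)
           for m in range(m_lo, m_hi + 1)]
    return np.mean(fps, axis=0)
```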

Figure 2 contains plots of the optimal fingerprints fmp for the cases of the central United States and Northern Europe. Each is the optimal fingerprint—the set of coefficients used to multiply a decadal-time-scale trend in the field of Northern Hemisphere surface air temperature—to obtain a most probable estimate for a regional average surface air temperature trend associated with SRES A1B forcing. In the case of the central United States, the optimal fingerprint heavily weights toward the central United States itself, which is expected when the historical regional trend contains strong information on the underlying trend associated with climate change. The component of the fingerprint external to the central United States then contains information that reduces the “noise” of internal variability in the central United States. In the case of Northern Europe, the optimal fingerprint has little weight in Northern Europe. Rather, the optimal fingerprint for Northern Europe heavily weights toward temperature trends in the Arctic, unlike the case of the central United States. The optimal fingerprints for both the central United States and Northern Europe, though, are positive throughout most of the Northern Hemisphere, indicating that both regions can be expected to have positive (negative) trends when the Northern Hemisphere also has a positive (negative) trend. Most importantly, the optimal fingerprints differ greatly, depending on how they are normalized, when accounting for fingerprint uncertainty in optimal detection.

Figure 3 contains plots of the actual regional average surface air temperature, Tregion(t), and the most probable inference of the component associated with a long-term trend due to climate forcing SRES A1B, Tmp,region(t). A simulated truth dataset is the first 10 yr of output of a CMIP3 model subjected to SRES A1B forcing and not included in the formulation of the optimal fingerprint fmp. The most probable “climate” trend is determined by linear regression of Tmp,region(t) over the first 10 yr of the forced model run. The uncertainty in the trend is the uncertainty determined by standard linear regression error analysis in Tmp,region(t) (von Storch and Zwiers 1999). Using data outside the region of interest suppresses interannual internal variability by a factor of 10 in the central United States and by a factor of 7 in Northern Europe.

To demonstrate the near-term climate-forecasting capability of this method, Fig. 3 also shows the evolution of the area-averaged surface air temperature for years 10–20, taken from the same model run that produced the first 10 yr of data. The 10-yr climate prediction is simply an extrapolation of the linear regression of Tmp,region(t), with a 1-standard deviation error envelope. With 10 yr of data, regional average temperature for the central United States and Northern Europe can be projected with an uncertainty of 0.1 K 10 yr into the future. To compute a PDF for a future prediction, one must convolve the PDF for the climate projection with a PDF describing interannual internal variability for that region.
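The regression-and-extrapolation step can be sketched as below: ordinary least squares on a placeholder series, with the 1-standard-deviation envelope from standard error propagation. The series, names, and numbers are invented for illustration.

```python
# Sketch of the trend fit and 10-yr extrapolation with a 1-sigma envelope.
import numpy as np

def trend_and_projection(T_mp, n_future=10):
    n = len(T_mp)
    t = np.arange(n, dtype=float)
    A = np.column_stack([t, np.ones(n)])                 # design matrix [t, 1]
    coef, *_ = np.linalg.lstsq(A, T_mp, rcond=None)      # (slope, intercept)
    resid = T_mp - A @ coef
    sigma2 = resid @ resid / (n - 2)                     # residual variance
    C = sigma2 * np.linalg.inv(A.T @ A)                  # covariance of (slope, intercept)
    t_all = np.arange(n + n_future, dtype=float)
    A_all = np.column_stack([t_all, np.ones_like(t_all)])
    fit = A_all @ coef                                   # central line, past and future
    sigma_fit = np.sqrt(np.einsum('ij,jk,ik->i', A_all, C, A_all))
    return fit, fit - sigma_fit, fit + sigma_fit         # line and 1-sigma envelope

# Toy usage on an invented filtered series.
T_mp = 0.03 * np.arange(10) + 0.05 * np.random.default_rng(2).normal(size=10)
line, lo, hi = trend_and_projection(T_mp)
```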

For Figs. 2 and 3 we computed 21 optimal fingerprints, using 21 different truncations, m = 20 through m = 40, and averaged the resulting fingerprints together. The slope of the Tmp,region(t) time series is almost completely independent of the truncations m = 20 through m = 40. After averaging optimal fingerprints together, only the most salient features of the optimal fingerprints remain. Features that are averaged out are small in spatial scale and associated with higher-order eigenvectors of Σr. Those eigenvectors have small eigenvalues λν, and so they explain little interannual variability in the time series Tmp,region(t).

4. Summary

In optimal fingerprinting, it is necessary to account for uncertainty in the spatiotemporal pattern of evolution of externally forced climate signals, but no rule has been given for normalizing them prior to deducing their uncertainty. Normalization is required so that models are not penalized for differences in sensitivity. Because optimal detection can be formulated using Bayesian statistics, and because an ensemble of models hints at multiple levels of inference, we use Bayesian inference to find a rigorous rule for signal normalization. In the process, we determine that four assumptions are necessary to arrive at the standard equations of optimal fingerprinting that account for signal pattern uncertainty. They are as follows:

  • (i) The prior in signal pattern and sensitivity is separable [cf. Eq. (8)].

  • (ii) Postfit residuals are due to internal variability and are normally distributed [cf. Eq. (10)]. This assumption has been made explicit elsewhere.

  • (iii) The prior in sensitivity is uninformative [cf. Eq. (11)].

  • (iv) A continuum of models produces a normal distribution for signal patterns [cf. Eq. (14)].

Moreover, having set out in search of a rigorous normalization of signals when accounting for uncertainty in signal pattern, we encounter the apparent paradox that the normalization is nonunique. The paradox is resolved by first recognizing that Δα is not uniquely defined. When uncertainty in the signal pattern is disregarded, the choice of Δα is irrelevant in optimal detection. When uncertainty in the signal pattern is considered, however, it is first necessary to define Δα according to a specific interest. There are an infinite number of possible definitions of Δα. Once the choice for Δα is made, normalization of signal patterns becomes unique. According to Eqs. (6) and (17d), one must normalize modeled trends of data Δg by modeled trends in Δα. Qualitatively, signal pattern uncertainty is best understood as a quantification of uncertain model physics: depending on what climate response is being sought, the uncertain physics of a climate model may be more or less relevant to the outcome.

In two illustrative examples, we have shown that when one normalizes the trend in the Northern Hemisphere surface air temperature field by a regional surface air temperature trend, the optimal detection result will be the most probable estimate of the underlying climate trend in that region associated with a particular external forcing. Information gained from the Northern Hemisphere outside the region of interest serves to dramatically reduce the fluctuations of internal variability within the region of interest.

The illustrative examples point toward regional climate signal inference and near-term climate forecasting as clear applications for this method of analysis. Of special note is the work of Kharin and Zwiers (2002), which aims at a methodology for attributing regional trends to specific external forcings. Their equations are also those of optimal detection, but the philosophical underpinnings of the approach require that it be restricted to data within the region of interest. Our work does address Kharin and Zwiers (2002) in showing how to normalize fingerprints when accounting for model uncertainty, but our work goes beyond the task of attribution. It explains how to handle arbitrary datasets that can extend well beyond the region of interest to arrive at a most probable inference of the underlying climate trends in a particular region.

The method presented here has already been applied to demonstrate how a time series of infrared spectra can be used to place strong constraints on long-wave feedbacks in the tropics (Leroy et al. 2008a). Although the third assumption most likely only inhibits the precision of the results of optimal detection, the invalidity of any of the other assumptions for a particular problem will probably lead to a breakdown of the method. For example, the method seems well suited to regional surface air temperature trends (as in the illustrative examples), but it may not be as well suited to precipitation because of strong relationships between sensitivity and spatial patterns of precipitation change. In that case, regional detection and near-term projection are better accomplished using a Bayesian approach, which does not require the previously stated assumptions.

Acknowledgments

We acknowledge the modeling groups, the Program for Climate Model Diagnosis and Intercomparison (PCMDI), and the WCRP’s Working Group on Coupled Modelling (WGCM) for their roles in making the WCRP CMIP3 multimodel dataset available. Support of this dataset is provided by the Office of Science, U.S. Department of Energy. We wish to thank Richard Goody for many useful conversations on the topic of testing climate models. This work was supported by Grant ATM-0755099 of the National Science Foundation.

REFERENCES

  • Allen, M., and S. Tett, 1999: Checking for model consistency in optimal fingerprinting. Climate Dyn., 15, 419–434.

  • Allen, M., and P. Stott, 2003: Estimating signal amplitudes in optimal fingerprinting. Part I: Theory. Climate Dyn., 21, 477–491.

  • Bell, T., 1986: Theory of optimal weighting to detect climate change. J. Atmos. Sci., 43, 1694–1710.

  • Hasselmann, K., 1993: Optimal fingerprints for the detection of time-dependent climate change. J. Climate, 6, 1957–1971.

  • Hasselmann, K., 1997: Multi-pattern fingerprint method for detection and attribution of climate change. Climate Dyn., 13, 601–611.

  • Houghton, J., Y. Ding, D. Griggs, M. Noguer, P. van der Linden, X. Dai, K. Maskell, and C. Johnson, Eds., 2001: Climate Change 2001: The Scientific Basis. Cambridge University Press, 881 pp.

  • Huntingford, C., P. Stott, M. Allen, and F. Lambert, 2006: Incorporating model uncertainty into attribution of observed temperature change. Geophys. Res. Lett., 33, L05710, doi:10.1029/2005GL024831.

  • Kharin, V., and F. Zwiers, 2002: Climate predictions with multimodel ensembles. J. Climate, 15, 795–799.

  • Leroy, S., 1998: Detecting climate signals: Some Bayesian aspects. J. Climate, 11, 640–651.

  • Leroy, S., J. Anderson, J. Dykema, and R. Goody, 2008a: Testing climate models using thermal infrared spectra. J. Climate, 21, 1863–1875.

  • Leroy, S., J. Anderson, and G. Ohring, 2008b: Climate signal detection times and constraints on climate benchmark accuracy requirements. J. Climate, 21, 841–846.

  • North, G., K. Kim, S. Shen, and J. Hardin, 1995: Detection of forced climate signals. Part I: Filter theory. J. Climate, 8, 401–408.

  • Sivia, D., 2006: Data Analysis: A Bayesian Tutorial. Oxford University Press, 246 pp.

  • von Storch, H., and F. Zwiers, 1999: Statistical Analysis in Climate Research. Cambridge University Press, 484 pp.

Fig. 1. (left) Internal variability and (right) fingerprint uncertainty for the central United States. The data field is a map of 10-yr surface air temperature trends in the Northern Hemisphere; the signal pattern is normalized to the surface air temperature trend in the central United States. The contributions of Σdn/dt and Σs to Σr, respectively, are shown. Only the diagonal elements of each are shown. The units are (K decade⁻¹)².

Fig. 2. Optimal fingerprints fmp for finding underlying climate trends in the central United States and in Northern Europe. The gray shading is a contour map of the optimal fingerprints by which surface air temperature trends in the Northern Hemisphere are to be multiplied to find underlying climate trends in surface air temperature in (a) the central United States and (b) Northern Europe. A red box shows the definition of the target region. The dashed contour is the zero level. Darker means more positive; lighter means more negative.

Fig. 3. Time series of regional average surface air temperature and inferred area average temperature associated with climate forcing. The 10-yr time series of regional average surface temperature, as simulated by an independent climate model, is shown in red; the subsequent evolution of the regional average surface air temperature is shown in light gray; the inferred regional average temperature associated with the external forcing SRES A1B is shown in black; and the ±1-sigma error envelope, deduced by linear regression, is the light-blue-shaded region. All numbers are representative of regional surface air (2 m) temperature in (a) the central United States and (b) Northern Europe.
