## 1. Introduction

The possibility of unexpectedly extreme climate change may be crucial for analyses that aim to operationalize policy targets or evaluate policy options. The tails of the temperature change distributions used in these analyses may therefore have special importance. For instance, policymakers aiming to avoid dangerous climate change now often focus on temperature targets such as avoiding 2°C of warming relative to preindustrial levels (e.g., Copenhagen Accord 2009; Major Economies Forum 2009). Because of the uncertain connection between emission paths and temperature change, determining the implications for allowable emission paths requires defining the acceptable chance of missing the temperature targets. With benchmark risk metrics, allowable emission paths should have less than a 10% chance of overshooting the target, but such assessments require temperature change distributions that include the types of uncertainty important for tail probabilities.

The tails of temperature change distributions may also matter for economic assessments of greenhouse gas (GHG) policies because damages and utility are both nonlinear in temperature change (e.g., Weitzman 2009). Willingness to pay to reduce the risks of climate change may therefore be sensitive to the positive tail of the climate sensitivity distribution (Newbold and Daigneault 2009). Further, economic assessments may respond to the pervasive uncertainty in integrated assessment models of the economy and climate by forgoing the calculation of optimal emission paths in favor of determining the cost-effective actions needed to meet exogenous GHG constraints (Ackerman et al. 2009; D. Lemoine et al. 2010, unpublished manuscript). These exogenous constraints may be determined by tolerance for climate change risks, which would again require temperature change distributions with well-characterized tails.

The positive tail of a temperature change distribution may be sensitive to types of uncertainty often excluded by previous work on temperature change probabilities. Much of the tail uncertainty is driven not just by uncertainty about the best climate model to use or about the best way to parameterize a given model but by uncertainty about features common across models, about our understanding of climate processes, and about how earth system processes in a future warming world may differ from those in the past periods for which we have data. After describing previous work and outlining the approach to feedback analysis, I propose and apply a hierarchical Bayes framework for developing posterior distributions for feedback factors that explicitly account for uncertainty about model completeness and shared structural biases. I then show how the implied temperature change distributions could inform risk assessments, policy targets, and future research into feedback processes.

## 2. Previous approaches to developing distributions for temperature change

A number of researchers have developed probability distributions for climate sensitivity, which gives the equilibrium temperature change produced by doubling carbon dioxide (CO_{2}) concentrations from their preindustrial level of approximately 280 ppm.^{1} Most studies have reported a most likely value between 2° and 3.5°C, a 5% lower limit between 1° and 2°C, and an uncertain upper limit that often exceeds 6°C (Knutti and Hegerl 2008). The current paper estimates posterior distributions for a parameter that is very similar to climate sensitivity but differs in allowing the carbon cycle to respond to the changing temperatures induced by exogenously doubled CO_{2} concentrations.

Knutti and Hegerl (2008) reviewed the main approaches that previous studies have taken toward constraining climate sensitivity: varying parameters in general circulation models (GCMs) and comparing the results to climatic observations; estimating climate sensitivity from instrumental-period data and from the longer, more variable time series available from paleoclimatic data; and, in the studies of Annan and Hargreaves (2006) and Hegerl et al. (2006), combining information from several constraints spanning both the instrumental and paleoclimatic records. None of these approaches included several sources of uncertainty that may be important for tail risks, because they did not explicitly include the possibility that models share biases or that past climates may be imperfect proxies for future climate change. In contrast to many previous studies (see Tebaldi and Knutti 2007; Knutti et al. 2009), the proposed hierarchical Bayes methods recognize the possibility of structural biases shared across models, which limits the information gain from an unbounded increase in the number of models (cf. Berliner and Kim 2008). This statistical framework also includes uncertainty about climate models’ completeness and about the similarity of the present and future higher-GHG world to the worlds represented by past climate observations.

## 3. Feedback analysis

The total temperature change from an increase in GHG concentrations depends not just on the direct effect of trapping additional outgoing infrared radiation but also on how the wider earth system responds to changing temperatures. For instance, temperature-induced changes in sea ice extent, vegetation, and water vapor content affect temperature by changing the earth’s reflectivity and its ability to trap outgoing heat. Such changes are feedbacks that may amplify or diminish the effect of the direct radiative forcing (e.g., Roe 2009). Because total temperature change can be highly sensitive to estimates of aggregate feedbacks, relatively small uncertainty about each feedback can translate into much greater uncertainty about total temperature change and into a significant possibility of extreme temperature change (e.g., Roe and Baker 2007).

Roe (2009) described the framework for linear feedback analysis adopted here. As will be seen in the appendix, it is important to be clear about the system within which feedbacks operate, including the subsystem (known as the reference system or open system) to which feedbacks return output as input (Stephens 2005). Feedbacks are only meaningful in relation to a reference system that defines what happens in the absence of feedbacks. The reference system in the case of climate change is usually a blackbody planet that responds to a sustained increase in radiative forcing Δ*R*_{f} (such as from increased GHG concentrations) by adjusting its atmospheric radiation balance until it reaches a new equilibrium with a higher temperature and correspondingly greater outgoing radiation. The change in temperature for the blackbody planet in response to Δ*R*_{f} is Δ*T*_{0} = *λ*_{0}Δ*R*_{f} for some *λ*_{0} determined by the Stefan–Boltzmann law and atmospheric characteristics. This blackbody representation does not correspond to actual expectations of temperature change because the earth system includes processes that affect radiative forcing in the course of responding to temperature change. The temperature change occurring in the total earth system (or closed system) depends on how these feedbacks amplify or decrease the reference system’s temperature change.
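As a back-of-envelope illustration (a sketch, not a calculation from this paper), the blackbody value of *λ*_{0} follows from linearizing the Stefan–Boltzmann law around an assumed effective emission temperature; atmospheric characteristics then modify this value.

```python
# Back-of-envelope lambda_0 for a pure blackbody planet (illustrative only):
# linearize the Stefan-Boltzmann law R = sigma * T**4 around the effective
# emission temperature, giving lambda_0 = dT/dR = 1 / (4 * sigma * T_e**3).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_E = 255.0       # effective emission temperature, K (typical textbook value)

lambda_0 = 1.0 / (4.0 * SIGMA * T_E**3)   # K per (W m^-2)
delta_T0 = lambda_0 * 3.7                 # no-feedback warming for ~3.7 W m^-2
                                          # (roughly a CO2 doubling)
print(round(lambda_0, 3))   # 0.266
print(round(delta_T0, 2))   # 0.98
```

The blackbody value of roughly 0.27 K (W m^{−2})^{−1} is below the 0.315 K (W m^{−2})^{−1} used later in this paper because the latter reflects atmospheric characteristics, not a bare blackbody.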

Now suppose that each of the *K* feedback processes changes the radiative forcing by an amount proportional to the temperature change:

Δ*T* = *λ*_{0}(Δ*R*_{f} + ∑_{k=1}^{K} *c*_{k}Δ*T*),   (1)

which assumes that feedback *i* only affects feedback *j* via its effect on temperature change. Rearranging, we have

Δ*T* = Δ*T*_{0}/(1 − ∑_{k=1}^{K} *λ*_{0}*c*_{k}).   (2)

The feedback factor for feedback *k* is *f*_{k} = *λ*_{0}*c*_{k}, and the aggregate feedback factor *F* is the sum of the individual feedback factors:

*F* = ∑_{k=1}^{K} *f*_{k},

so that Δ*T* = Δ*T*_{0}/(1 − *F*).

More concretely, each feedback *k* corresponds to a climate field *α*_{k}. This climate field changes in response to temperature, and changes in the climate field affect radiative forcing. Let Δ*R*_{α} be the change in radiative forcing due to the temperature-induced changes in the *K* climate fields. Then the Taylor series expansion yields

Δ*R*_{α} ≈ ∑_{k=1}^{K} (∂*R*/∂*α*_{k})(d*α*_{k}/d*T*)Δ*T*,   (3)

so that Δ*T* = *λ*_{0}(Δ*R*_{f} + Δ*R*_{α}) recovers Eq. (2) with

*c*_{k} = (∂*R*/∂*α*_{k})(d*α*_{k}/d*T*).   (4)

The *f*_{k} are now defined as follows:

*f*_{k} = *λ*_{0}(∂*R*/∂*α*_{k})(d*α*_{k}/d*T*).   (5)

Each *f*_{k} (and *F* as well) is a nondimensional measure that, when positive, can be interpreted as the fraction of the complete earth system warming due to that particular feedback process. If *F* ≥ 1, then feedbacks would be responsible for 100% or more of the complete system warming, which is a nonsensical result indicating problems with the feedback model due to misunderstanding the reference system and feedback processes or due to the omission of countervailing negative feedbacks.
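The relation Δ*T* = Δ*T*_{0}/(1 − *F*) is what makes the positive tail so sensitive to feedback uncertainty. A small Monte Carlo sketch (illustrative numbers, not this paper’s posterior) shows how a symmetric spread in *F* produces a right-skewed temperature change distribution:

```python
import random

random.seed(0)
LAMBDA_0 = 0.315               # K per (W m^-2), the value used in section 5
DT0 = LAMBDA_0 * 3.7           # no-feedback warming for a ~3.7 W m^-2 forcing

# Symmetric, modest uncertainty in the aggregate feedback factor F
# (mean 0.65, std 0.13 are illustrative assumptions):
samples = []
for _ in range(100_000):
    F = random.gauss(0.65, 0.13)
    if F < 1.0:                          # discard nonphysical F >= 1
        samples.append(DT0 / (1.0 - F))  # Delta_T = Delta_T0 / (1 - F)

samples.sort()
p5 = samples[int(0.05 * len(samples))]
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
# The distribution is right-skewed: the 95th percentile sits much farther
# above the median than the 5th percentile sits below it.
print(round(p5, 1), round(median, 1), round(p95, 1))
```

This asymmetry is the mechanism emphasized by Roe and Baker (2007): because 1/(1 − *F*) steepens as *F* approaches 1, equal-sized movements of *F* upward and downward have unequal temperature consequences.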

The present application of feedbacks potentially suffers from three complicating features: time scales, nonlinearities, and interactions. First, feedbacks differ in their effects over different time scales and in their speed. Water vapor feedback, for instance, may be negative over short time scales but positive over longer time scales (Hallegatte et al. 2006). Differences in speed matter for transient climate change, but this study looks only at equilibrium climate change. Second, feedbacks are often assumed to be linear, but they are clearly nonlinear over some sufficiently large ranges (e.g., Colman et al. 1997; Colman and McAvaney 2009). In principle, nonlinearities can be included by deriving feedbacks via a second-order Taylor expansion (e.g., Roe 2009). Third, the crucial assumption from, for instance, Eqs. (1) and (3), that feedbacks are independent requires that the only way in which feedback processes interact is through surface temperature change. However, water vapor interacts with the lapse rate and clouds interact with water vapor, surface albedo, soil moisture, and the lapse rate (Stephens 2005; Bony et al. 2006). These interactions can be included by modifying Eq. (3) to allow nonzero cross partials in a second-order Taylor series expansion of *dR _{α}*. Bony et al. (2006) suggested that nonlinearities and interactions may not be significant for moderate climate change, but because the current paper’s results are meant to represent the possibility of extreme climate change, nonlinearities and interactions may provide opportunities for future extensions.

## 4. Methods: Hierarchical Bayes framework

Hierarchical methods use multilevel modeling to connect data to each other and to parameters of interest through distributions controlled by parameters drawn from their own distributions. In a Bayesian setting, some distributions are prior distributions reflecting beliefs about parameters formed before obtaining data and are updated in response to data to form posterior distributions (e.g., Gelman et al. 2004). Hierarchical methods have been used in the climate science literature to connect data over varying spatial scales (e.g., Min and Hense 2007), to consider optimal superensemble design and the development of climate forecasts (Berliner and Kim 2008), and to include the possibility of shared structural biases in the course of developing a joint distribution for changes in temperature and precipitation (Tebaldi and Sansó 2009). In the current model, the different levels of the hierarchy represent different types of uncertainty affecting estimation of feedback factors.

I define a “study” of a feedback factor to be a single climate model or empirical analysis. Each study can report more than one value (observation) for the feedback factor, whether because of multiple ways of calculating the feedback factor (e.g., the use of different radiative kernels within a single climate model) or because of data-driven uncertainty (e.g., standard errors in empirical estimation). Each study’s observations may cluster around a feedback value that is offset from the true value as a result of the study’s biases, and each study may share more biases with some studies and fewer with others. We can therefore imagine a hierarchy of groups, with group membership determining how closely related the studies may be. One convenient framework divides studies into two groups: climate models and empirical studies that use climatic records. We might expect empirical studies of feedbacks to share biases if they come from time periods with different conditions than the present (e.g., more or less extensive land ice sheets), if they share measurement or dating errors, and if past climatic variability does not perfectly approximate the changes produced by anthropogenic GHG forcing. Climate models, in turn, also have several possible sources of shared biases (Tebaldi and Knutti 2007): common resolution; common parameterizations or unresolved processes; shared observations used to tune the models; shared grids and numerical methods; shared components; and shared creators. Jun et al. (2008) provided evidence from late twentieth-century temperature simulations that models do have shared biases and even that models created by the same institution give more highly correlated output (see also Knutti et al. 2009). Each study therefore has uncertainty about the estimate it produces and about how its estimation procedure tends to produce results that differ from the values that will explain future climate change.

The data are observations of feedback factors *f*_{k}, out of *K* feedbacks. Let *f*_{K} represent all unknown and unmodeled feedbacks, so we have observations for only those feedbacks *f*_{k} with *k* ∈ [1, *K* − 1]. Define a group *j* composed of studies *i* of feedback factor *k* such that, conditional on the true value of *f*_{k}, the group’s study results *M*_{ijk} are exchangeable and so can be treated as if they come from a distribution over which we have prior beliefs (Bernardo and Smith 1994). Exchangeability in this case means that all prior information about a study’s outcomes is given by its group membership. I treat the study results as generated by a process with a normally distributed error term:

*M*_{ijk} ∼ N(*f*_{k} + *θ*_{jk}, *σ*_{jk}^{2}),

so that the mean of a group’s results differs from the true *f*_{k} by an unknown amount *θ*_{jk} that represents the bias common to all members of group *j*.^{2},^{3} Future applications may specify a higher-level distribution for *θ*_{jk} if groups may share some structural biases.

A study *M*_{ijk} in a group may report not just one point estimate for the feedback factor but either a distribution of values or a set of values. Here, *M*_{ijk} can then be interpreted as the study’s actual representation of feedback *k*, which we only observe with noise. If standard errors *z̃*_{ijk} are available for each study’s reported estimate, then assuming that the estimate is a normally distributed unbiased estimator of *M*_{ijk} with df degrees of freedom yields a *t* distribution for *M*_{ijk} with scale *z̃*_{ijk} and shape df, where *t*(*x*, *y*, *z*) is a *t* distribution with location parameter *x*, scale parameter *y*, and shape parameter *z*. Alternately, as is the case throughout this application, study *M*_{ijk} may report several values *y*_{hijk}. These observations may be combined in lower-level groups used, inter alia, to inform the parameters for *M*_{ijk}. I assume the within-study observations are normally distributed with mean *M*_{ijk}:

*y*_{hijk} ∼ N(*M*_{ijk}, *ϕ*_{jk}^{2}).

The within-study standard deviations *ϕ*_{jk} are here shared among studies in a group because the current application’s intrastudy variation comes from combining the same three radiative kernels with a given climate model’s output. In other applications, the within-study standard deviation could be specific to each study *i* or could be drawn from a group-level distribution.
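The layered error structure can be made concrete with a small forward simulation (the true feedback, shared bias, and spreads below are hypothetical numbers chosen for illustration): every study mean clusters around *f*_{k} + *θ*_{jk} rather than around the true *f*_{k}, which is why averaging more studies from one group cannot remove a shared bias.

```python
import random
import statistics

random.seed(1)
F_K = 0.10        # true feedback factor (hypothetical)
THETA_JK = 0.05   # bias shared by every study in group j (hypothetical)
SIGMA_JK = 0.03   # between-study spread within the group
PHI_JK = 0.02     # within-study spread (e.g., across radiative kernels)

study_means = []
for i in range(50):   # studies i in group j
    # Study's own representation: M_ijk ~ N(f_k + theta_jk, sigma_jk^2)
    M_ijk = random.gauss(F_K + THETA_JK, SIGMA_JK)
    # Within-study observations, e.g., one per radiative kernel:
    obs = [random.gauss(M_ijk, PHI_JK) for _ in range(3)]
    study_means.append(statistics.mean(obs))

group_mean = statistics.mean(study_means)
# Averaging many studies recovers f_k + theta_jk (about 0.15),
# not the true f_k (0.10): the shared bias never averages out.
print(round(group_mean, 3))
```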

This hierarchical approach clearly separates several sources of error or bias: 1) the structural bias common across groups is the expected value of any higher-level distribution that may exist for *θ*_{jk}; 2) the structural bias specific to group *j* is the difference between *θ*_{jk} and the structural bias common across groups or, if *θ*_{jk} lacks a higher-level distribution, is *θ*_{jk} itself; 3) the between-study variation within a group is determined by *σ*_{jk}; 4) the uncertainty within a particular study’s observations due to limited data, sampling variability, or different ways of obtaining data is determined by *z̃*_{ijk} or *ϕ*_{jk}; and 5) the possibility of omitted feedbacks is represented by a feedback term *f*_{K} lacking observations.

Applying Bayes’ theorem updates prior distributions in response to observations of feedback factors to produce posterior distributions for all parameters. Table 1 describes the six sets of priors used for the standard deviations of within-study observations (*ϕ*_{jk}), the between-study standard deviations (*σ*_{jk}), the estimated feedback factors (*f*_{k}), the unknown feedback factor (*f*_{K}), and the biases shared between a group’s studies (*θ*_{jk}). Each prior distribution is independent of all others. Figure 3 plots each type of prior distribution used. Care must be taken in prior selection (e.g., Frame et al. 2005), as flat (uniform) priors on one type of parameter can contribute more information to another parameter than intended (Van Dongen 2006). Except where varied to assess sensitivity, we use weakly informative priors that concentrate prior probability mass in the range of the a priori most plausible values (i.e., values closer to 0) while still placing significant probability on more extreme values. These weakly informative priors aim to let even sparse data dominate the posterior distributions without ruling out extreme values. The use of weakly informative priors can be important because with small numbers of groups and, possibly, of observations within a group, less informative priors may not be dominated by the data and may produce results that are highly sensitive to the form of the prior (Kass and Wasserman 1996; Lambert et al. 2005).^{4},^{5} The specific numerical values for the priors are meant to give reasonable-looking distributions, and the six sets of priors will help assess the sensitivity of the posterior distributions to the form of the prior.

Annan and Hargreaves (2009) argued that previous analyses’ use of a uniform prior for climate sensitivity made their results sensitive to the prior’s upper bound and generally led to overly pessimistic conclusions about the possibility of extreme temperature change. The prior distributions on *f*_{k} and *f*_{K} imply prior distributions for climate sensitivity that are far from uniform and that concentrate prior probability in the Intergovernmental Panel on Climate Change (IPCC)’s likely range; yet we will see that these prior distributions nonetheless can generate posterior distributions with significant probability of extreme temperature change.^{6}

The two most significant omissions in the proposed statistical model are constraints on the total system response and the possibility of abrupt changes and threshold effects. First, observations of, for instance, temperature change and ocean heat capacity may be able to constrain *F* directly even though these do not provide information about any particular feedback process. These observations may therefore be able to constrain the positive tail of the distribution for Δ*T*, but they will likely have their own structural biases and are unlikely to account for the operation of slower feedbacks (Urban and Keller 2009). Second, concern about climate change may be driven not just by concern about its ultimate magnitude but also by concern about its possibly being both abrupt and irreversible (Keller et al. 2008; Lenton et al. 2008; Solomon et al. 2009). Both abruptness and irreversibility should be included in a complete risk assessment and should inform near-term abatement decisions.

The posterior distributions were sampled using Markov chain Monte Carlo methods as implemented in WinBUGS version 1.4.3 (Lunn et al. 2000; Gamerman and Lopes 2006). Each posterior distribution generated one million samples after a burn-in period of one million samples. The sample size was large enough for multiple chains to converge on the posterior distributions.
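The sampling procedure can be sketched in miniature (a toy random-walk Metropolis sampler on an assumed standard-normal target, not WinBUGS itself): draw long chains, discard a burn-in period, and check that independently initialized chains agree.

```python
import math
import random

def log_post(x):
    # Toy unnormalized log-posterior (standard normal target, an assumption).
    return -0.5 * x * x

def run_chain(start, n, burn, seed, step=1.0):
    """Random-walk Metropolis: propose, accept/reject, discard burn-in."""
    rng = random.Random(seed)
    x, kept = start, []
    for t in range(n):
        prop = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_post(prop) - log_post(x):
            x = prop                 # accept the proposal
        if t >= burn:
            kept.append(x)           # keep only post-burn-in samples
    return kept

# Two chains from deliberately overdispersed starting points:
a = run_chain(start=-5.0, n=20_000, burn=5_000, seed=2)
b = run_chain(start=+5.0, n=20_000, burn=5_000, seed=3)
mean_a = sum(a) / len(a)
mean_b = sum(b) / len(b)
print(round(mean_a, 2), round(mean_b, 2))  # both near 0 once converged
```

Agreement between chains started far apart is the same convergence check invoked above, in schematic form.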

## 5. Data: Models’ estimates of feedbacks

This paper considers four climate change feedbacks for which model estimates are available: albedo, clouds, water vapor–lapse rate, and the carbon cycle. It also considers the sum of all other feedbacks, including unmodeled and unknown feedbacks. The water vapor and lapse-rate feedbacks can be treated as a single combined water vapor–lapse rate feedback because of their strong negative correlation, noted by, among others, Bony et al. (2006), Soden and Held (2006), and Soden et al. (2008). All calculations in this paper use *λ*_{0} = 0.315 K (W m^{−2})^{−1} because Soden et al. (2008) found that *λ*_{0} ranges roughly from 0.31 to 0.32 K (W m^{−2})^{−1}.

Methods for calculating feedback factors from climate models often rely on representations like Eq. (5). Soden et al. (2008) elaborated a method that can enable consistent comparison between models while avoiding biases caused by correlation between climate fields. Their method decomposes feedbacks into the mean change in the associated climate field as the climate is perturbed and a radiative kernel that gives the change in radiative forcing due to a change in the climate field. The kernel is independent of the general circulation model for which the feedback is calculated and depends only on the control climate. Soden et al. (2008) used three different radiative kernels to estimate albedo, cloud, and water vapor–lapse rate feedback factors in each of the climate models from the Intergovernmental Panel on Climate Change’s Fourth Assessment Report, which provided the necessary data. They updated and improved the earlier analyses of Colman (2003), Winton (2006), and Soden and Held (2006). Figure 4 and Table 2 show the data from Soden et al. (2008) using their model acronyms and converted to the nondimensional feedback form. As previously reported (e.g., Bony et al. 2006), variance in cloud feedback estimates is primarily responsible for the variance in estimates of the aggregate feedback factor.
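Under the kernel decomposition, each nondimensional feedback factor is the product of *λ*_{0}, the radiative kernel (the forcing change per unit change in the climate field), and the field’s response to warming. A minimal sketch follows; the kernel and response magnitudes are made-up numbers for illustration, not values from the kernels discussed above.

```python
LAMBDA_0 = 0.315   # K per (W m^-2), as used throughout this paper

def feedback_factor(kernel, field_response, lambda_0=LAMBDA_0):
    """Nondimensional f_k = lambda_0 * (dR/dalpha_k) * (dalpha_k/dT).

    kernel:         radiative kernel, W m^-2 per unit change in the field
    field_response: field change per kelvin of surface warming, units per K
    """
    return lambda_0 * kernel * field_response

# Hypothetical magnitudes whose product is ~1.8 W m^-2 K^-1 of extra
# forcing per kelvin of warming (illustrative, not kernel output):
f_example = feedback_factor(kernel=0.9, field_response=2.0)
print(round(f_example, 3))  # 0.567
```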

Ideally, observations based on climatic records could supplement these observations from climate models so as to obtain a group of observations that does not share the structural biases common across models. However, several hurdles deter inclusion of empirical estimates in this paper. First, some of the empirical observations have been implicitly accounted for in model development. Second, it is difficult to compute the partial derivatives in Eq. (5) in a way that ensures that only one variable is changing (Bony et al. 2006), climatic variability can create bias in estimations (Spencer and Braswell 2008), and results may need to be adjusted to be relevant to the present and future (Yoshimori et al. 2009). In one illustration of the potential complications, Forster and Collins (2004) and Dessler et al. (2008) attempted to estimate the water vapor feedback from responses to volcanic and El Niño forcings; however, these forcings may not adequately mimic long-term climate change (Bony et al. 2006; Dessler et al. 2008), and the data and methods may not match the time scales of relevant processes (see Hallegatte et al. 2006). Carefully sorting and improving the empirical literature could contribute to extending the present paper’s results.

Carbon cycle feedbacks include processes by which temperature changes alter atmospheric CO_{2} concentrations, which in turn affect radiative forcing and so temperature. A common definition of carbon cycle feedbacks includes warming-induced changes in CO_{2} sources or sinks but does not include changes in CO_{2} sources or sinks due directly to changing CO_{2} concentrations (e.g., Friedlingstein et al. 2006).^{7} This definition implies that the aggregate feedback factor *F* applies to the radiative forcing resulting from the CO_{2} concentrations obtained after combining a CO_{2} emission profile with an offline model of how the carbon cycle responds to increased CO_{2} levels, holding climate constant. In other words, the aggregate feedback factor applies to a CO_{2} concentration already adjusted for CO_{2} fertilization and (nonbiologically) changing ocean sinks. Vegetative and oceanic processes can be net sinks even if the carbon cycle feedback is positive because positive feedback here just means that the strength of these sinks decreases with warming.

Data for carbon cycle feedbacks come from the 11 models of the Coupled Climate–Carbon Cycle Model Intercomparison Project (C^{4}MIP), as reported by Friedlingstein et al. (2006) and as adjusted by Cadule et al. (2009) for the nonlinearity of radiative forcing as a function of CO_{2} levels.^{8} However, as shown in the appendix, both analyses implicitly included uncoupled models’ feedbacks in their reference systems. With regard to the present purposes, they therefore overestimated carbon cycle feedbacks by including the operation of albedo, water vapor, lapse rate, and cloud feedbacks in response to carbon cycle feedbacks’ effects on CO_{2} concentrations. To then obtain an estimate of the aggregate feedback factor *F* by summing their estimates of the carbon cycle feedback with estimates of these other feedbacks would double count the other feedbacks. The appendix explains how to adapt these carbon cycle feedback estimates, and Fig. 4 and Table 2 show their adjusted values.

Models’ estimates of carbon cycle feedback factors are notably incomplete because most only represented changes in photosynthesis, growth, and decomposition. Possibly important processes largely omitted by the models considered in Friedlingstein et al. (2006) include permafrost melting, fires, tropical deforestation, and nutrient limitation (Field et al. 2007; Schuur et al. 2008; Sokolov et al. 2008; Schuur et al. 2009; Tarnocai et al. 2009),^{9} though more models now include dynamic vegetation that allows for phenomena such as biome shifts and fires (Field et al. 2007). Further, the models do not fully explore the parameter space for the processes they do include (Matthews et al. 2007), and their parameterizations probably share biases. Most models report positive net carbon cycle feedbacks because of “the stimulated net *C* release from land ecosystems in response to climate warming” (Luo 2007, p. 687). However, although this carbon release is primarily driven by models’ similar representations of the sensitivity of photosynthesis and respiration to changing temperatures, there is still much uncertainty about the direction and degree of these responses (Luo 2007).

The difficulty of modeling the carbon cycle makes estimation from climatic records especially desirable, but it is also difficult to empirically estimate carbon cycle feedbacks from past observations because CO_{2} levels affect temperature even as temperature affects CO_{2} levels. This results in a system of simultaneous equations, with the noise term in one equation correlated with the other equation’s dependent variable and so with its own regressor. Because of this correlation between regressor and noise, ordinary least squares regression produces inconsistent estimates. Studies by Scheffer et al. (2006) and Torn and Harte (2006) assumed that there were no significant nontemperature drivers of CO_{2} concentrations in the last millennium’s little ice age or in a 250 000-yr portion of the Vostok ice core record, which would avoid asymptotic bias by eliminating the noise term in the regression of CO_{2} on temperature. However, because even small biases in feedback estimates can have special significance due to the nonlinearity of temperature change in the aggregate feedback factor *F*, this assumption requires further validation. These studies also did not report standard errors from their regressions, making it difficult to place their results in a probabilistic analysis. Properly calibrating this uncertainty is important because, as shown below, having a group of studies not sharing models’ biases can greatly affect the posterior distributions for feedback strength and temperature.
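The simultaneity problem can be illustrated with a two-equation simulation (coefficients and unit-variance shocks are hypothetical): temperature responds to CO_{2}, CO_{2} responds to temperature, and ordinary least squares applied to the CO_{2} equation does not recover the true coefficient even with abundant data.

```python
import random

random.seed(4)
GAMMA = 0.5   # true effect of temperature on CO2 (hypothetical)
BETA = 0.5    # true effect of CO2 on temperature (hypothetical)

T_vals, C_vals = [], []
for _ in range(100_000):
    u = random.gauss(0.0, 1.0)   # CO2 shock
    v = random.gauss(0.0, 1.0)   # temperature shock
    # Reduced form of the simultaneous system C = GAMMA*T + u, T = BETA*C + v:
    denom = 1.0 - BETA * GAMMA
    C = (u + GAMMA * v) / denom
    T = (BETA * u + v) / denom
    T_vals.append(T)
    C_vals.append(C)

n = len(T_vals)
mean_T = sum(T_vals) / n
mean_C = sum(C_vals) / n
cov_TC = sum((t - mean_T) * (c - mean_C) for t, c in zip(T_vals, C_vals)) / n
var_T = sum((t - mean_T) ** 2 for t in T_vals) / n
ols_slope = cov_TC / var_T
# OLS converges to 0.8 here, not to the true GAMMA = 0.5, because the
# temperature regressor is correlated with the CO2 equation's shock u.
print(round(ols_slope, 2))
```

Assuming away the nontemperature shock (u ≡ 0), as Scheffer et al. (2006) and Torn and Harte (2006) effectively did for their periods, removes this correlation and hence the inconsistency, which is why that assumption carries so much weight.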

One strength of the hierarchical Bayes framework is its ability to assess the importance of beliefs about unknown and unmodeled feedbacks. Climate models do not include all feedbacks. Known missing feedbacks include changes in non-CO_{2} GHGs; in subsea methane hydrates; in ocean circulation and biota; in ocean sinks via changing wind regimes; in albedo and other biophysical feedbacks due to ecosystem responses; and, for transient climate change, in sea surface temperatures due to hydrological cycle intensification (Gruber et al. 2004; Archer 2007; Field et al. 2007; Le Quéré et al. 2007; Williams et al. 2007; Lawrence et al. 2008; Cadule et al. 2009; Dorrepaal et al. 2009; Zeebe et al. 2009). Furthermore, slow feedbacks such as land ice sheet dynamics that become significant after decades or centuries of forcing should be considered in discussions of GHG stabilization targets (Hansen et al. 2008). The prior on *f _{K}* should represent uncertainty about the aggregate strength of omitted feedbacks.

## 6. Results: Posterior distributions

Posterior distributions for the albedo, carbon cycle, cloud, and water vapor–lapse rate feedbacks result from using the data given in Table 2 and Fig. 4 to update each of the six combinations of prior distributions from Table 1 within the hierarchical Bayes framework shown in Fig. 2. These posterior distributions combine with the posterior distribution for the unknown and unmodeled feedbacks *f*_{K} (which is the same as its prior distribution in Table 1 due to the absence of data) to produce posterior distributions for the aggregate feedback factor *F* and for the temperature change Δ*T*_{2×CO2} resulting from doubling CO_{2} concentrations (where the CO_{2} doubling does not account for carbon cycle feedbacks).^{10}

Figures 5 and 6 show the posterior distributions resulting from each set of priors. Figure 7 gives the same results in box plot form. Prior combinations 5 and 6 vary the prior on *f*_{k} when there is no possibility of shared structural bias, showing that the posterior for each feedback factor *f*_{k} is not sensitive to the form of the prior on *f*_{k} in the certain absence of shared structural biases. However, comparing prior combinations 1 and 2 shows that the prior for *f*_{k} can affect the posterior when there is a nonzero probability of shared structural bias. Prior combinations 1 and 3 only vary the prior on *f*_{K}, and the same applies to prior combinations 4 and 5. Comparing the posterior distributions produced within each pair of priors shows how the unknown and unmodeled feedbacks *f*_{K} spread the probability distribution for temperature change, especially on the high side. Because the prior on *f*_{K} is never updated, it is directly combined with the posteriors for the four constrained feedbacks to obtain the posterior for *F*, and because the possibility of slightly greater values of net positive *F* spreads the temperature change distribution more than does the possibility of slightly lower values (Roe and Baker 2007; Hannart et al. 2009), the nonzero priors on *f*_{K} thicken the positive tail of the temperature change distribution more than they thicken the negative tail.

The posterior distributions are sensitive to the possibility that models share biases (cf. prior combinations 3 and 4). Possible shared biases have three effects. First, they increase the spread of the posterior distribution for each *f*_{k}. The possibility of *θ*_{jk} < 0 makes relatively high values of *f*_{k} more probable, and the possibility of *θ*_{jk} > 0 makes relatively low values of *f*_{k} more probable. Second, the possibility of shared biases limits the benefit of additional models because, with only one group of observations per feedback factor, the available observations can only constrain the sum of the feedback factor and the shared bias term (*f*_{k} + *θ*_{jk}). Even with an unbounded number of models, we could never be sure what portion of their signal is related to the true feedback and what portion is related to shared biases.
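This identifiability limit has a closed form in a normal-normal toy version of the model (a sketch under assumed variances, with a flat prior on *f*_{k}): with *n* exchangeable models whose spread is *σ* and a shared bias of prior standard deviation *τ*, the posterior variance of *f*_{k} is *τ*^{2} + *σ*^{2}/*n*, which never falls below *τ*^{2} however many models are added.

```python
# Normal-normal toy model with a flat prior on f_k and a shared-bias prior
# theta_jk ~ N(0, TAU^2): n models M_i = f_k + theta_jk + eps_i,
# eps_i ~ N(0, SIGMA^2). All numbers are assumptions for illustration.
SIGMA = 0.10   # between-model spread
TAU = 0.05     # prior standard deviation of the shared bias

def posterior_std_f(n, sigma=SIGMA, tau=TAU):
    # The models pin down the sum f_k + theta_jk to variance sigma^2 / n;
    # uncertainty about theta_jk then adds on and never shrinks.
    return (tau**2 + sigma**2 / n) ** 0.5

for n in (1, 10, 100, 10_000):
    print(n, round(posterior_std_f(n), 4))
# The posterior std approaches TAU = 0.05, not 0, as n grows:
# an unbounded number of models cannot separate f_k from theta_jk.
```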

Third, the possibility of shared biases can shift the posterior median for *f*_{k} toward 0 when the prior distribution for *f*_{k} has a peak at 0. The posterior distributions for *f*_{k} and *θ*_{jk} depend on their prior distributions and on the posterior for *f*_{k} + *θ*_{jk} (Fig. 8). When both priors are *t* distributions with peaks at 0 (as in prior combinations 1 and 3), both posteriors move toward the posterior for *f*_{k} + *θ*_{jk}; however, neither parameter’s posterior is identical to the posterior for *f*_{k} + *θ*_{jk} because values with greater prior probability for one parameter (e.g., *f*_{k} closer to 0) often imply values with lower prior probability for the other (e.g., implying *θ*_{jk} farther from 0). For net positive *f*_{k} + *θ*_{jk}, the posterior for *θ*_{jk} will peak at a positive value and the posterior for *f*_{k} will peak at a value less than the peak of *f*_{k} + *θ*_{jk}. In contrast, when the prior on *f*_{k} is a uniform distribution (as in prior combination 2), the posterior on *θ*_{jk} is approximately the same as its prior because (ignoring effects from the boundary of the uniform distribution) the posterior on *f*_{k} can take any form without sacrificing prior knowledge. Each value for *f*_{k} has equal prior probability over the range of the uniform distribution, which allows the posterior for *θ*_{jk} to be completely determined by its prior knowledge. To keep the posterior for *θ*_{jk} in its region of greatest prior probability, less peaked priors on *f*_{k} therefore tend to center the posterior for *f*_{k} on the posterior for *f*_{k} + *θ*_{jk}.

From this point forward, prior combination 3 is the base case because it includes the possibility of shared structural biases, includes the possibility of unmodeled and unknown feedbacks, and uses a prior on the constrained feedbacks that makes small values the most likely while allowing the possibility of extreme outcomes. Without clear grounds for favoring either prior 1's or prior 3's representation of omitted feedbacks, I use prior combination 3 rather than prior combination 1 to show that the following results are not driven by a highly diffuse prior on omitted feedbacks. Prior combination 5 provides an important, if unrealistic, reference case because it represents maximal confidence in climate models' completeness and independence. Figure 9 compares the posterior distributions resulting from these two prior combinations with the distributions for climate change produced by Annan and Hargreaves (2006) and Hegerl et al. (2006) using multiple constraints from the instrumental and paleoclimatic records. Prior combination 5 generates the most peaked posterior and the posterior with its peak at the greatest temperature change. The other distributions indicate greater risk of extreme temperature change: relative to the results from prior combination 5, they shift probability mass from moderate temperature change to extreme temperature change. Prior combination 3 carries this trend the farthest (though still not as far as prior combinations 1 and 2), placing the most probability mass both on low temperature change and on high temperature change because the possibility of shared biases makes extreme values of each *f*_{k} more plausible.

An alternate means of generating probability distributions would identify each model's output with a single point estimate that is the mean of the three radiative kernel results and then treat the models as independent, identically distributed samples from a normally distributed population (similar to Roe and Baker 2007). This method produces results nearly identical to those with prior combination 5, implying that the weakly informative prior distributions do not have a noticeable effect in the certain absence of shared structural biases. Importantly, therefore, the assumption of normally distributed feedbacks is not sufficient to generate distributions for temperature change with the long positive tail we have come to expect from past studies, partly because constraining uncertainty about feedbacks has a greater effect on temperature change distributions than Roe and Baker claimed (Hannart et al. 2009). When developing a distribution from knowledge of feedbacks, whether or not the distribution has the customary long positive tail depends strongly on whether the statistical model includes the possibility of nonzero shared structural bias.

## 7. Discussion

### a. Opportunities for learning

This statistical framework can readily incorporate new information to produce updated posterior distributions. Ignoring the possibility of modeling new feedback processes, we can learn in at least four ways:

(i) by obtaining additional model observations (obtaining *y*_{hijk} for new *i*);

(ii) by obtaining observations that do not share the current observations' structural biases (adding a group *j* with its own *M*_{ijk});

(iii) by decreasing the variance of the prior distribution for shared structural biases (constraining the prior for *θ*_{jk}); and

(iv) by decreasing the variance of the prior distribution for unknown and unmodeled feedbacks (constraining the prior for *f*_{K}).
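
The value of channel (ii) can be previewed with a conjugate-normal sketch (illustrative numbers, not the paper's hierarchical model): if each conditionally independent group pins down *f*_{k} + *θ*_{j} and the *θ*_{j} are a priori independent, posterior precision for *f*_{k} adds across groups.

```python
import numpy as np

# Prior standard deviations: stand-ins for the paper's t priors on f_k
# and on the shared bias terms theta_jk
sf, sth = 0.3, 0.1

def post_sd_f(J):
    """Posterior sd of f_k when each of J conditionally independent groups
    pins down f_k + theta_j exactly; with theta_j ~ N(0, sth^2) a priori,
    posterior precision for f_k adds across groups."""
    precision = 1.0 / sf**2 + J / sth**2
    return 1.0 / np.sqrt(precision)

one_group = post_sd_f(1)
two_groups = post_sd_f(2)
print(two_groups < one_group)  # a second independent group tightens f_k
```

By contrast, adding more observations within an existing group in this sketch only sharpens the estimate of *f*_{k} + *θ*_{j}, not of *f*_{k} itself, which is why channels (ii) and (iii) matter more than channel (i) below.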

I represent obtaining additional model observations by adding new studies *M*_{ijk} for each *f*_{k}, where each new study contains three observations equal to the mean of the set of real observations. By reinforcing the actual observations' central tendency, these additional observations should constrain the posterior distribution at least as well as any other set with the same number of additional observations. I represent adding a group *j* by randomly assigning the existing models to one of two groups that are assumed not to share any structural biases. Ten models end up in a first group and four end up in a second, with each group including three models with carbon cycle feedback observations. The assumption of independence conditional on *f*_{k} and the similarity of the two groups' estimates together imply that the additional group should constrain the posterior distributions as much as possible given the group assignments. I represent decreasing uncertainty about shared biases *θ*_{jk} and about unmodeled and unknown feedbacks *f*_{K} by replacing their priors with a *t* distribution having scale parameter 0.01, which is smaller than the scale parameter of 0.05 used for each in prior combination 3 (see Fig. 3).

Remarkably, the additional observations do not perceptibly affect the posterior probability distribution. Obtaining additional observations of feedback factors from climate models related to existing ones only affects the posterior distributions if the new estimates differ from the current ones’ central tendency. The present uncertainty is not driven by a lack of models. These results cohere with the findings in Figs. 5 and 6 that the prior for *f _{k}* does not greatly influence the posterior distributions in the absence of possible shared biases.

Constraining unmodeled and unknown feedbacks does have a noticeable impact, but the impacts of obtaining an additional group and of constraining shared biases are more important. Having an additional, conditionally independent group enables each *f*_{k} to be constrained via two different *f*_{k} + *θ*_{jk} terms, and therefore it can have a similar, though less stark, effect to that of the simulated narrowing of prior beliefs about shared biases *θ*_{jk}. The additional group constrains the posterior distribution for the aggregate feedback factor *F* even though each group contains fewer observations than did the original group.^{11} The simulated narrower prior beliefs about shared biases result in the narrowest probability distribution for the aggregate feedback factor *F*. Aside from explicitly modeling some of the currently unmodeled feedbacks (in which case the results would depend on how the newly modeled feedbacks affect beliefs about remaining unmodeled and unknown feedbacks), the activities that could most constrain the posterior probability distributions for the aggregate feedback factor and for temperature change are those that more tightly constrain prior beliefs about shared structural biases or those that produce observations of feedback factors through methods that are unlikely to share much structural bias with existing climate models.

### b. Measures of temperature risk

By focusing on factors that drive uncertainty about low-probability temperature change, the posterior distributions enable assessments of temperature change risks.^{12} Table 3 gives percentiles and conditional expectations for the aggregate feedback factor *F*, and Fig. 11 plots percentile temperature change against CO_{2} concentrations.^{13} The percentile temperature change resulting from prior combination 3 is similar to those of Annan and Hargreaves (2006) and Hegerl et al. (2006) up to about the 95th percentile, at which point the value resulting from prior combination 3 increases faster because its posterior distribution has more weight in its extreme positive tail. Using prior combination 1 would amplify this result by placing even more weight in the posterior tails. If we believe the models cluster around the true outcomes and include all relevant processes (as in prior combination 5), then we have a distribution for climate sensitivity that places overwhelming probability on its being close to 3°C and almost no probability on its being greater than 4°C. However, including these other sources of uncertainty (as in prior combination 3) dramatically expands the positive tail so that there is more than a 5% chance that climate sensitivity is greater than 5°C. The expected climate sensitivity conditional on being above the 95th percentile is 18°C, and the expected climate sensitivity conditional on being above the 75th percentile is still 7°C. The temperature risk curves implied by Annan and Hargreaves (2006) and Hegerl et al. (2006) generally fall between the curves implied by prior combinations 3 and 5.
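
Table 3's FaR and CFaR metrics mirror VaR and CVaR (see footnote 12) and are simple to compute from posterior draws. A minimal sketch, with synthetic normal draws standing in for actual posterior samples of *F*:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for posterior draws of the aggregate feedback factor F;
# the actual draws would come from the fitted hierarchical model.
samples = rng.normal(0.65, 0.13, size=100_000)

def far(draws, q):
    """Feedback at Risk: the q-th percentile of F (analogue of VaR)."""
    return np.percentile(draws, q)

def cfar(draws, q):
    """Conditional Feedback at Risk: E[F | F > q-th percentile of F]
    (analogue of CVaR)."""
    cut = np.percentile(draws, q)
    return draws[draws > cut].mean()

for q in (90, 95, 99):
    print(q, round(far(samples, q), 3), round(cfar(samples, q), 3))
```

The conditional expectation always exceeds the percentile itself, which is why the conditional metrics in Table 3 convey tail risk that the percentiles alone understate.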

The CO_{2} concentration needed to meet a 2°C target relative to preindustrial levels depends strongly on risk tolerance and on prior beliefs about shared model biases and about model completeness.^{14} If models are unrealistically assumed to be complete and to lack shared biases (prior combination 5), then CO_{2} concentrations could stabilize at 410 ppm (slightly above present levels) and still have less than a 5% chance of exceeding the 2°C target. If, on the other hand, models are believed to possibly have shared biases and omissions (prior combination 3), then, before accounting for the effects of non-CO_{2} GHGs or of aerosols, even stabilizing CO_{2} concentrations at current levels leaves a 10% chance of exceeding the 2°C target. Policymakers’ 2°C target may therefore support significant near-term abatement and eventual net negative emissions (D. Lemoine et al. 2010, unpublished manuscript).

## 8. Conclusions

This paper elaborates an extensible method for combining studies of feedback factors to produce posterior distributions for aggregate feedbacks and temperature change in response to stabilized radiative forcing. Assumptions about shared model omissions and shared model biases are crucial for the positive tails of temperature change distributions and for the sensitivity of CO_{2} concentration targets to temperature risk tolerance. Beyond obtaining observations with uncorrelated structural biases for use in constraining posterior distributions, further work could adopt more complex representations of model dependencies (Jun et al. 2008), could include total system constraints (Urban and Keller 2009), could explore alternate types of prior beliefs, and could refine prior beliefs about shared structural biases and about unknown and unmodeled feedbacks. Further work could also develop temperature change distributions for planned emission pathways by including uncertainty about the operation of CO_{2} sinks in response to changing CO_{2} concentrations and uncertainty in monitoring negotiated emission allocations. A robust Bayesian approach may help to address the difficulty in choosing the “right” prior distribution for feedbacks and shared biases (e.g., Borsuk and Tomassini 2005; Tomassini et al. 2007), and this paper’s use of multiple priors could complement an ambiguity aversion framework for decision making (e.g., Klibanoff et al. 2005, 2009). Finally, to make them more useful for adaptation work and impacts assessments, these probability distributions should be extended to consider transient climate change by including uncertainty about heat uptake by oceans, uncertainty about emission paths, and uncertainty about the time scales over which feedbacks operate (Hall 2007; Knutti et al. 2008; Baker and Roe 2009; Roe 2009).

## Acknowledgments

Daniel Kammen, Inez Fung, John Harte, and Margaret Torn offered helpful discussions, and three anonymous referees provided insightful comments. James Annan, Gabriele Hegerl, and Brian Soden provided data. This research was partly carried out at the International Institute for Applied Systems Analysis (IIASA) under the supervision of Michael Obersteiner, Sabine Fuss, and Jana Szolgayova as part of the 2009 Young Scientists Summer Program. Participation in the IIASA Young Scientists Summer Program was made possible by a grant from the National Academy of Sciences Board on International Scientific Organizations, funded by the National Science Foundation under Grant OISE-0738129. Support also came from the Robert and Patricia Switzer Foundation Environmental Fellowship Program.

## REFERENCES

Ackerman, F., S. DeCanio, R. Howarth, and K. Sheeran, 2009: Limitations of integrated assessment models of climate change. *Climatic Change*, **95**, 297–315.

Allen, M. R., D. J. Frame, C. Huntingford, C. D. Jones, J. A. Lowe, M. Meinshausen, and N. Meinshausen, 2009: Warming caused by cumulative carbon emissions towards the trillionth tonne. *Nature*, **458**, 1163–1166.

Annan, J. D., and J. C. Hargreaves, 2006: Using multiple observationally-based constraints to estimate climate sensitivity. *Geophys. Res. Lett.*, **33**, L06704, doi:10.1029/2005GL025259.

Annan, J. D., and J. C. Hargreaves, 2009: On the generation and interpretation of probabilistic estimates of climate sensitivity. *Climatic Change*, in press, doi:10.1007/s10584-009-9715-y.

Archer, D., 2007: Methane hydrate stability and anthropogenic climate change. *Biogeosciences*, **4**, 521–544.

Artzner, P., F. Delbaen, J. Eber, and D. Heath, 1999: Coherent measures of risk. *Math. Finance*, **9**, 203–228.

Baker, M. B., and G. H. Roe, 2009: The shape of things to come: Why is climate change so predictable? *J. Climate*, **22**, 4574–4589.

Berliner, L. M., and Y. Kim, 2008: Bayesian design and analysis for superensemble-based climate forecasting. *J. Climate*, **21**, 1891–1910.

Bernardo, J. M., and A. F. M. Smith, 1994: *Bayesian Theory*. Wiley, 586 pp.

Bony, S., and Coauthors, 2006: How well do we understand and evaluate climate change feedback processes? *J. Climate*, **19**, 3445–3482.

Borsuk, M., and L. Tomassini, 2005: Uncertainty, imprecision, and the precautionary principle in climate change assessment. *Water Sci. Technol.*, **52**, 213–225.

Boykoff, M. T., D. Frame, and S. Randalls, 2010: Discursive stability meets climate instability: A critical exploration of the concept of 'climate stabilization' in contemporary climate policy. *Glob. Environ. Change*, **20**, 53–64.

Cadule, P., L. Bopp, and P. Friedlingstein, 2009: A revised estimate of the processes contributing to global warming due to climate-carbon feedback. *Geophys. Res. Lett.*, **36**, L14705, doi:10.1029/2009GL038681.

Colman, R., 2003: A comparison of climate feedbacks in general circulation models. *Climate Dyn.*, **20**, 865–873.

Colman, R., and B. McAvaney, 2009: Climate feedbacks under a very broad range of forcing. *Geophys. Res. Lett.*, **36**, L01702, doi:10.1029/2008GL036268.

Colman, R., S. B. Power, and B. J. McAvaney, 1997: Non-linear climate feedback analysis in an atmospheric general circulation model. *Climate Dyn.*, **13**, 717–731.

Copenhagen Accord, 2009: *Decision -/CP.15*. 18 December 2009, Copenhagen, Denmark, 6 pp. [Available online at http://unfccc.int/files/meetings/cop_15/application/pdf/cop15_cph_auv.pdf.]

Dessler, A. E., Z. Zhang, and P. Yang, 2008: Water-vapor climate feedback inferred from climate fluctuations, 2003–2008. *Geophys. Res. Lett.*, **35**, L20704, doi:10.1029/2008GL035333.

Dorrepaal, E., S. Toet, R. S. P. van Logtestijn, E. Swart, M. J. van de Weg, T. V. Callaghan, and R. Aerts, 2009: Carbon respiration from subsurface peat accelerated by climate warming in the subarctic. *Nature*, **460**, 616–619.

Field, C. B., D. B. Lobell, H. A. Peters, and N. R. Chiariello, 2007: Feedbacks of terrestrial ecosystems to climate change. *Annu. Rev. Environ. Resour.*, **32**, 1–29.

Forster, P. M. de F., and M. Collins, 2004: Quantifying the water vapour feedback associated with post-Pinatubo global cooling. *Climate Dyn.*, **23**, 207–214.

Forster, P. M. de F., and Coauthors, 2007: Changes in atmospheric constituents and in radiative forcing. *Climate Change 2007: The Physical Science Basis*, S. Solomon et al., Eds., Cambridge University Press, 129–234.

Frame, D. J., B. B. B. Booth, J. A. Kettleborough, D. A. Stainforth, J. M. Gregory, M. Collins, and M. R. Allen, 2005: Constraining climate forecasts: The role of prior assumptions. *Geophys. Res. Lett.*, **32**, L09702, doi:10.1029/2004GL022241.

Friedlingstein, P., J. Dufresne, P. M. Cox, and P. Rayner, 2003: How positive is the feedback between climate change and the carbon cycle? *Tellus*, **55B**, 692–700.

Friedlingstein, P., and Coauthors, 2006: Climate–carbon cycle feedback analysis: Results from the C4MIP model intercomparison. *J. Climate*, **19**, 3337–3353.

Gamerman, D., and H. F. Lopes, 2006: *Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference*. 2nd ed. Chapman & Hall/CRC, 323 pp.

Gelman, A., 2006: Prior distributions for variance parameters in hierarchical models (Comment on article by Browne and Draper). *Bayesian Anal.*, **1**, 515–534.

Gelman, A., J. B. Carlin, H. S. Stern, and D. B. Rubin, 2004: *Bayesian Data Analysis*. 2nd ed. Chapman & Hall/CRC, 668 pp.

Gregory, J. M., C. D. Jones, P. Cadule, and P. Friedlingstein, 2009: Quantifying carbon cycle feedbacks. *J. Climate*, **22**, 5232–5250.

Gruber, N., and Coauthors, 2004: The vulnerability of the carbon cycle in the 21st century: An assessment of carbon-climate-human interactions. *The Global Carbon Cycle: Integrating Humans, Climate and the Natural World*, C. Field and M. Raupach, Eds., Island Press, 45–76.

Hall, J., 2007: Probabilistic climate scenarios may misrepresent uncertainty and lead to bad adaptation decisions. *Hydrol. Processes*, **21**, 1127–1129.

Hallegatte, S., A. Lahellec, and J. Grandpeix, 2006: An elicitation of the dynamic nature of water vapor feedback in climate change using a 1D model. *J. Atmos. Sci.*, **63**, 1878–1894.

Hannart, A., J. Dufresne, and P. Naveau, 2009: Why climate sensitivity may not be so unpredictable. *Geophys. Res. Lett.*, **36**, L16707, doi:10.1029/2009GL039640.

Hansen, J., and Coauthors, 2008: Target atmospheric CO2: Where should humanity aim? *Open Atmos. Sci. J.*, **2**, 217–231.

Hegerl, G. C., T. J. Crowley, W. T. Hyde, and D. J. Frame, 2006: Climate sensitivity constrained by temperature reconstructions over the past seven centuries. *Nature*, **440**, 1029–1032.

Jun, M., R. Knutti, and D. W. Nychka, 2008: Spatial analysis to quantify numerical model bias and dependence. *J. Amer. Stat. Assoc.*, **103**, 934–947.

Kass, R. E., and L. Wasserman, 1996: The selection of prior distributions by formal rules. *J. Amer. Stat. Assoc.*, **91**, 1343–1370.

Keller, K., G. Yohe, and M. Schlesinger, 2008: Managing the risks of climate thresholds: Uncertainties and information needs. *Climatic Change*, **91**, 5–10.

Klibanoff, P., M. Marinacci, and S. Mukerji, 2005: A smooth model of decision making under ambiguity. *Econometrica*, **73**, 1849–1892.

Klibanoff, P., M. Marinacci, and S. Mukerji, 2009: Recursive smooth ambiguity preferences. *J. Econ. Theory*, **144**, 930–976.

Knutti, R., 2008: Should we believe model predictions of future climate change? *Philos. Trans. Roy. Soc.*, **B366**, 4647–4664.

Knutti, R., and G. C. Hegerl, 2008: The equilibrium sensitivity of the earth's temperature to radiation changes. *Nat. Geosci.*, **1**, 735–743.

Knutti, R., R. Furrer, C. Tebaldi, J. Cermak, and G. A. Meehl, 2009: Challenges in combining projections from multiple climate models. *J. Climate*, **23**, 2739–2758.

Knutti, R., and Coauthors, 2008: A review of uncertainties in global temperature projections over the twenty-first century. *J. Climate*, **21**, 2651–2663.

Lambert, P. C., A. J. Sutton, P. R. Burton, K. R. Abrams, and D. R. Jones, 2005: How vague is vague? A simulation study of the impact of the use of vague prior distributions in MCMC using WinBUGS. *Stat. Med.*, **24**, 2401–2428.

Lawrence, D. M., A. G. Slater, R. A. Tomas, M. M. Holland, and C. Deser, 2008: Accelerated arctic land warming and permafrost degradation during rapid sea ice loss. *Geophys. Res. Lett.*, **35**, L11506, doi:10.1029/2008GL033985.

Lenton, T. M., H. Held, E. Kriegler, J. W. Hall, W. Lucht, S. Rahmstorf, and H. J. Schellnhuber, 2008: Tipping elements in the earth's climate system. *Proc. Natl. Acad. Sci. USA*, **105**, 1786–1793.

Le Quéré, C., and Coauthors, 2007: Saturation of the Southern Ocean CO2 sink due to recent climate change. *Science*, **316**, 1735–1738.

Lunn, D. J., A. Thomas, N. Best, and D. Spiegelhalter, 2000: WinBUGS—A Bayesian modelling framework: Concepts, structure, and extensibility. *Stat. Comput.*, **10**, 325–337.

Luo, Y., 2007: Terrestrial carbon-cycle feedback to climate warming. *Annu. Rev. Ecol. Evol. Syst.*, **38**, 683–712.

Major Economies Forum, 2009: *Declaration of the Leaders: The Major Economies Forum on Energy and Climate*. 9 July 2009, L'Aquila, Italy, 4 pp. [Available online at http://www.g8italia2009.it/static/G8_Allegato/MEF_Declarationl.pdf.]

Matthews, H. D., M. Eby, T. Ewen, P. Friedlingstein, and B. J. Hawkins, 2007: What determines the magnitude of carbon cycle-climate feedbacks? *Global Biogeochem. Cycles*, **21**, GB2012, doi:10.1029/2006GB002733.

Matthews, H. D., N. P. Gillett, P. A. Stott, and K. Zickfeld, 2009: The proportionality of global warming to cumulative carbon emissions. *Nature*, **459**, 829–832.

Min, S., and A. Hense, 2007: Hierarchical evaluation of IPCC AR4 coupled climate models with systematic consideration of model uncertainties. *Climate Dyn.*, **29**, 853–868.

Newbold, S. C., and A. Daigneault, 2009: Climate response uncertainty and the benefits of greenhouse gas emissions reductions. *Environ. Resour. Econ.*, **44**, 351–377.

Rockafellar, R. T., and S. Uryasev, 2000: Optimization of conditional value-at-risk. *J. Risk*, **2**, 21–41.

Roe, G. H., 2009: Feedbacks, timescales, and seeing red. *Annu. Rev. Earth Planet. Sci.*, **37**, 93–115.

Roe, G. H., and M. B. Baker, 2007: Why is climate sensitivity so unpredictable? *Science*, **318**, 629–632.

Scheffer, M., V. Brovkin, and P. M. Cox, 2006: Positive feedback between global warming and atmospheric CO2 concentration inferred from past climate change. *Geophys. Res. Lett.*, **33**, L10702, doi:10.1029/2005GL025044.

Schuur, E. A. G., and Coauthors, 2008: Vulnerability of permafrost carbon to climate change: Implications for the global carbon cycle. *Bioscience*, **58**, 701–714.

Schuur, E. A. G., J. G. Vogel, K. G. Crummer, H. Lee, J. O. Sickman, and T. E. Osterkamp, 2009: The effect of permafrost thaw on old carbon release and net carbon exchange from tundra. *Nature*, **459**, 556–559.

Soden, B. J., and I. M. Held, 2006: An assessment of climate feedbacks in coupled ocean–atmosphere models. *J. Climate*, **19**, 3354–3360.

Soden, B. J., I. M. Held, R. Colman, K. M. Shell, J. T. Kiehl, and C. A. Shields, 2008: Quantifying climate feedbacks using radiative kernels. *J. Climate*, **21**, 3504–3520.

Sokolov, A. P., D. W. Kicklighter, J. M. Melillo, B. S. Felzer, C. A. Schlosser, and T. W. Cronin, 2008: Consequences of considering carbon–nitrogen interactions on the feedbacks between climate and the terrestrial carbon cycle. *J. Climate*, **21**, 3776–3796.

Solomon, S., G. Plattner, R. Knutti, and P. Friedlingstein, 2009: Irreversible climate change due to carbon dioxide emissions. *Proc. Natl. Acad. Sci. USA*, **106**, 1704–1709.

Spencer, R. W., and W. D. Braswell, 2008: Potential biases in feedback diagnosis from observational data: A simple model demonstration. *J. Climate*, **21**, 5624–5628.

Stephens, G. L., 2005: Cloud feedbacks in the climate system: A critical review. *J. Climate*, **18**, 237–273.

Tarnocai, C., J. G. Canadell, E. A. G. Schuur, P. Kuhry, G. Mazhitova, and S. Zimov, 2009: Soil organic carbon pools in the northern circumpolar permafrost region. *Global Biogeochem. Cycles*, **23**, GB2023, doi:10.1029/2008GB003327.

Tebaldi, C., and R. Knutti, 2007: The use of the multi-model ensemble in probabilistic climate projections. *Philos. Trans. Roy. Soc.*, **A365**, 2053–2075.

Tebaldi, C., and B. Sansó, 2009: Joint projections of temperature and precipitation change from multiple climate models: A hierarchical Bayesian approach. *J. Roy. Stat. Soc.*, **172A**, 83–106.

Tomassini, L., P. Reichert, R. Knutti, T. F. Stocker, and M. E. Borsuk, 2007: Robust Bayesian uncertainty analysis of climate system properties using Markov chain Monte Carlo methods. *J. Climate*, **20**, 1239–1254.

Torn, M. S., and J. Harte, 2006: Missing feedbacks, asymmetric uncertainties, and the underestimation of future warming. *Geophys. Res. Lett.*, **33**, L10703, doi:10.1029/2005GL025540.

Urban, N. M., and K. Keller, 2009: Complementary observational constraints on climate sensitivity. *Geophys. Res. Lett.*, **36**, L04708, doi:10.1029/2008GL036457.

Van Dongen, S., 2006: Prior specification in Bayesian statistics: Three cautionary tales. *J. Theor. Biol.*, **242**, 90–100.

Weitzman, M. L., 2009: On modeling and interpreting the economics of catastrophic climate change. *Rev. Econ. Stat.*, **91**, 1–19.

Williams, P. D., E. Guilyardi, R. Sutton, J. Gregory, and G. Madec, 2007: A new feedback on climate change from the hydrological cycle. *Geophys. Res. Lett.*, **34**, L08706, doi:10.1029/2007GL029275.

Winton, M., 2006: Surface albedo feedback estimates for the AR4 climate models. *J. Climate*, **19**, 359–365.

Yoshimori, M., T. Yokohata, and A. Abe-Ouchi, 2009: A comparison of climate feedback strength between CO2 doubling and LGM experiments. *J. Climate*, **22**, 3374–3395.

Zeebe, R. E., J. C. Zachos, and G. R. Dickens, 2009: Carbon dioxide forcing alone insufficient to explain Palaeocene–Eocene thermal maximum warming. *Nat. Geosci.*, **2**, 576–580.

## APPENDIX

### Estimating the Carbon Cycle Feedback Factor Using Coupled Climate–Carbon Cycle Models

This appendix explains how to adapt the results of Friedlingstein et al. (2006) and Cadule et al. (2009) for use in the standard feedback model described in Roe (2009) and in section 3. It shows how their estimated feedback factors differ from *f*_{cc}, the feedback of interest, and explains how to combine their reported values with results from Soden et al. (2008) to estimate *f*_{cc}.

Let *f*_{cc} be the carbon cycle feedback and *f*_{k} be any of the other *K* − 1 feedbacks, defined in terms of the climate field *x*_{k} for feedback process *k* as in section 3. Let Δ*C*_{u} be the change in atmospheric CO_{2} levels when the carbon cycle is uncoupled from a climate model (i.e., when the carbon cycle does not respond to changing temperatures), and let Δ*C*_{T} be the change in CO_{2} levels resulting from temperature change Δ*T*. The total change in CO_{2} levels in a coupled climate–carbon cycle model is therefore Δ*C*_{c} = Δ*C*_{u} + Δ*C*_{Tc}, where Δ*T*_{c} is the temperature change in a coupled model. Because Δ*R*_{f} comes from the change in CO_{2} levels as determined by an uncoupled carbon cycle model, we have exogenous forcing Δ*R*_{f} = *λ*_{0}^{−1}*α* Δ ln *C*_{u}. Then Eq. (1) yields Eq. (A1), in which *f*_{cc}, the parameter of ultimate interest, enters analogously to the gain *g*, except that the definition of *η* is adjusted to account for radiative forcing being proportional to log(CO_{2}) levels.

Friedlingstein et al. (2003, 2006) estimated *f*_{cc} using estimates of *α* (labeled herein as *α̃*) because they implicitly took the operation of other feedbacks in an uncoupled model to be part of the reference system. Friedlingstein et al. (2006) estimated Δ*T*_{c} = *α̃*Δ*C*_{c}. Friedlingstein et al. (2003) presented the regression as Δ*T*_{c} = *α̃*Δ*C*_{c} + Δ*T*_{ind}, where Δ*T*_{ind} represents all non-CO_{2} sources of temperature change and serves as a constant in the regression. These studies estimated carbon cycle feedbacks using regressions that attributed all of the variation in temperature to the variation in coupled model CO_{2} concentrations. However, as seen in Eq. (A1), changes in CO_{2} concentrations actually also operate through other feedbacks to change temperature. As long as these other feedbacks are net positive, attributing to carbon cycle feedbacks the variation in CO_{2} induced by these other feedbacks inflates *α̃*. These regressions also used linear rather than logarithmic changes in CO_{2} concentrations, so replace Δ ln *C*_{u} in Eq. (A1) with Δ*C*_{u} and make an analogous replacement in the definition of *η*. Using these (incorrect) replacements and substituting for *η*Δ*T*_{c} = Δ*C*_{Tc}, for Δ*C*_{u} = Δ*C*_{c} − Δ*C*_{Tc}, and for Δ*T*_{c} = *α̃*Δ*C*_{c} in Eq. (A1) gives an expression for *α̃* in terms of *f*_{cc} and log(CO_{2}) concentrations.

Cadule et al. (2009) bypassed the use of *α* and instead calculated *g̃*_{cc}, their estimate of *f*_{cc}, directly from changes in CO_{2} concentrations in coupled and uncoupled model runs, but *g̃*_{cc} ≠ *f*_{cc} due to the same reference system complication encountered by Friedlingstein et al. (2003, 2006). Let the coupled models' temperature change be given by Eq. (A4). Substituting for Δ*T*_{c} − Δ*T*_{u} from Eq. (A4) and simplifying yields Eq. (A7), which relates *g̃*_{cc} to *f*_{cc} and CO_{2} levels. For our purposes, Cadule et al. (2009) thus overestimated the strength of carbon cycle feedbacks when other feedbacks are net positive, and knowing the other feedbacks enables recovery of *f*_{cc} from their reported *g̃*_{cc}.

The data points for *f*_{cc} reported in Table 2 and Fig. 4 come from matching coupled models described in Friedlingstein et al. (2006) with uncoupled models described in Soden et al. (2008) (Table A1). Models match if they use closely related ocean–atmosphere general circulation models (OAGCMs). Some coupled models do not have a close match in the Soden et al. (2008) set of models because, for instance, some coupled models are not tied to an OAGCM but to a simpler model. For coupled models with matches, the sum of the albedo, cloud, and water vapor–lapse rate feedbacks in the uncoupled models enables use of Eq. (A7) to adjust the carbon cycle feedback reported in Cadule et al. (2009).

Table 1. The six combinations of prior distributions for model parameters (also see Fig. 3). Here, HC(*x*) is a half-Cauchy distribution with scale parameter *x*, *U*(*x*, *y*) is a uniform distribution on (*x*, *y*), and *t*(*x*, *y*, *z*) is a *t* distribution with location parameter *x*, scale parameter *y*, and shape parameter *z*.
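
The three prior families in the table can be sampled directly; a sketch (my own illustration: the 0.05 scale follows the text, while the degrees of freedom and the uniform bounds are assumed values for demonstration only):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

def half_cauchy(x, size):
    """HC(x): absolute value of a Cauchy(0, x) draw."""
    return np.abs(x * rng.standard_cauchy(size))

def t_dist(x, y, z, size):
    """t(x, y, z): location x, scale y, shape (degrees of freedom) z."""
    return x + y * rng.standard_t(z, size)

sigma = half_cauchy(0.05, n)           # spread parameter prior, nonnegative
f_K = t_dist(0.0, 0.05, 3, n)          # peaked-at-zero prior (df=3 assumed)
f_unif = rng.uniform(-1.0, 1.0, n)     # uniform alternative (bounds assumed)
print(sigma.min() >= 0.0, f_unif.min() >= -1.0)
```

The heavy tails of the half-Cauchy and low-df *t* draws are what let these priors admit extreme feedback values while still concentrating mass near zero.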

Table 2. The nondimensional feedback factors calculated from Soden et al. (2008) and Cadule et al. (2009), as described in the text and in the appendix. Model and kernel names follow Soden et al. (2008). Figure 4 plots these data.

Table 3. The posterior percentile values for the aggregate feedback factor [Feedback at Risk (FaR)], and the aggregate feedback factor expected conditional on exceeding the posterior percentile value [Conditional Feedback at Risk (CFaR)].

Table A1. The coupled models from Friedlingstein et al. (2006) and Cadule et al. (2009) with their corresponding uncoupled models from Soden et al. (2008).

^{1} The definition of climate sensitivity is ambiguous with regard to very fast feedbacks and to slow feedbacks (Knutti and Hegerl 2008).

^{2} As currently implemented, the statistical framework treats all data sources as equally reliable. Tebaldi and Knutti (2007) and Tebaldi and Sansó (2009) reviewed approaches that have used observational data to weight climate models in an ensemble or to weight combinations of parameters within a single model, often assuming that the optimal weighting does not change from the calibration dataset to the future. Knutti et al. (2009) described some of the conceptual difficulties in determining the optimal weighting.

^{3} In representing models as clustering around a common offset from *f _{k}*, the proposed methods take advantage of the fact that the most complex models do not sample the range of uncertainty but are calibrated to give their best estimates (Knutti 2008; Knutti et al. 2009).

^{4} Interestingly, sensitivity to prior beliefs may partially explain the actual diversity of posterior beliefs about climate change.

^{5} Gelman (2006) explored the choice of priors for between-group variance parameters when group size is small. For cases with at least five groups, he recommended using a uniform reference prior on the standard deviation, and for cases with fewer groups, he recommended the folded noncentral *t* distribution, of which the half-Cauchy used here is a special case.

^{6} In addition, the implied prior distributions for climate sensitivity do not differ greatly from one prior combination to another.

^{7} More recent work by Gregory et al. (2009) formally separated carbon cycle feedbacks into concentration–carbon feedbacks and climate–carbon feedbacks.

^{8} These models provide a transient feedback analysis, not an equilibrium analysis, which is one shared source of bias when their output is used to estimate the equilibrium feedback factor.

^{9} Note that the carbon cycle feedback could be defined so that some of these were different feedbacks.

^{10} Doubling CO_{2} concentrations produces additional radiative forcing Δ*R _{f}* of 3.7 W m^{−2} (Forster et al. 2007, p. 140).

^{11} The effects of a second group depend on how similar its estimates are to those of the first group.

^{12} We could define Temperature at Risk (TaR) and Conditional Temperature at Risk (CTaR) by analogy with the Value at Risk (VaR) and Conditional Value at Risk (CVaR) metrics used to evaluate investment portfolios or to determine banks’ capital requirements (Artzner et al. 1999; Rockafellar and Uryasev 2000). VaR measures often use the 90th, 95th, and 99th percentiles (Rockafellar and Uryasev 2000).
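These VaR/CVaR-style metrics reduce to a posterior percentile and a conditional tail mean, so FaR/CFaR (and the TaR/CTaR suggested here) can be computed directly from posterior samples. A minimal sketch, assuming the samples are available as a NumPy array (the Gaussian draws below are synthetic, not the paper's posterior):

```python
import numpy as np

def tail_metrics(samples, q):
    """Percentile value (VaR-style) and mean conditional on exceeding it (CVaR-style).

    Applied to feedback-factor samples this gives FaR and CFaR; applied to
    temperature-change samples it gives TaR and CTaR.
    """
    samples = np.asarray(samples, dtype=float)
    var = np.percentile(samples, q)        # e.g., q = 95 -> 95th percentile
    cvar = samples[samples >= var].mean()  # expectation conditional on exceedance
    return var, cvar

# Illustrative use with synthetic draws (not the paper's posterior)
rng = np.random.default_rng(1)
draws = rng.normal(0.65, 0.1, 50_000)
far95, cfar95 = tail_metrics(draws, 95)    # CFaR always >= FaR
```

Because the conditional tail mean averages everything beyond the percentile, CFaR is more sensitive than FaR to the shape of the extreme positive tail, which is the property that makes these metrics useful for risk assessment.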

^{13} Figure 11 calculates the temperature change for a CO_{2} concentration *C* by using Eq. (2) and the relation Δ*R _{f}* = 5.35 ln(*C*/*C*_{0}), where *C*_{0} is the preindustrial concentration (e.g., Cadule et al. 2009).
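The simplified-expression forcing relation in this footnote is easy to check numerically. The sketch below assumes an illustrative preindustrial concentration of 278 ppm (not a value specified in the text) and recovers the roughly 3.7 W m^{−2} forcing for doubled CO_{2} cited earlier:

```python
import math

def co2_forcing(C, C0=278.0):
    # Radiative forcing (W m^-2) from the relation dRf = 5.35 ln(C/C0),
    # with concentrations in ppm; C0 = 278 ppm is an assumed value.
    return 5.35 * math.log(C / C0)

# Doubling CO2 yields 5.35 * ln(2), i.e., about 3.71 W m^-2
doubled = co2_forcing(2 * 278.0)
```

Because the forcing is logarithmic in concentration, each further doubling adds the same increment of forcing, which is why stabilization targets are often discussed in terms of doublings of CO_{2}.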

^{14} Ultimately, stabilized CO_{2} concentrations may not be the most helpful way of framing GHG goals (e.g., Allen et al. 2009; Matthews et al. 2009; Boykoff et al. 2010), but they do give an important sense of the risks implied by different types of emission pathways.