## 1. Introduction

Our understanding of how sensitive the climate system is to radiative perturbations has been limited by large uncertainties regarding how clouds and other elements of the climate system feed back on surface temperature change (e.g., Webster and Stephens 1984; Cess et al. 1990; Senior and Mitchell 1993; Stephens 2005; Soden and Held 2006; Spencer et al. 2007). Feedbacks are usually estimated by regressing time- and area-averaged top-of-atmosphere (TOA) solar shortwave (SW) or thermally emitted infrared longwave (LW) radiative flux changes against surface temperature (*T*) changes. The regression slope of the resulting relationship provides the diagnosed feedback in W m^{−2} K^{−1}.

A formalism to estimate feedback parameters (*Y*) from nonequilibrium climate states was presented by Forster and Gregory (2006, hereafter FG), whose generalized treatment included both internal and external sources of variability in TOA radiative fluxes. Here we are interested in one of FG’s stated assumptions regarding the possible contamination of diagnosed feedbacks by internal sources of variability (“*X*” terms) in the TOA flux that are not the result of feedback on *T*. Specifically, FG state, “*The X terms are likely to contaminate the result for short datasets, but provided the X terms are uncorrelated to (surface temperature), the regression should give the correct value for Y, if the dataset is long enough.*”

While it is true that the processes that *cause* the *X* terms are, by FG’s definition, uncorrelated to *T*, the *response* of *T* to those forcings cannot be uncorrelated to *T*—for the simple reason that it is radiative forcing that causes changes in *T*. Previous investigators who have estimated feedbacks from observed climate variability have made similar assumptions, whether explicitly stated or not.

Here we address the following question: To what degree could nonfeedback sources of radiative flux variability contaminate feedback estimates? Such flux variations could, for example, be related to low-frequency changes in atmospheric circulation patterns such as the Pacific Decadal Oscillation (PDO; e.g., Mantua et al. 1997) or North Atlantic Oscillation (NAO; e.g., Hurrell 1995) influencing cloud or precipitation efficiency; or high-frequency stochastic variations in cloud cover (e.g., Battisti et al. 1997) and surface heat fluxes (Hasselmann 1976). While such flux variations would likely be generated on local and regional scales, for simplicity we will address only their net effect on the entire climate “system” represented by a simple model.

## 2. Model description

The model is a time-dependent energy budget for a mixed ocean layer,

$$C_p \frac{dT}{dt} = F + S, \qquad (1)$$

where *C*_{p} is the heat capacity of the system (here assumed to be ocean only), *T* is the temperature, and *t* is time. The terms on the right-hand side represent heating deviations away from their equilibrium values, with *F* being the total TOA radiative flux anomaly and *S* representing heating anomalies not related to TOA flux, for example, heat exchange with the deep ocean such as during El Niño or La Niña events. We have avoided the added complexity of a model atmosphere so that the basic error mechanism can be illustrated.

We partition *F* into three components: a total feedback term −*αT* dependent on temperature, a known forcing term *f* (e.g., anthropogenic radiative forcing), and an unknown nonfeedback radiative source term *N*. Using these terms in (1) gives

$$C_p \frac{dT}{dt} = -\alpha T + N + f + S. \qquad (2)$$

While we will assume daily random fluctuations in *N*, such as those one might expect from stochastic variations in low cloud cover affecting SW, the more general case could involve any nonfeedback source of SW or LW variability, and on any time scale. Such stochastic variability would likely be generated on mesoscales or synoptic scales, but here we are addressing only the net effect of these variations on the total “system” represented by Eq. (2).
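As a concrete illustration of Eq. (2), the model can be stepped forward with a simple forward-Euler update. The sketch below uses our own illustrative constants (a 50-m ocean mixed layer and a daily time step, matching the experiments described later) and is not the authors' code:

```python
# Forward-Euler update of Eq. (2): Cp dT/dt = -alpha*T + N + f + S.
# Constants are illustrative: 50-m ocean mixed layer, daily time step.
CP = 4.2e6 * 50.0   # heat capacity of a 50-m water column, J m^-2 K^-1
DT = 86400.0        # one day, s

def step(T, alpha, N, S, f=0.0):
    """Advance the temperature anomaly T (K) by one day."""
    return T + DT * (-alpha * T + N + f + S) / CP
```

With all forcing terms zero, the anomaly simply decays at the rate set by *α*; positive *N* or *S* warms the layer.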

Our primary interest is in the estimation of the feedback parameter *α* from time-averaged model output. We first address the simple case where *N* = 0, so that temperature variations are driven only by *S*. (Hereafter, we will assume *f* is either zero or known a priori and removed.) In this case, regression of the TOA flux against temperature amounts to regression of −*αT* against *T*. The resulting regression slope will always yield a correct estimate (*α*′) of the feedback parameter.

Now consider *N* ≠ 0, as in our assumed case of daily random variations in radiative flux. In this case, regression of *F* against *T* gives for the estimate of *α*

$$\alpha' = -\frac{\sum FT}{\sum T^2}, \qquad (3)$$

which, on substituting *F* = −*αT* + *N*, becomes

$$\alpha' = \alpha - \frac{\sum NT}{\sum T^2}. \qquad (4)$$

The estimate *α*′ thus differs from the true *α* to the degree that the summation of *NT* is nonzero. Note that while *S* does not explicitly appear in the error term, it does influence it by affecting the correlation of *N* and *T*.
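The regression error term just described is easy to verify numerically: for *F* = −*αT* + *N*, a through-the-origin regression of *F* on *T* returns *α* minus Σ*NT*/Σ*T*². The sketch below (ours, with synthetic, mutually independent series) checks only this algebraic identity; in the coupled model, *N* forces *T*, and the error term acquires a systematic value:

```python
import random

random.seed(0)
alpha = 3.5
n_samples = 1000
# synthetic anomaly series (illustrative amplitudes; T and N independent here)
T = [random.gauss(0.0, 0.13) for _ in range(n_samples)]
N = [random.gauss(0.0, 1.3) for _ in range(n_samples)]
F = [-alpha * t + n for t, n in zip(T, N)]

# through-the-origin regression of F on T; the diagnosed feedback is -slope
slope = sum(f * t for f, t in zip(F, T)) / sum(t * t for t in T)
alpha_hat = -slope

# identity: alpha_hat = alpha - sum(N*T)/sum(T^2)
error_term = sum(n * t for n, t in zip(N, T)) / sum(t * t for t in T)
assert abs(alpha_hat - (alpha - error_term)) < 1e-9
```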

To estimate the magnitude of these biases, we ran a zero-dimensional, finite-difference version of the model represented by Eq. (2) at daily time resolution in a series of case studies with a wide variety of assumed magnitudes for *α* and *N*, and for a constant *S* (discussed further below). Gaussian-distributed daily noise with zero mean was used as the forcing for both *S* and *N*, where the noise level was varied for *N* but kept constant for *S*. Each case was run with a heat capacity equivalent to a 50-m-thick ocean mixed layer for 100 yr. Daily data were collected from each run, 31-day averages computed, and the feedback parameter *α* estimated by linear regression of average *F* (= −*αT* + *N*) against average *T*.
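This experiment can be sketched in pure Python as follows. The noise amplitudes and random seed below are illustrative choices of ours, not the tuned values described in the text:

```python
import random

random.seed(1)
CP = 4.2e6 * 50.0   # 50-m mixed layer heat capacity, J m^-2 K^-1
DT = 86400.0        # daily time step, s
ALPHA = 3.5         # specified feedback parameter, W m^-2 K^-1
SIGMA_N = 1.0       # daily nonfeedback radiative noise, W m^-2 (illustrative)
SIGMA_S = 1.0       # daily nonradiative heating noise, W m^-2 (illustrative)

T, Ts, Fs = 0.0, [], []
for _ in range(365 * 100):                    # 100-yr daily integration
    N = random.gauss(0.0, SIGMA_N)
    S = random.gauss(0.0, SIGMA_S)
    Ts.append(T)
    Fs.append(-ALPHA * T + N)                 # TOA flux anomaly F
    T += DT * (-ALPHA * T + N + S) / CP       # Eq. (2), forward Euler

def block_means(x, width=31):                 # non-overlapping 31-day means
    return [sum(x[i:i + width]) / width
            for i in range(0, len(x) - width + 1, width)]

Tm, Fm = block_means(Ts), block_means(Fs)
tbar, fbar = sum(Tm) / len(Tm), sum(Fm) / len(Fm)
slope = (sum((t - tbar) * (f - fbar) for t, f in zip(Tm, Fm))
         / sum((t - tbar) ** 2 for t in Tm))
alpha_diag = -slope   # diagnosed feedback; biased low when SIGMA_N > 0
```

Because *N* forces *T*, the covariance between them is positive, and the diagnosed value comes out below the specified 3.5 W m^{−2} K^{−1}; this is the bias mechanism at issue.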

We also noted which combinations of *α* and *N* produced 31-day average model output that approximated variations in the sea surface temperature and reflected TOA SW flux measured by satellite. For reflected SW variability, we computed the standard deviation of 30-day average anomalies in the National Aeronautics and Space Administration’s (NASA) *Terra* satellite Clouds and the Earth’s Radiant Energy System (CERES; Wielicki et al. 1996) reflected SW fluxes. For temperature variability we used Tropical Rainfall Measuring Mission (TRMM) satellite Microwave Imager (TMI; Kummerow et al. 1998) measurements of sea surface temperatures (Wentz et al. 2000). These measures of variability were computed for the tropical oceans in the latitude band 20°N–20°S, during the period March 2000 through December 2005. Averaging the satellite data to monthly time resolution greatly reduces the sampling errors that arise from incomplete coverage of the tropics by the satellites on short time scales. The resulting standard deviation of the satellite-observed 30-day-averaged SW fluxes was 1.3 W m^{−2}, and of the *T* variations was 0.134°C.
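The variability metric used here, the standard deviation of 30-day means of a daily anomaly series, can be sketched as follows; synthetic white noise stands in for the CERES and TMI data, which we do not reproduce:

```python
import math
import random

random.seed(2)
# ~6 yr of synthetic daily anomalies (illustrative amplitude)
daily = [random.gauss(0.0, 5.0) for _ in range(6 * 365)]

# non-overlapping 30-day means, then their sample standard deviation
monthly = [sum(daily[i:i + 30]) / 30 for i in range(0, len(daily) - 29, 30)]
mean = sum(monthly) / len(monthly)
sd = math.sqrt(sum((x - mean) ** 2 for x in monthly) / (len(monthly) - 1))
# for white daily noise, sd should fall near 5.0 / sqrt(30) ≈ 0.91
```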

## 3. Model experiment results

### a. Example model run

To illustrate an example of the model output, we ran the model with daily radiative flux noise *N* sufficient to produce a 30-day standard deviation close to the satellite-observed value of 1.30 W m^{−2} for SW variability, with a daily random nonradiative ocean heat input *S* sufficient to result in a monthly standard deviation close to the satellite-observed value of 0.134°C, and with a specified feedback parameter *α* = 3.5 W m^{−2} K^{−1}. It is important to understand that the feedback parameter includes both the infrared Planck response component of 3.3 W m^{−2} K^{−1} (Forster and Taylor 2006) and any positive or negative feedbacks associated with clouds, water vapor, lapse rate, etc. Thus, feedback parameter values greater than 3.3 W m^{−2} K^{−1} correspond to negative feedback, while values less than 3.3 correspond to positive feedback. In our example, the difference between these two numbers (3.5 W m^{−2} K^{−1} minus the 3.3 W m^{−2} K^{−1} Planck response) represents a small, negative feedback component of 0.2 W m^{−2} K^{−1}.

The first 30 yr of the model output temperature time series (Fig. 1a) shows substantial interannual and decadal temperature variability. This is the result of the daily stochastic “cloud”-induced radiative inputs into the model’s 50-m ocean mixed layer being accumulated over time (Hasselmann 1976). Depending upon the random number generator seed, the time series seen in Fig. 1a can change considerably.

When we average the model output to 31 days, and regress 80 yr of these averages of (−*αT* + *N*) against *T* (Fig. 1b), we obtain a regression slope of 2.94 W m^{−2} K^{−1}. Since this is less than the specified feedback parameter value of 3.5 W m^{−2} K^{−1}, the interpretation of this metric as a feedback parameter results in a bias of −0.56 W m^{−2} K^{−1}. This effectively makes our specified negative feedback of 0.2 W m^{−2} K^{−1} look like a positive feedback of −0.36 W m^{−2} K^{−1}. It should be noted that one can also use Eq. (4) to compute the same error value exactly.

Also note that there is considerable scatter in the relationship seen in Fig. 1b, with a relatively low explained variance of 13.5%. It might be significant that the feedback diagnoses of FG based upon satellite observations also had low explained variances, averaging 15% across all of their SW feedback estimates. In contrast, we found that if *N* is set to zero in the model runs (not shown) the resulting explained variance is always very high, over 95%. That is, when all radiative flux variations are from feedback on surface temperature variations, then those variations will always be highly correlated with each other. This suggests that the low explained variances in FG’s feedback diagnoses could themselves be evidence for nonfeedback sources of cloud variability.

### b. Monte Carlo simulations of feedback diagnosis errors

While our model is admittedly simple, it should allow some semiquantitative insight into the sign and possible magnitude of errors contained in observational estimates of feedback. By making many model runs for different combinations of the specified feedback parameter *α* and radiative flux noise *N* we can examine the range of resulting feedback errors. We found it is the ratio *N*/*S* that largely governs the errors in feedback estimates. Hence, *S* was kept fixed for all runs at a value that produced monthly standard deviations in temperature near the satellite-observed value.
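The dependence on *N*/*S* can be sketched by repeating the diagnosis for several noise ratios. This is our own compact reimplementation, not the authors' code; we use longer integrations than the 100-yr runs above purely to suppress sampling noise in the illustration:

```python
import random

def diagnose(alpha, sigma_n, sigma_s=1.0, years=500, seed=0):
    """Run the daily model, form 31-day means, and return the
    regression-diagnosed feedback parameter (W m^-2 K^-1)."""
    rng = random.Random(seed)
    cp, dt = 4.2e6 * 50.0, 86400.0      # 50-m mixed layer, daily step
    T, Ts, Fs = 0.0, [], []
    for _ in range(365 * years):
        N, S = rng.gauss(0.0, sigma_n), rng.gauss(0.0, sigma_s)
        Ts.append(T)
        Fs.append(-alpha * T + N)       # TOA flux anomaly
        T += dt * (-alpha * T + N + S) / cp

    def mean31(x):                      # non-overlapping 31-day means
        return [sum(x[i:i + 31]) / 31 for i in range(0, len(x) - 30, 31)]

    Tm, Fm = mean31(Ts), mean31(Fs)
    tb, fb = sum(Tm) / len(Tm), sum(Fm) / len(Fm)
    slope = (sum((t - tb) * (f - fb) for t, f in zip(Tm, Fm))
             / sum((t - tb) ** 2 for t in Tm))
    return -slope

# error in the diagnosed feedback grows with the noise ratio N/S
biases = [diagnose(3.5, r, seed=i) - 3.5 for i, r in enumerate((0.0, 0.5, 1.0))]
```

With no nonfeedback noise the diagnosis is exact; as *N*/*S* increases, the diagnosed feedback is increasingly biased toward positive feedback (a negative error).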

In Fig. 2a, we see how the diagnosed feedback departs from the specified value as the amount of nonfeedback radiative flux noise is increased. The lines represent constant values of specified feedback parameter, while any point on the lines corresponds to a diagnosed feedback. For instance, as one follows a line from left to right (corresponding to an increase in the radiative flux noise, *N*), the diagnosed feedback departs from the true, specified feedback value on the ordinate. The solid lines are for specified feedback parameters greater than the infrared Planck response value of 3.3 W m^{−2} K^{−1}, thus indicating negative feedback, while the dashed lines represent specified feedback parameters less than 3.3 W m^{−2} K^{−1}, indicating positive feedback.

The dots plotted in Fig. 2a represent those cases where the 31-day standard deviations of *T* and (−*αT* + *N*) came within 20% of the satellite-observed values of 0.134°C and 1.3 W m^{−2}, respectively. It can be seen by the distribution of dots that the satellite-observed statistics are consistent with either a smaller daily radiative noise fraction (*N*/*S* of 0.2 to 0.4) combined with negative feedback; or alternatively a relatively larger *N*/*S* fraction (0.5 to over 1.0) combined with positive feedback.

The same data are also presented in Fig. 2b as a function of the error in the regression-diagnosed feedback parameter, rather than the diagnosed feedback parameter itself. This was produced by simply subtracting the curves’ *y* intercepts in Fig. 2a from all data. We can see from the plotted dots that the satellite observations are consistent with errors in diagnosed model feedback from about −0.1 to −0.8 W m^{−2} K^{−1}—albeit corresponding to a wide range of positive to negative feedbacks. We can further constrain the realistic range by noting that the total (LW + SW) feedback parameters diagnosed from observational data by FG range from 1.4 to 2.9 W m^{−2} K^{−1}, which in Fig. 2b corresponds to errors of −0.3 to −0.8 W m^{−2} K^{−1}.

Our error estimates are, of course, dependent upon a variety of model assumptions, possibly the most significant one being the ocean mixed layer depth. For heat capacities corresponding to depths greater than 50 m, somewhat smaller errors in feedback diagnosis are found. Alternatively, any observational estimates of feedback over land would correspond to smaller heat capacities, and so greater errors in feedback diagnosis. Also, since we used tropical oceanic satellite statistics to constrain our model runs to realistic ranges, our estimated feedback errors are not necessarily applicable to extratropical regions.

It is significant that all model errors for runs consistent with satellite-observed variability are in the direction of positive feedback, raising the possibility that current observational estimates of cloud feedback are biased in the positive direction.

## 4. Conclusions and discussion

Forcing of our simple model with daily random, nonfeedback radiative flux variations suggests the possibility of substantial positive biases in current observational estimates of feedback. When the outputs of many model realizations are constrained by both satellite-observed variability in tropical reflected SW and SST, and FG’s observational estimates of diagnosed (LW + SW) feedback, we obtain biases in the diagnosed feedback parameter in the range −0.3 to −0.8 W m^{−2} K^{−1}, that is, in the direction of positive cloud feedback. Since these errors are based upon satellite-measured variability in tropical oceanic areas, they do not necessarily apply to extratropical regions.

Nevertheless, since FG’s observational estimates of total (SW+LW) feedback already represent a lower climate sensitivity than that produced by any of the 20 coupled climate models analyzed by Forster and Taylor (2006), our results suggest the possibility of an even larger discrepancy between models and observations than is currently realized.

What we have demonstrated is directly related to Stephens’ (2005) emphasis on how we perceive the operation of the climate system when diagnosing feedbacks. Stephens noted the overly simplistic nature of the system that is implicitly invoked when feedbacks are diagnosed from the covariability between observed radiative fluxes and surface temperature. Since it is well known that the processes that control cloud formation and dissipation are myriad, complex, and in general not perfectly correlated with surface temperature variations (e.g., vertical temperature and water vapor profiles, horizontal temperature gradients), the existence of nonfeedback sources of cloud variability should not be unexpected.

While we have used here the example of daily random variability in radiative fluxes which might be expected from the stochastic component of cloud behavior, it should be noted that feedback estimates could also be corrupted by other nonfeedback sources of variability on longer time scales, for example, from any radiative effects resulting from a small change in the general circulation of the ocean–atmosphere system.

Our results hopefully provide some semiquantitative insight into previously expressed concerns about the validity of cloud feedbacks diagnosed from observational data. They also underscore the need for new methods of diagnosing cloud feedback, as was advocated by Stephens (2005), and one example of which is the methodology developed by Aires and Rossow (2003).

## Acknowledgments

We gratefully acknowledge the help provided by the reviewers, Piers Forster and Isaac Held, both of whom suggested further simplification of our initial model; Isaac Held suggested the model, and its error term, that we used here. The TMI sea surface temperature product is produced by Remote Sensing Systems and is sponsored by the NASA Earth Science REASoN DISCOVER Project. The CERES data were obtained from the NASA Langley Research Center EOSDIS Distributed Active Archive Center. This research was supported by NOAA Contract NA05NES4401001 and DOE Contract DE-FG02-04ER63841.

## REFERENCES

Aires, F., and W. B. Rossow, 2003: Inferring instantaneous, multivariate and nonlinear sensitivities for analysis of feedbacks in a dynamical system: Lorenz model case study. *Quart. J. Roy. Meteor. Soc.*, **129**, 239–275.

Battisti, D. S., C. M. Bitz, and R. E. Moritz, 1997: Do general circulation models underestimate the natural variability in the Arctic climate? *J. Climate*, **10**, 1909–1920.

Cess, R. D., and Coauthors, 1990: Intercomparison and interpretation of climate feedback processes in 19 atmospheric general circulation models. *J. Geophys. Res.*, **95**, 16601–16615.

Forster, P. M., and J. M. Gregory, 2006: The climate sensitivity and its components diagnosed from Earth Radiation Budget data. *J. Climate*, **19**, 39–52.

Forster, P. M., and K. E. Taylor, 2006: Climate forcings and climate sensitivities diagnosed from coupled climate model integrations. *J. Climate*, **19**, 6181–6194.

Hasselmann, K., 1976: Stochastic climate models. Part I: Theory. *Tellus*, **28**, 289–305.

Hurrell, J. W., 1995: Decadal trends in the North Atlantic Oscillation: Regional temperatures and precipitation. *Science*, **269**, 676–679.

Kummerow, C., W. Barnes, T. Kozu, J. Shiue, and J. Simpson, 1998: The Tropical Rainfall Measuring Mission (TRMM) sensor package. *J. Atmos. Oceanic Technol.*, **15**, 809–817.

Mantua, N. J., S. R. Hare, Y. Zhang, J. M. Wallace, and R. C. Francis, 1997: A Pacific interdecadal climate oscillation with impacts on salmon production. *Bull. Amer. Meteor. Soc.*, **78**, 1069–1079.

Senior, C. A., and J. F. B. Mitchell, 1993: CO2 and climate: The impact of cloud parameterization. *J. Climate*, **6**, 393–418.

Soden, B. J., and I. M. Held, 2006: An assessment of climate feedbacks in coupled ocean–atmosphere models. *J. Climate*, **19**, 3354–3360.

Spencer, R. W., W. D. Braswell, J. R. Christy, and J. Hnilo, 2007: Cloud and radiation budget changes associated with tropical intraseasonal oscillations. *Geophys. Res. Lett.*, **34**, L15707, doi:10.1029/2007GL029698.

Stephens, G. L., 2005: Cloud feedbacks in the climate system: A critical review. *J. Climate*, **18**, 237–273.

Webster, P. J., and G. L. Stephens, 1984: Cloud-radiation feedback and the climate problem. *The Global Climate*, J. Houghton, Ed., Cambridge University Press, 63–78.

Wentz, F., C. Gentemann, and D. Smith, 2000: Satellite measurements of sea surface temperature through clouds. *Science*, **288**, 847–850.

Wielicki, B. A., B. R. Barkstrom, E. F. Harrison, R. B. Lee III, G. L. Smith, and J. E. Cooper, 1996: Clouds and the Earth’s Radiant Energy System (CERES): An Earth Observing System experiment. *Bull. Amer. Meteor. Soc.*, **77**, 853–868.

Fig. 2. (a) Feedback relationships diagnosed from 100-yr model runs in which various strengths of feedback (*α*), nonfeedback daily random radiative input (*N*), and a constant nonradiative daily random ocean heat input (*S*) were specified. The dots represent model combinations that reproduce, to within 20%, the standard deviation of 30-day tropical oceanic average SW and *T* variations as observed in the tropical oceans by satellites during 2000 through 2005. (b) The same model output as a function of the error in diagnosed feedback. Solid lines represent negative cloud feedback (*α* > 3.3 W m^{−2} K^{−1}), while dashed lines represent positive cloud feedback (*α* < 3.3 W m^{−2} K^{−1}).

Citation: Journal of Climate 21, 21; 10.1175/2008JCLI2253.1
