Observations of Climate Feedbacks over 2000–10 and Comparisons to Climate Models

A. E. Dessler Department of Atmospheric Sciences, Texas A&M University, College Station, Texas

Abstract

Feedbacks in response to climate variations during the period 2000–10 have been calculated using reanalysis meteorological fields and top-of-atmosphere flux measurements. Over this period, the climate was stabilized by a strongly negative temperature feedback (~−3 W m−2 K−1); climate variations were also amplified by a strong positive water vapor feedback (~+1.2 W m−2 K−1) and smaller positive albedo and cloud feedbacks (~+0.3 and +0.5 W m−2 K−1, respectively). These observations are compared to two climate model ensembles, one dominated by internal variability (the control ensemble) and the other dominated by long-term global warming (the A1B ensemble). The control ensemble produces global average feedbacks that agree within uncertainties with the observations, as well as producing similar spatial patterns. The most significant discrepancy was in the spatial pattern for the total (shortwave + longwave) cloud feedback. Feedbacks calculated from the A1B ensemble show a stronger negative temperature feedback (due to a stronger lapse-rate feedback), but that is cancelled by a stronger positive water vapor feedback. The feedbacks in the A1B ensemble tend to be more smoothly distributed in space, which is consistent with the differences between El Niño–Southern Oscillation (ENSO) climate variations and long-term global warming. The sum of all of the feedbacks, sometimes referred to as the thermal damping rate, is −1.15 ± 0.88 W m−2 K−1 in the observations and −0.60 ± 0.37 W m−2 K−1 in the control ensemble. Within the control ensemble, models that more accurately simulate ENSO tend to produce thermal damping rates closer to the observations. The A1B ensemble average thermal damping rate is −1.26 ± 0.45 W m−2 K−1.

Supplemental information related to this paper is available at the Journals Online website: http://dx.doi.org/10.1175/JCLI-D-11-00640.s1.

Corresponding author address: A. E. Dessler, Dept. of Atmospheric Sciences, Texas A&M University, College Station, TX 77843. E-mail: adessler@tamu.edu

1. Introduction

Feedbacks change the top-of-atmosphere (TOA) net energy balance in response to a change in surface temperature, thereby altering the warming required to reestablish equilibrium. In fact, it turns out that feedbacks, rather than direct heating from greenhouse gases, are responsible for the majority of the warming we expect over the coming century. Because of this, validating the representation of feedbacks in global climate models (GCMs) is of great interest to the scientific community.

Ideally, we would estimate the magnitude of the feedbacks from observations of long-term warming covering decades or even centuries. Unfortunately, accurate global measurements of the parameters of interest for feedbacks (particularly atmospheric water vapor, temperature, and clouds) are only available for about a decade. And over this time, the dominant climate variations were from the El Niño–Southern Oscillation (ENSO). In this paper, I will analyze the feedbacks over the period March 2000 to December 2010 in response to ENSO and compare the results to control simulations of coupled GCMs, whose climate is also dominated by internal climate variability. These results will then be compared to feedbacks in simulations of long-term warming in order to assess how these feedbacks differ from those in response to internal variability.

2. Breakdown of TOA net flux

TOA net flux anomaly (ΔRall-sky) has been accurately observed for the last decade by the Clouds and the Earth’s Radiant Energy System (CERES) (Wielicki et al. 1996) onboard the National Aeronautics and Space Administration’s (NASA’s) Terra and Aqua satellites (anomalies in this paper are the deviations from the mean annual cycle). Using previously developed techniques (Soden et al. 2008; Shell et al. 2008), ΔRall-sky is decomposed into its constituent components: the contributions from surface skin and atmospheric temperature anomalies (ΔRT), atmospheric water vapor anomalies (ΔRq), cloud anomalies (ΔRcloud), and surface albedo anomalies (ΔRα).
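To make the anomaly definition concrete, the following sketch (Python with NumPy; the array name and shape are illustrative, not the actual CERES processing) removes the mean annual cycle from a monthly time series:

```python
import numpy as np

def monthly_anomalies(x):
    """Deviations from the mean annual cycle of a monthly time series.

    x: 1-D array of monthly values (the record need not start in January
    or span whole years). Each value has the multi-year mean of its
    calendar month subtracted, which is the anomaly used in this paper.
    """
    x = np.asarray(x, dtype=float)
    anom = np.empty_like(x)
    for m in range(12):                      # loop over the 12 calendar months
        idx = np.arange(m, x.size, 12)       # indices of that calendar month
        anom[idx] = x[idx] - x[idx].mean()   # subtract that month's climatology
    return anom
```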

Briefly, ΔRx (where x = T, q, or α) is calculated by taking the anomaly field for x for each month and multiplying it by a radiative kernel that converts the anomaly to an anomaly in global-average TOA flux. The calculation of ΔRcloud is done slightly differently: I start with the cloud radiative forcing anomaly and adjust it for the impacts of changes in T, q, and albedo.
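The kernel step can be sketched as follows (a minimal Python/NumPy illustration; the array shapes, the area weights, and the omission of the pressure weighting used in the vertical integration are simplifications of the Soden et al. 2008 procedure, and ΔRcloud is handled separately as described above):

```python
import numpy as np

def kernel_flux_anomaly(kernel, anom, area_weights):
    """Convert a monthly anomaly field of x (T, q, or albedo) into a
    global-average TOA flux anomaly dR_x.

    kernel       : radiative kernel, dR per unit change in x; shape
                   (level, lat, lon) for T and q, (lat, lon) for albedo.
    anom         : anomaly field of x for one month, same shape as kernel.
    area_weights : grid-cell area weights, shape (lat, lon), summing to 1.
    """
    local = kernel * anom                       # local TOA flux change
    if local.ndim == 3:                         # T, q: sum the vertical levels
        local = local.sum(axis=0)
    return float((local * area_weights).sum())  # area-weighted global mean
```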

Calculations of ΔRT, ΔRq, ΔRα, and global average surface skin temperature anomaly (ΔTs) are made using monthly average reanalysis meteorological field anomalies [from the interim European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA-Interim; Dee et al. 2011) or NASA’s Modern-Era Retrospective Analysis for Research and Applications (MERRA; Rienecker et al. 2011)] and the radiative kernels of Soden et al. (2008). The ΔRcloud calculation uses CERES measurements of ΔRall-sky and reanalysis calculations of clear-sky flux; see Dessler (2010) for details. For compactness, I will refer to these calculations using reanalysis fields as “the observations.”

For the ΔRcloud calculation in the observations, I have assumed a radiative forcing of +0.2 W m−2 over the 2000–10 period (Solomon et al. 2011). For the A1B calculations, I have assumed a radiative forcing of +4.3 W m−2 over the twenty-first century (Ramaswamy et al. 2001, Tables 6.14 and 6.15), although there is some variation among the GCMs. There is no change in radiative forcing in the control runs.

3. Global average feedbacks in observations

In this paper, I will concern myself with the so-called fast feedbacks, which work on time scales of days to a few years—this includes the temperature, water vapor, albedo, and cloud feedbacks. The magnitude of the individual feedbacks can be calculated as the slope of the linear least squares fit between the corresponding fluxes (ΔRT, ΔRq, ΔRα, and ΔRcloud) and ΔTs—as done, for example, by Dessler and Wong (2009). These scatterplots, calculated using ERA-Interim and containing 130 months of data, are shown in Fig. 1, along with linear least squares fits and 2σ confidence intervals for the fit. The sign convention is that downward fluxes are positive, so a positive slope means that more energy is absorbed by the earth–atmosphere system as the climate warms (i.e., a positive feedback).
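As a concrete illustration of how each feedback value and its uncertainty are obtained from such a scatterplot, the sketch below (Python with scipy.stats.linregress; variable names are illustrative, and any adjustment for autocorrelation of the monthly data is omitted) returns the slope and a 2σ interval based on the standard error of the fit:

```python
import numpy as np
from scipy.stats import linregress

def feedback_slope(dR, dTs):
    """Ordinary least squares feedback estimate.

    dR  : flux anomaly time series (W m-2), e.g., dR_T, dR_q, dR_alb, dR_cloud.
    dTs : global-average surface temperature anomaly time series (K).
    Returns (slope, 2-sigma uncertainty) in W m-2 K-1. With downward fluxes
    positive, a positive slope corresponds to a positive feedback.
    """
    fit = linregress(dTs, dR)          # regress dR (y) on dTs (x)
    return fit.slope, 2.0 * fit.stderr

# e.g., lam_q, unc_q = feedback_slope(dR_q, dTs)   # water vapor feedback
```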

Fig. 1.

Scatterplot of the temperature (ΔRT), water vapor (ΔRq), albedo (ΔRα), and cloud (ΔRcloud) flux anomalies vs surface temperature anomaly in the observations (using the ERA-Interim reanalysis). Also shown are a linear fit to the data and the 95% confidence intervals.

Calculated feedback values from the ERA-Interim reanalysis are summarized in Table 1, along with values calculated using the MERRA. Overall, the feedbacks calculated from the two reanalyses agree within uncertainties and with our general expectations. The strongest feedback is the temperature feedback, which is the negative feedback that stabilizes our climate. The temperature feedback is frequently broken into two components: the Planck feedback, which is the response of TOA flux to a uniform warming of the surface and atmosphere, and the lapse-rate feedback, which is the response of TOA flux to the warming of the atmosphere relative to the surface. These components are also listed in Table 1. The Planck feedback dominates the temperature feedback, and almost all of the disagreement between the ERA-Interim and MERRA temperature feedback comes from the lapse-rate component.
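One way to sketch the Planck/lapse-rate split with a temperature kernel (following the general logic of Soden et al. 2008; the surface-kernel contribution, stratospheric treatment, and pressure weighting are omitted here, and all names and shapes are placeholders):

```python
import numpy as np

def planck_and_lapse_rate(kernel_T, dT_atm, dTs_map, area_weights):
    """Split the atmospheric temperature response into Planck and
    lapse-rate pieces for one month.

    kernel_T     : atmospheric temperature kernel, shape (level, lat, lon).
    dT_atm       : atmospheric temperature anomaly, shape (level, lat, lon).
    dTs_map      : surface temperature anomaly, shape (lat, lon).
    area_weights : grid-cell area weights, shape (lat, lon), summing to 1.
    """
    uniform = np.broadcast_to(dTs_map, dT_atm.shape)        # column warms uniformly
    dR_planck = (kernel_T * uniform).sum(axis=0)            # response to uniform warming
    dR_lapse = (kernel_T * (dT_atm - uniform)).sum(axis=0)  # response to the departure

    def gmean(field):
        return float((field * area_weights).sum())

    return gmean(dR_planck), gmean(dR_lapse)
```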

Table 1.

All quantities are in W m−2 K−1; RH is relative humidity. For ERA-Interim and MERRA, the uncertainties are 2σ. For the control and A1B model ensembles, the feedback is the ensemble average and the uncertainty is the standard deviation across the ensemble.


The water vapor feedback is our planet’s primary positive feedback, and it significantly retards the ability of the earth to increase radiation to space as the planet warms. Forster and Collins (2004) calculated a best-estimate value of +1.6 W m−2 K−1 using climate changes after the eruption of Mt. Pinatubo. The values calculated here are smaller than those calculated by Dessler and Wong (2009, their Fig. 2a) because of differences in the type of fit used to determine the slope. In this paper, I am using a traditional least squares fit, which minimizes the distance in the y direction between the data and the fit. Dessler and Wong calculated the fit as the first EOF of the data, which is equivalent to minimizing the orthogonal distance between the data and the fit. Had I used the same type of fit as Dessler and Wong, I would have gotten values similar to theirs.
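The difference between the two fitting choices can be seen in a short sketch (Python; the orthogonal fit is computed here from the leading eigenvector of the 2 × 2 covariance matrix, which is one way to express the first-EOF fit of Dessler and Wong):

```python
import numpy as np

def ols_slope(x, y):
    """Ordinary least squares: minimizes the vertical (y) misfit."""
    return np.polyfit(x, y, 1)[0]

def orthogonal_slope(x, y):
    """Orthogonal (total least squares) fit: minimizes the perpendicular
    distance to the line, equivalent to the first EOF of the two series."""
    cov = np.cov(x, y)                    # 2 x 2 covariance matrix
    _, evecs = np.linalg.eigh(cov)        # eigenvectors, ascending eigenvalues
    v = evecs[:, -1]                      # leading eigenvector
    return v[1] / v[0]                    # slope of that direction

# For imperfectly correlated data the orthogonal slope is steeper than the
# OLS slope, consistent with the larger water vapor feedback of Dessler and
# Wong (2009) relative to the values in Table 1.
```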

Table 1 also contains feedbacks obtained from an alternative breakdown of the feedbacks where temperature and water vapor changes are considered together (Held and Shell 2012). In this decomposition, the temperature feedbacks also include changes in water vapor necessary to maintain constant relative humidity (RH). The main advantage of this method, hereafter referred to as the “constant-RH breakdown,” arises because errors in the temperature and water vapor feedbacks tend to be anticorrelated (e.g., Colman 2003; Soden and Held 2006; Dessler and Sherwood 2009), so the combined feedback should be better constrained than either the temperature or water vapor feedbacks are individually (e.g., Ingram 2010). As with the conventional breakdown, most of the differences are in the lapse rate–RH feedback.
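The bookkeeping behind the constant-RH breakdown can be summarized as follows (a LaTeX sketch; the λ notation for the individual feedback terms is introduced here for illustration): the water vapor change that would occur at fixed RH is folded into the two temperature terms, and only the flux change from the residual RH change remains, so the two breakdowns sum to the same total.

```latex
% Conventional and constant-RH breakdowns of the combined
% temperature + water vapor feedback (all terms in W m^{-2} K^{-1}):
\lambda_{\mathrm{Planck}} + \lambda_{\mathrm{LR}} + \lambda_{q}
  \;=\;
\lambda_{\mathrm{Planck,RH}} + \lambda_{\mathrm{LR,RH}} + \lambda_{\Delta\mathrm{RH}}
```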

The ΔRH feedback is, as the name suggests, due to changes in RH. The small value for this feedback supports the idea that, in the global average, the water vapor feedback is close to that of an atmosphere that maintains constant RH everywhere.

The calculations show that the albedo feedback is positive, but with a magnitude much smaller than the temperature and water vapor feedbacks. The cloud feedback is positive and about twice the strength of the albedo feedback. While there is good agreement on the total cloud feedback between the two reanalyses, the calculations using MERRA predict that the majority of the cloud feedback comes from changes in the shortwave effects of clouds, while the calculations using ERA-Interim suggest that almost all of the cloud feedback is due to changes in the longwave. Note that both the ERA-Interim and MERRA calculations use the same ΔRall-sky measurements from CERES.

It is clear from the scatterplot in Fig. 1 that the cloud feedback is the most uncertain, due primarily to the relatively poor correlation between ΔRcloud and ΔTs. This uncertainty in the cloud feedback in turn makes determining the overall climate sensitivity difficult (e.g., Dufresne and Bony 2008; Webb et al. 2006).

4. Global average feedbacks in models

One obvious question is whether GCMs can reproduce these feedbacks. To test this, I have calculated the feedbacks in preindustrial control runs of 13 fully coupled GCMs—runs in which greenhouse gas abundances and other forcings are held constant at their preindustrial concentrations. Thus, the climate variations in the simulations are due to internal variability, making them the most appropriate to compare to the observations over the past decade, which are also dominated by internal variability. The control runs, as well as the A1B runs discussed below, were obtained from the World Climate Research Programme’s (WCRP’s) Coupled Model Intercomparison Project phase 3 (CMIP3) multimodel dataset (Meehl et al. 2007).

The feedbacks for each model are calculated using the same techniques on years 100–300 of the simulation (containing 2400 months), and the feedbacks from the individual GCMs are then averaged to obtain an ensemble average feedback. Table 1 lists the ensemble average feedbacks and one standard deviation of the GCM ensemble; results for the individual GCMs are listed in supplemental Table S1 (see the supplement to this article, available at http://dx.doi.org/10.1175/JCLI-D-11-00640.s1). Overall, the ensemble average feedbacks agree within uncertainties with the observations. This provides confidence that the GCMs are reasonably simulating our climate system’s feedbacks. There is certainly no evidence that the GCMs are radically over- or underestimating any of the feedbacks.

Dessler and Wong (2009) pointed out that ΔRq is mainly determined by tropical surface temperatures (ΔTtropics). They also showed that a least squares fit between ΔRq and ΔTtropics yields closer agreement among the models than the regression against ΔTs. I verify that this is also true here: the average of the regression of ΔRq versus ΔTtropics is +1.39 W m−2 K−1 for the control ensemble, very close to the value in Table 1. But the standard deviation of that ensemble is 0.37 W m−2 K−1, about 40% smaller than the standard deviation of the regression versus ΔTs (i.e., the water vapor feedback). This suggests that this metric might be a better way to compare the water vapor response to changes in surface temperature.

It is the feedbacks in response to long-term climate change that are relevant for understanding climate change over the twenty-first century, and evidence exists that these can be quite different from the feedbacks in response to internal climate variability (e.g., Dessler 2010; Colman and Power 2010). To address this, feedbacks have also been calculated in 1200 months of data covering the twenty-first century of runs of the same 13 GCMs driven by the A1B emissions scenario, a moderate emissions scenario in which the earth warms by several degrees Celsius over the twenty-first century. Table 1 lists the ensemble average feedbacks and standard deviation of the ensemble; results for the individual GCMs are listed in Table S1.

Overall, the feedbacks in the A1B ensemble are similar to previously published estimates of feedbacks in response to long-term climate change (e.g., Colman 2003; Soden and Held 2006). More interesting perhaps is the comparison to the feedbacks in the observations and in the control ensemble. The main difference is a stronger, more negative lapse-rate feedback in the A1B ensemble. This increase in the negative lapse-rate feedback is almost entirely canceled by an increase in the strength of the positive water vapor feedback. This is expected since a warmer upper troposphere should be associated with an increase in water vapor (e.g., Minschwaner and Dessler 2004).

Also of note is that the cloud feedback is smaller in the A1B ensemble due to a reduction in the shortwave cloud feedback. In the constant-RH breakdown, the main difference between the model ensembles is a shift from a positive lapse rate/RH feedback in the control ensemble to a negative one in the A1B ensemble.

Finally, for each feedback (T, q, α, and cloud), I have regressed the values for that feedback from the individual GCMs in the control ensemble against the values from the same model in the A1B ensemble. I do not find a statistically significant (at the 2σ level) relationship for any of the feedbacks. So a strong feedback in a particular model in the control run does not provide a firm prediction about the strength of that same feedback in that model’s A1B run. This was pointed out for the cloud feedback by Dessler (2010) and it again underscores the fundamental difference between internal variability and long-term climate change.
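A sketch of this cross-ensemble test (Python; the 13-element arrays of per-model feedback values are placeholders, and significance is judged here simply as the slope exceeding twice its standard error, which may differ in detail from the test actually used):

```python
import numpy as np
from scipy.stats import linregress

def related_at_2sigma(control_vals, a1b_vals):
    """Regress one feedback across models: A1B value vs control-run value.

    control_vals, a1b_vals : arrays with one entry per GCM (13 here) for the
    same feedback (T, q, albedo, or cloud).
    Returns True if the regression slope differs from zero by more than
    twice its standard error.
    """
    fit = linregress(np.asarray(control_vals), np.asarray(a1b_vals))
    return abs(fit.slope) > 2.0 * fit.stderr
```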

5. Spatial distribution of the temperature and water vapor feedbacks

Figure 2 shows the observed spatial distribution of the temperature and water vapor feedbacks calculated from the ERA-Interim reanalysis (Fig. S1 shows the corresponding patterns for the Planck and lapse-rate feedbacks). The value plotted is the slope of the linear least squares fit between the local ΔR time series and the global average ΔTs time series (other analyses have calculated this quantity differently, e.g., as the regression of local ΔR against local ΔTs; Boer and Yu 2003; Crook et al. 2011). Blue indicates regions where local ΔR decreases as ΔTs increases—these regions contribute a negative feedback. Red regions contribute a positive feedback. The global average of the local feedbacks plotted in Fig. 2 is equal to the global-average feedbacks in Table 1.
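A sketch of how such a map and its global average can be constructed (Python/NumPy; the flux array, its dimensions, and the cosine-latitude weighting that stands in for grid-cell areas are illustrative):

```python
import numpy as np

def local_feedback_map(dR, dTs_global, lats):
    """Slope of the local flux anomaly time series regressed on global dTs.

    dR         : local flux anomalies, shape (time, lat, lon), W m-2.
    dTs_global : global-average surface temperature anomalies, shape (time,), K.
    lats       : latitudes in degrees, shape (lat,).
    """
    x = dTs_global - dTs_global.mean()
    y = dR - dR.mean(axis=0)
    slope_map = (y * x[:, None, None]).sum(axis=0) / (x * x).sum()

    # Because regression is linear in the dependent variable, the
    # area-weighted mean of the local slopes equals the global-average
    # feedback obtained by regressing global-mean dR on dTs (Table 1).
    weights = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(slope_map)
    global_mean = (slope_map * weights).sum() / weights.sum()
    return slope_map, global_mean
```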

Fig. 2.

The spatial distribution of (left) the temperature and (right) water vapor feedbacks for (top) ERA-Interim reanalysis, (middle) the control ensemble, and (bottom) the A1B ensemble. Red (blue) indicates a local positive (negative) feedback. See text for details on how these plots are constructed. Note that the color scale varies among the panels.

The top row of Fig. 2 shows the observed feedback pattern obtained from the ERA-Interim reanalysis. Because the climate variations over the last decade are primarily driven by ENSO, it should come as no surprise that the observations are dominated by variations in the tropical Pacific. While the temperature feedback is mainly negative, reflecting the fact that warmer objects radiate more power, there are regions in the midlatitudes where the local temperature feedback is positive. These are regions where the local surface cools even as the global average surface temperature increases (e.g., Alexander et al. 2002, their Fig. 2). Also note the convectively active western Pacific, where the local temperature feedback is weak, mainly due to the weak temperature response of high clouds (e.g., Hartmann and Larson 2002).

The observed water vapor feedback is also focused on the tropical Pacific, as well as in the tropical Atlantic, just north of the equator. Negative local feedbacks in the subtropics reflect a drying in the midtroposphere as the climate warms during ENSO (e.g., Dessler et al. 2008).

The middle row shows the control ensemble average. The pattern is close to the observations, although the magnitudes tend to be smaller (note that the scales are different). The bottom row shows the A1B ensemble average, and it shows that the feedbacks in response to long-term global warming are more uniformly distributed in space and are the same sign everywhere. This occurs because long-term global warming is more uniform over the globe than the temperature variations in response to ENSO variations. Colman and Power (2010) found a similar difference for these feedbacks between their control and climate-change scenarios.

Figure 3 shows the zonal average feedbacks from the observations and the control and A1B ensemble averages. In the temperature feedback, most of the energy radiated to space comes from the deep tropics and the polar regions. Overall, the observations and control ensemble agree relatively well, as expected given the good agreement in the global average. The observed water vapor feedback shows a main peak sharply focused over the equator, associated with the Pacific response, and a second peak near 25°N, associated with the tropical Atlantic. The peak over the equator is well simulated by the control ensemble, while the peak centered at 25°N is underestimated. Figure S2 shows the zonal averages of the Planck and lapse-rate feedbacks; it shows that the MERRA lapse-rate feedback is consistently larger than that in ERA-Interim at all latitudes.

Fig. 3.

The zonal average temperature (bottom curves) and water vapor feedbacks (top curves). Observations are the solid lines (black is ERA-Interim and red is MERRA) and the models are dashed (black dashed is the control ensemble and red dashed is the A1B ensemble). The shading indicates one standard deviation about the average of the control ensemble. Error bars indicate the 2σ uncertainty of the fit for the ERA-Interim calculation at selected latitudes.

Figure 4 shows the zonal average feedbacks for the constant-RH breakdown (the full spatial patterns are plotted in supplemental Fig. S3). Viewed using this breakdown, there is better agreement between observations and the control ensemble average and less spread within the control ensemble than when considering the temperature and water vapor feedbacks separately (Fig. 3). This is the main benefit of this breakdown. It is clear that most of the differences in the combined temperature/water feedbacks are in the ΔRH term—the term that reflects changes in RH as the climate varies.

Fig. 4.

The zonal average Planck–RH, lapse-rate–RH, and ΔRH feedbacks (these are from an alternative decomposition of the feedbacks in which the Planck and lapse-rate feedbacks also include changes in water vapor needed to maintain constant RH). Observations are the solid lines (black is ERA-Interim and red is MERRA) and the models are dashed (black dashed is the control ensemble and red dashed is the A1B ensemble). The shading indicates one standard deviation about the average of the control ensemble. Error bars indicate the 2σ uncertainty of the fit for the ERA-Interim calculation at selected latitudes.

The ΔRH feedback shows oscillations between positive values (latitudes where RH increases as the climate warms) and negative values (where RH decreases) in response to interannual variations. While the global average ΔRH feedback is close to zero, this should not be taken to mean the atmosphere actually maintains constant RH—rather, RH does vary during these internal variations, but regions where RH increases and decreases cancel each other in the global mean (e.g., Dessler et al. 2008). In the A1B scenarios, the atmosphere does more closely approximate one in which RH is fixed at each latitude.

6. Spatial distribution of the cloud feedback

Figure 5 shows the spatial distribution of the cloud feedback in the observations and GCMs. As in Fig. 2, blue (red) regions indicate a negative (positive) local feedback. The longwave and shortwave feedbacks are essentially mirror images of each other, a consequence of the fact that clouds both trap longwave radiation and reflect shortwave radiation back to space.

Fig. 5.

(left) Longwave cloud feedback, (middle) shortwave cloud feedback, and (right) net cloud feedbacks, for (top) ERA-Interim reanalysis combined with CERES measurements of all-sky flux, (middle) the control ensemble, and (bottom) the A1B ensemble. In all panels, the units are W m−2 K−1. Note that the color scale varies among the panels.

The longwave cloud feedback is dominated in the observations and both model ensembles by a positive maximum in the tropical Pacific; the shortwave cloud feedback is dominated by a negative maximum in the same location. For the observations and control ensemble, which are dominated by internal variability, these maxima are surrounded by regions of the opposite sign feedback. This is a consequence of the reorganization of the circulation, which causes compensating changes in clouds in adjacent regions, leading to opposite-signed responses.

For the A1B ensemble, which features long-term warming, the feedbacks are due not to wholesale changes in the circulation, but to more subtle changes in cloud fraction, height, and microphysical properties. As a result, the feedbacks are more uniformly distributed.

The total cloud feedback is the sum of the opposing shortwave and longwave terms. There are general similarities between the total cloud feedback in the observations and the control ensemble but one glaring difference: almost all of the control runs predict a positive total cloud feedback in the tropical Pacific, which is conspicuously missing in the observations. As with the previous feedbacks, the total cloud feedback in the A1B ensemble is spatially more uniform than the total cloud feedback in response to ENSO.

Figure 6 shows the zonal average of the observed cloud feedbacks and the model ensemble averages; zonal averages for the control runs of the individual GCMs are shown in supplemental Figs. S4–S6. For the observations and control ensemble average, both of which are dominated by internal variability, the longwave feedback shows an oscillatory structure: it is positive in the deep tropics ~10°N–10°S, then negative from ~10°–25° in both hemispheres, then mostly positive again from ~25° in each hemisphere to the pole. This pattern reflects, among other things, shifts in the storm tracks in response to ENSO (e.g., Trenberth and Hurrell 1994). The shortwave feedback is similar, but opposite in sign.

Fig. 6.

The zonal average cloud feedbacks: (left to right) longwave, shortwave, and total radiation. Observations are the solid lines (black is ERA-Interim and red is MERRA) and the models are dashed (black dashed is the control ensemble and red dashed is the A1B ensemble). The shading indicates one standard deviation about the average of the control ensemble. Error bars indicate the 2σ uncertainty of the fit for the ERA-Interim calculation at selected latitudes.

The total cloud feedback in the observations shows negative cloud feedbacks in the deep tropics (15°S–6°N) and high southern latitudes (40°–75°S) and positive cloud feedbacks at most other latitudes. Averaging over the globe yields a positive total cloud feedback. As this plot makes clear, attempts to determine the cloud feedback by looking only at a particular latitude range, such as 20°N–20°S (e.g., Lindzen and Choi 2009), are likely to be considerably in error.

Compared to observations, the control ensemble overestimates the positive longwave cloud feedback in the tropics and underestimates the negative shortwave cloud feedback there. These errors add, and the control ensemble ends up with a positive total cloud feedback in the tropics, opposite to that seen in the observations. On the other hand, the control ensemble shows a near-zero total cloud feedback in the Northern Hemisphere extratropics while the observations show a strong positive cloud feedback there (driven primarily by shortwave cloud feedback). In the computation of the global average total cloud feedback, the disagreement in the tropics largely cancels the disagreement poleward of 30°N, leading to agreement for the global average.

In the A1B ensemble, the zonal average longwave cloud feedback is positive at all latitudes. This is due to the expectation that high clouds will rise in future climates (Zelinka and Hartmann 2010), which leads to a robust positive feedback. The shortwave cloud feedback is also positive over most of the globe. As a result, the total cloud feedback in the A1B ensemble is also positive at most latitudes.

7. Thermal damping rate

The thermal damping rate (also referred to as the feedback parameter) is the total rate at which planetary temperature anomalies are damped by radiation to space. This quantity, which has the units of W m−2 K−1, is related to the climate sensitivity for forced climate change.

To estimate this quantity, the sum ΔRT + ΔRq + ΔRα + ΔRcloud is regressed against ΔTs, with the slope being the thermal damping rate; it is therefore mathematically equal to the sum of the individual feedbacks. The values are listed in Table 1. The average thermal damping rate in the control ensemble is −0.60 ± 0.37 W m−2 K−1; this is about half the value of the observations, although within the uncertainty. But while the control ensemble average is lower than the observations, supplemental Table S1 shows that some of the GCMs [e.g., the Geophysical Fluid Dynamics Laboratory Climate Model version 2.1 (GFDL CM 2.1) and Max Planck Institute (MPI) ECHAM5] accurately reproduce the global average thermal damping rate. These are among the GCMs that have been identified as realistically reproducing ENSO (Lin 2007).
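Because the least squares slope is linear in the dependent variable, regressing the summed fluxes on ΔTs gives exactly the sum of the individual slopes; a minimal numerical check (Python, with the individual ΔR time series as placeholders) is:

```python
import numpy as np

def thermal_damping_rate(dR_T, dR_q, dR_alb, dR_cloud, dTs):
    """Slope of the total flux anomaly (sum of the four terms) vs dTs."""
    total = dR_T + dR_q + dR_alb + dR_cloud
    slope = np.polyfit(dTs, total, 1)[0]

    # Linearity of the regression guarantees this equals the sum of the
    # individually regressed feedbacks.
    check = sum(np.polyfit(dTs, term, 1)[0]
                for term in (dR_T, dR_q, dR_alb, dR_cloud))
    assert np.isclose(slope, check)
    return slope
```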

Some of the GCMs have thermal damping rates in the control runs that are close to zero. This means that the temperature perturbations associated with internal variability in these runs are not thermally damped. Rather, the temperature perturbations must be damped by some other means, such as the exchange of heat with the ocean. The clear implication is that these GCMs’ simulations do not produce realistic ENSO cycles. For these models, the thermal damping rate in response to internal variability must be completely disconnected from the thermal damping in response to long-term warming.

There are also disagreements between the spatial pattern of the thermal damping rate in the control runs and the observations (supplemental Figs. S7 and S8). These patterns are similar to the disagreements in the cloud feedback patterns discussed in the previous section, suggesting the disagreement may be related to clouds.

For the A1B ensemble, the thermal damping rate is −1.26 ± 0.45 W m−2 K−1, about double that seen in the control ensemble. This value is quite close to the observations, although the implications of that agreement are not clear. However, it is clear that, in both the observations and the model ensembles, positive feedbacks significantly retard the ability of the earth to radiate heat to space as the planet warms. This in turn suggests that the climate sensitivity to doubled CO2 is well above the 1°C that would be obtained in the absence of any feedbacks.

8. Summary

Our understanding of climate feedbacks has improved greatly over the last two decades (e.g., Bony et al. 2006). High-quality observations of the atmosphere, processed by reanalysis systems, have allowed us to continue this improvement by calculating the fast feedbacks (temperature, water vapor, albedo, and clouds) in response to climate variations over the period from 2000 to 2010.

The results, listed in Table 1, confirm that the temperature feedback is the main stabilizing feedback for our climate, and that the water vapor feedback is the climate’s main positive feedback. The albedo and cloud feedbacks are also both positive, but smaller than the water vapor feedback over this time period.

The observations have also been compared to control runs of GCMs, which are also dominated by internal variability and therefore should be the most comparable. Feedbacks in the control ensemble are in good agreement with the observations—not just in global average magnitude but also in spatial pattern. The main discrepancy is in the spatial pattern of the total cloud feedback.

The observations and control ensembles have also been compared to an ensemble of GCMs driven by the A1B scenario, a moderate emissions scenario in which the earth warms by several degrees Celsius over the twenty-first century. The magnitude of the global average feedbacks in the A1B ensemble is generally similar to the feedbacks in the observations and control ensemble. Specific differences include a stronger negative lapse-rate feedback that is balanced by a stronger positive water vapor feedback. Feedbacks in the A1B runs tend to be spatially smoother and do not show the wholesale reorganization of the circulation that accompanies ENSO variations.

Given these results, three points are worth emphasizing. First, the comparison between the observations and the control runs (the most relevant comparison) provides no evidence that GCMs are radically over- or underestimating the climate system’s feedbacks. There are, however, some issues in the pattern of the cloud feedback that require further research. Second, the differences in the feedbacks between the control model runs and the A1B model runs stress that one should be careful in applying conclusions about the feedbacks derived from internal variability to longer-term climate change. Finally, this analysis confirms that the biggest uncertainty is the cloud feedback.

Acknowledgments

This work was supported by NSF Grant AGS-1012665 to Texas A&M University. I also thank M. Zelinka for helpful comments. I also acknowledge the modeling groups, the Program for Climate Model Diagnosis and Intercomparison, and the WCRP’s Working Group on Coupled Modeling for their roles in making available the WCRP CMIP3 multimodel dataset. Support of this dataset is provided by the U.S. Department of Energy Office of Science.

REFERENCES

  • Alexander, M. A., I. Bladé, M. Newman, J. R. Lanzante, N. C. Lau, and J. D. Scott, 2002: The atmospheric bridge: The influence of ENSO teleconnections on air–sea interaction over the global oceans. J. Climate, 15, 2205–2231.

  • Boer, G. J., and B. Yu, 2003: Climate sensitivity and response. Climate Dyn., 20, 415–429, doi:10.1007/s00382-002-0283-3.

  • Bony, S., and Coauthors, 2006: How well do we understand and evaluate climate change feedback processes? J. Climate, 19, 3445–3482.

  • Colman, R. A., 2003: A comparison of climate feedbacks in general circulation models. Climate Dyn., 20, 865–873.

  • Colman, R. A., and S. B. Power, 2010: Atmospheric radiative feedbacks associated with transient climate change and climate variability. Climate Dyn., 34, 919–933, doi:10.1007/s00382-009-0541-8.

  • Crook, J. A., P. M. Forster, and N. Stuber, 2011: Spatial patterns of modeled climate feedback and contributions to temperature response and polar amplification. J. Climate, 24, 3575–3592.

  • Dee, D. P., and Coauthors, 2011: The ERA-Interim reanalysis: Configuration and performance of the data assimilation system. Quart. J. Roy. Meteor. Soc., 137, 553–597, doi:10.1002/qj.828.

  • Dessler, A. E., 2010: A determination of the cloud feedback from climate variations over the past decade. Science, 330, 1523–1527, doi:10.1126/science.1192546.

  • Dessler, A. E., and S. C. Sherwood, 2009: A matter of humidity. Science, 323, 1020–1021, doi:10.1126/science.1171264.

  • Dessler, A. E., and S. Wong, 2009: Estimates of the water vapor climate feedback during the El Niño–Southern Oscillation. J. Climate, 22, 6404–6412.

  • Dessler, A. E., P. Yang, and Z. Zhang, 2008: The water-vapor climate feedback inferred from climate fluctuations, 2003–2008. Geophys. Res. Lett., 35, L20704, doi:10.1029/2008GL035333.

  • Dufresne, J. L., and S. Bony, 2008: An assessment of the primary sources of spread of global warming estimates from coupled atmosphere–ocean models. J. Climate, 21, 5135–5144.

  • Forster, P. M. D., and M. Collins, 2004: Quantifying the water vapour feedback associated with post-Pinatubo global cooling. Climate Dyn., 23, 207–214.

  • Hartmann, D. L., and K. Larson, 2002: An important constraint on tropical cloud–climate feedback. Geophys. Res. Lett., 29, 1951, doi:10.1029/2002GL015835.

  • Held, I., and K. M. Shell, 2012: Using relative humidity as a state variable in climate feedback analysis. J. Climate, 25, 2578–2582.

  • Ingram, W., 2010: A very simple model for the water vapour feedback on climate change. Quart. J. Roy. Meteor. Soc., 136, 30–40, doi:10.1002/qj.546.

  • Lin, J. L., 2007: Interdecadal variability of ENSO in 21 IPCC AR4 coupled GCMs. Geophys. Res. Lett., 34, L12702, doi:10.1029/2006GL028937.

  • Lindzen, R. S., and Y.-S. Choi, 2009: On the determination of climate feedbacks from ERBE data. Geophys. Res. Lett., 36, L16705, doi:10.1029/2009GL039628.

  • Meehl, G. A., C. Covey, T. Delworth, M. Latif, B. McAvaney, J. F. B. Mitchell, R. J. Stouffer, and K. E. Taylor, 2007: The WCRP CMIP3 multimodel dataset: A new era in climate change research. Bull. Amer. Meteor. Soc., 88, 1383–1394.

  • Minschwaner, K., and A. E. Dessler, 2004: Water vapor feedback in the tropical upper troposphere: Model results and observations. J. Climate, 17, 1272–1282.

  • Ramaswamy, V., and Coauthors, 2001: Radiative forcing of climate change. Climate Change 2001: The Scientific Basis, J. T. Houghton et al., Eds., Cambridge University Press, 349–416.

  • Rienecker, M. M., and Coauthors, 2011: MERRA: NASA’s Modern-Era Retrospective Analysis for Research and Applications. J. Climate, 24, 3624–3648.

  • Shell, K. M., J. T. Kiehl, and C. A. Shields, 2008: Using the radiative kernel technique to calculate climate feedbacks in NCAR’s Community Atmospheric Model. J. Climate, 21, 2269–2282.

  • Soden, B. J., and I. M. Held, 2006: An assessment of climate feedbacks in coupled ocean–atmosphere models. J. Climate, 19, 3354–3360.

  • Soden, B. J., I. M. Held, R. Colman, K. M. Shell, J. T. Kiehl, and C. A. Shields, 2008: Quantifying climate feedbacks using radiative kernels. J. Climate, 21, 3504–3520.

  • Solomon, S., J. S. Daniel, R. R. Neely, J. P. Vernier, E. G. Dutton, and L. W. Thomason, 2011: The persistently variable “background” stratospheric aerosol layer and global climate change. Science, 333, 866–870, doi:10.1126/science.1206027.

  • Trenberth, K. E., and J. W. Hurrell, 1994: Decadal atmosphere–ocean variations in the Pacific. Climate Dyn., 9, 303–319, doi:10.1007/BF00204745.

  • Webb, M. J., and Coauthors, 2006: On the contribution of local feedback mechanisms to the range of climate sensitivity in two GCM ensembles. Climate Dyn., 27, 17–38.

  • Wielicki, B. A., B. R. Barkstrom, E. F. Harrison, R. B. Lee III, G. L. Smith, and J. E. Cooper, 1996: Clouds and the Earth’s Radiant Energy System (CERES): An Earth Observing System experiment. Bull. Amer. Meteor. Soc., 77, 853–868.

  • Zelinka, M., and D. L. Hartmann, 2010: Why is longwave cloud feedback positive? J. Geophys. Res., 115, D16117, doi:10.1029/2010JD013817.
