Search Results

Items 21–30 of 83 for Author or Editor: Michael K. Tippett

Timothy DelSole, Xiaoqin Yan, and Michael K. Tippett

Abstract

Hydrological sensitivity is the change in global-mean precipitation per degree of global-mean temperature change. This paper shows that the hydrological sensitivity of the response to anthropogenic aerosol forcing is distinct from that of the combined response to all other forcings and that this difference is sufficient to infer the associated cooling in global-mean temperature. This result is demonstrated using temperature and precipitation data generated by climate models and is robust across different climate models. Remarkably, greenhouse gas warming and aerosol cooling can be estimated in a model without using any spatial or temporal gradient information in the response, provided temperature data are augmented by precipitation data. Over the late twentieth century, the hydrological sensitivities of climate models differ significantly from that of observations. Whether this discrepancy can be attributed to observational error, which is substantial as different estimates of global-mean precipitation are not even significantly correlated with each other, or to model error is unclear. The results highlight the urgency of constructing accurate estimates of global precipitation from past observations and of reducing model uncertainty in hydrological sensitivity. This paper also clarifies that previous estimates of hydrological sensitivity are limited in that standard regression methods neglect temperature–precipitation relations that occur through internal variability. An alternative method for estimating hydrological sensitivity that overcomes this limitation is presented.
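
As an illustration of the quantity defined above, the following is a minimal sketch of the standard regression estimate of hydrological sensitivity (percent change in global-mean precipitation per kelvin), assuming hypothetical arrays of annual global means from a single simulation; the abstract notes that this kind of estimate neglects temperature–precipitation relations arising from internal variability.

```python
import numpy as np

def hydrological_sensitivity(gm_temp, gm_precip, precip_clim):
    """Ordinary least-squares estimate of hydrological sensitivity:
    fractional change in global-mean precipitation (% per K) regressed
    on global-mean temperature change. Inputs are hypothetical 1-D
    arrays of annual global means from a single simulation."""
    dP = 100.0 * (gm_precip - precip_clim) / precip_clim  # % anomaly
    dT = gm_temp - gm_temp.mean()                         # K anomaly
    # Slope of dP on dT; note this conflates forced and internal
    # covariability, the limitation discussed in the abstract.
    return np.polyfit(dT, dP, 1)[0]

# Toy example with synthetic data (illustration only)
rng = np.random.default_rng(0)
T = np.linspace(0.0, 1.0, 50) + 0.1 * rng.standard_normal(50)   # K
P = 2.7 * (1.0 + 0.02 * T) + 0.01 * rng.standard_normal(50)     # mm/day
print(hydrological_sensitivity(T, P, 2.7))  # roughly 2 % per K
```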

Xiaoqin Yan, Timothy DelSole, and Michael K. Tippett

Abstract

This paper shows that joint temperature–precipitation information over a global domain provides a more accurate estimate of aerosol forced responses in climate models than does any other combination of temperature, precipitation, or sea level pressure. This fact is demonstrated using a new quantity called potential detectability, which measures the extent to which a forced response can be detected in a model. In particular, this measure can be evaluated independently of observations and therefore permits efficient exploration of a large number of variable combinations before performing optimal fingerprinting on observations. This paper also shows that the response to anthropogenic aerosol forcing can be separated from that of other forcings using spatial structure alone, leaving the time variation of the response to be inferred from data, thereby demonstrating that temporal information is not necessary for detection. The spatial structure of the forced response is derived by maximizing the signal-to-noise ratio. For single variables, the north–south hemispheric gradient and equator-to-pole latitudinal gradient are important spatial structures for detecting anthropogenic aerosols in some models but not all. Sea level pressure is not an independent detection variable because it is derived partly from surface temperature. In no case does sea level pressure significantly enhance potential detectability beyond that already possible using surface temperature. Including seasonal or land–sea contrast information does not significantly enhance detectability of anthropogenic aerosol responses relative to annual means over global domains.
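
The spatial structures mentioned above are obtained by maximizing a signal-to-noise ratio. Below is a minimal sketch of one standard way to do this, via a generalized eigenvalue problem with hypothetical signal and noise covariance matrices; it is not the paper's exact implementation.

```python
import numpy as np
from scipy.linalg import eigh

def max_snr_pattern(signal_cov, noise_cov):
    """Return the projection vector maximizing w' S w / w' N w via the
    generalized eigenproblem S w = lambda N w. signal_cov and noise_cov
    are hypothetical (n x n) covariances estimated, for example, from
    forced-response and control simulations."""
    eigvals, eigvecs = eigh(signal_cov, noise_cov)
    # eigh returns eigenvalues in ascending order; take the largest.
    return eigvals[-1], eigvecs[:, -1]

# Toy example with random covariances (illustration only)
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)); S = A @ A.T
B = rng.standard_normal((5, 5)); N = B @ B.T + 5 * np.eye(5)
snr, w = max_snr_pattern(S, N)
print(snr)
```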

Timothy DelSole, Liwei Jia, and Michael K. Tippett

Abstract

This paper proposes a new approach to linearly combining multimodel forecasts, called scale-selective ridge regression, which ensures that the weighting coefficients satisfy certain smoothness constraints. The smoothness constraint reflects the “prior assumption” that seasonally predictable patterns tend to be large scale. In the absence of a smoothness constraint, regression methods typically produce noisy weights and hence noisy predictions. Constraining the weights to be smooth ensures that the multimodel combination is no less smooth than the individual model forecasts. The proposed method is equivalent to minimizing a cost function comprising the familiar mean square error plus a “penalty function” that penalizes weights with large spatial gradients. The method reduces to pointwise ridge regression for a suitable choice of constraint. The method is tested using the Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) hindcast dataset during 1960–2005. The cross-validated skill of the proposed forecast method is shown to be larger than the skill of either ordinary least squares or pointwise ridge regression, although the significance of this difference is difficult to test owing to the small sample size. The model weights derived from the method are much smoother than those obtained from ordinary least squares or pointwise ridge regression. Interestingly, regressions in which the weights are completely independent of space give comparable overall skill. The scale-selective ridge regression is computationally more intensive than pointwise methods since the solution requires solving equations that couple all grid points together.
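
A minimal sketch of the penalized least-squares idea described above, assuming a hypothetical one-dimensional grid so that the roughness penalty is a simple first-difference operator; the paper's formulation applies to full two-dimensional weight fields.

```python
import numpy as np

def scale_selective_ridge(X, y, lam):
    """Solve min_w ||y - X w||^2 + lam * ||D w||^2, where D is a
    first-difference operator penalizing large spatial gradients in the
    weights. Replacing D with the identity gives ordinary ridge regression."""
    n_features = X.shape[1]
    D = np.eye(n_features) - np.eye(n_features, k=1)  # first differences
    D = D[:-1]                                        # drop incomplete last row
    lhs = X.T @ X + lam * D.T @ D
    rhs = X.T @ y
    return np.linalg.solve(lhs, rhs)

# Toy example (illustration only): smooth "true" weights on a 1-D grid
rng = np.random.default_rng(2)
X = rng.standard_normal((40, 10))
w_true = np.sin(np.linspace(0, np.pi, 10))
y = X @ w_true + 0.1 * rng.standard_normal(40)
print(scale_selective_ridge(X, y, lam=10.0).round(2))
```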

Timothy DelSole, Michael K. Tippett, and Jagadish Shukla

Abstract

The problem of separating variations due to natural and anthropogenic forcing from those due to unforced internal dynamics during the twentieth century is addressed using state-of-the-art climate simulations and observations. An unforced internal component that varies on multidecadal time scales is identified by a new statistical method that maximizes integral time scale. This component, called the internal multidecadal pattern (IMP), is stochastic and hence does not contribute to trends on long time scales; however, it can contribute significantly to short-term trends. Observational estimates indicate that the trend in the spatially averaged “well observed” sea surface temperature (SST) due to the forced component has an approximately constant value of 0.1 K decade⁻¹, while the IMP can contribute about ±0.08 K decade⁻¹ for a 30-yr trend. The warming and cooling of the IMP matches that of the Atlantic multidecadal oscillation and is of sufficient amplitude to explain the acceleration in warming during 1977–2008 as compared to 1946–77, despite the forced component increasing at the same rate during these two periods. The amplitude and time scale of the IMP are such that its contribution to the trend dominates that of the forced component on time scales shorter than 16 yr, implying that the lack of warming trend during the past 10 yr is not statistically significant. Furthermore, since the IMP varies naturally on multidecadal time scales, it is potentially predictable on decadal time scales, providing a scientific rationale for decadal predictions. While the IMP can contribute significantly to trends for periods of 30 yr or shorter, it cannot account for the 0.8°C warming that has been observed in the twentieth-century spatially averaged SST.
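
A short back-of-the-envelope sketch of the trend arithmetic implied above, treating the quoted values (0.1 K decade⁻¹ forced, ±0.08 K decade⁻¹ from the IMP for a 30-yr trend) as round numbers:

```python
# Superposition of a steady forced trend and the stochastic internal
# multidecadal pattern (IMP): the observed short-term trend is their sum.
forced = 0.10     # K per decade, approximately constant forced trend
imp_30yr = 0.08   # K per decade, IMP contribution to a 30-yr trend

print("30-yr trend, IMP warming phase:", forced + imp_30yr)  # ~0.18 K/decade
print("30-yr trend, IMP cooling phase:", forced - imp_30yr)  # ~0.02 K/decade
# On shorter windows the IMP contribution grows relative to the forced
# part, which is why it dominates trends shorter than about 16 years.
```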

Craig H. Bishop, Carolyn A. Reynolds, and Michael K. Tippett

Abstract

An exact closed form expression for the infinite time analysis and forecast error covariances of a Kalman filter is used to investigate how the locations of fixed observing platforms such as radiosonde stations affect global distributions of analysis and forecast error variance. The solution pertains to a system with no model error, time-independent nondefective unstable dynamics, time-independent observation operator, and time-independent observation error covariance. As far as the authors are aware, the solutions are new. It is shown that only nondecaying normal modes (eigenvectors of the dynamics operator) are required to represent the infinite time error covariance matrices. Consequently, once a complete set of nondecaying eigenvectors has been obtained, the solution allows for the rapid assessment of the error-reducing potential of any observational network that bounds error variance.

Atmospherically relevant time-independent basic states and their corresponding tangent linear propagators are obtained with the help of a (T21L3) quasigeostrophic global model. The closed form solution allows for an examination of the sensitivity of the error variances to many different observing configurations. It is also feasible to determine the optimal location of one additional observation given a fixed observing network, which, through repetition, can be used to build effective observing networks.

Effective observing networks result in error variances several times smaller than other types of networks with the same number of column observations, such as equally spaced or land-based networks. The impact of the observing network configuration on global error variance is greater when the observing network is less dense. The impact of observations at different pressure levels is also examined. It is found that upper-level observations are more effective at reducing globally averaged error variance, but midlevel observations are more effective at reducing forecast error variance at and downstream of the baroclinic regions associated with midlatitude jets.
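
For context, the following is a minimal sketch of the discrete Kalman filter covariance recursion whose infinite-time fixed point the closed-form expression describes, with a perfect model (no model-error term) and small hypothetical dynamics, observation, and observation-error matrices standing in for the quasigeostrophic system.

```python
import numpy as np

def asymptotic_error_covariance(M, H, R, n_iter=500):
    """Iterate the Kalman filter covariance recursion to its fixed point:
    forecast step  P_f = M P_a M^T                       (no model error)
    analysis step  P_a = (I - K H) P_f,  K = P_f H^T (H P_f H^T + R)^{-1}
    Returns the asymptotic analysis and forecast error covariances."""
    n = M.shape[0]
    P_a = np.eye(n)
    for _ in range(n_iter):
        P_f = M @ P_a @ M.T
        K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
        P_a = (np.eye(n) - K @ H) @ P_f
    return P_a, P_f

# Toy example: two variables, one fixed "station" observing x1 only
M = np.array([[1.01, 0.10],   # weakly unstable, time-independent dynamics
              [0.00, 0.95]])
H = np.array([[1.0, 0.0]])    # time-independent observation operator
R = np.array([[0.1]])         # time-independent observation error covariance
P_a, P_f = asymptotic_error_covariance(M, H, R)
print(P_a)
```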

Michael K. Tippett, Lisa Goddard, and Anthony G. Barnston

Abstract

Interannual precipitation variability in central-southwest (CSW) Asia has been associated with East Asian jet stream variability and western Pacific tropical convection. However, atmospheric general circulation models (AGCMs) forced by observed sea surface temperature (SST) poorly simulate the region’s interannual precipitation variability. The statistical–dynamical approach uses statistical methods to correct systematic deficiencies in the response of AGCMs to SST forcing. Statistical correction methods linking model-simulated Indo–west Pacific precipitation and observed CSW Asia precipitation result in modest, but statistically significant, cross-validated simulation skill in the northeast part of the domain for the period from 1951 to 1998. The statistical–dynamical method is also applied to recent (winter 1998/99 to 2002/03) multimodel, two-tier December–March precipitation forecasts initiated in October. This period includes 4 yr (winter of 1998/99 to 2001/02) of severe drought. Tercile probability forecasts are produced using ensemble-mean forecasts and forecast error estimates. The statistical–dynamical forecasts show enhanced probability of below-normal precipitation for the four drought years and capture the return to normal conditions in part of the region during the winter of 2002/03.

May Kabul be without gold, but not without snow.

—Traditional Afghan proverb
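
A minimal sketch of the flavor of statistical correction described above, assuming a single hypothetical predictor (a model-simulated Indo–west Pacific precipitation index) and a single predictand (observed seasonal precipitation at one CSW Asia grid point), with leave-one-out cross-validation; the paper's actual method operates on full spatial fields.

```python
import numpy as np

def loo_cv_regression_skill(x_model, y_obs):
    """Leave-one-out cross-validated skill of a simple linear correction
    y ~ a + b*x, where x is a hypothetical model precipitation index and
    y the observed precipitation anomaly. Returns the correlation of the
    cross-validated predictions with the observations."""
    n = len(x_model)
    y_hat = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        b, a = np.polyfit(x_model[keep], y_obs[keep], 1)  # slope, intercept
        y_hat[i] = a + b * x_model[i]
    return np.corrcoef(y_hat, y_obs)[0, 1]

# Toy example with synthetic, weakly related series (illustration only)
rng = np.random.default_rng(3)
x = rng.standard_normal(48)
y = 0.4 * x + rng.standard_normal(48)
print(loo_cv_regression_skill(x, y))
```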

Michael K. Tippett, Anthony G. Barnston, and Andrew W. Robertson

Abstract

Ensemble simulations and forecasts provide probabilistic information about the inherently uncertain climate system. Counting the number of ensemble members in a category is a simple nonparametric method of using an ensemble to assign categorical probabilities. Parametric methods of assigning quantile-based categorical probabilities include distribution fitting and generalized linear regression. Here the accuracy of counting and parametric estimates of tercile category probabilities is compared. The methods are first compared in an idealized setting where analytical results show how ensemble size and level of predictability control the accuracy of both methods. The authors also show how categorical probability estimate errors degrade the rank probability skill score. The analytical results provide a good description of the behavior of the methods applied to seasonal precipitation from a 53-yr, 79-member ensemble of general circulation model simulations. Parametric estimates of seasonal precipitation tercile category probabilities are generally more accurate than the counting estimate. In addition to determining the relative accuracies of the different methods, the analysis quantifies the relative importance of the ensemble mean and variance in determining tercile probabilities. Ensemble variance is shown to be a weak factor in determining seasonal precipitation probabilities, meaning that differences between the tercile probabilities and the equal-odds probabilities are due mainly to shifts of the forecast mean away from its climatological value.
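
A minimal sketch of the two estimators compared above, using a hypothetical forecast ensemble and climatological tercile boundaries: the counting estimate is the fraction of members in each category, while a simple parametric alternative fits a normal distribution to the ensemble mean and spread.

```python
import numpy as np
from scipy.stats import norm

def tercile_probs_counting(ensemble, lower, upper):
    """Nonparametric estimate: fraction of ensemble members falling
    below, between, and above the climatological tercile boundaries."""
    below = np.mean(ensemble < lower)
    above = np.mean(ensemble > upper)
    return below, 1.0 - below - above, above

def tercile_probs_gaussian(ensemble, lower, upper):
    """Parametric estimate: fit a normal distribution to the ensemble
    mean and spread, then integrate it over the three categories."""
    mu, sigma = ensemble.mean(), ensemble.std(ddof=1)
    below = norm.cdf(lower, mu, sigma)
    above = 1.0 - norm.cdf(upper, mu, sigma)
    return below, 1.0 - below - above, above

# Toy example (illustration only): 20-member ensemble, standardized
# anomalies, so the climatological tercile boundaries are about +/-0.43.
rng = np.random.default_rng(4)
ens = 0.3 + rng.standard_normal(20)
print(tercile_probs_counting(ens, -0.43, 0.43))
print(tercile_probs_gaussian(ens, -0.43, 0.43))
```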

Michael K. Tippett, Suzana J. Camargo, and Adam H. Sobel

Abstract

A Poisson regression between the observed climatology of tropical cyclogenesis (TCG) and large-scale climate variables is used to construct a TCG index. The regression methodology is objective and provides a framework for the selection of the climate variables in the index. Broadly following earlier work, four climate variables appear in the index: low-level absolute vorticity, relative humidity, relative sea surface temperature (SST), and vertical shear. Several variants in the choice of predictors are explored, including relative SST versus potential intensity and satellite-based column-integrated relative humidity versus reanalysis relative humidity at a single level; these choices lead to modest differences in the performance of the index. The feature of the new index that leads to the greatest improvement is a functional dependence on low-level absolute vorticity that causes the index response to absolute vorticity to saturate when absolute vorticity exceeds a threshold. This feature reduces some biases of the index and improves the fidelity of its spatial distribution. Physically, this result suggests that once low-level environmental vorticity reaches a sufficiently large value, other factors become rate limiting so that further increases in vorticity (at least on a monthly mean basis) do not increase the probability of genesis.

Although the index is fit to climatological data, it reproduces some aspects of interannual variability when applied to interannually varying data. Overall, the new index compares favorably with the genesis potential index (GPI), whose derivation, computation, and analysis are more complex in part because of its dependence on potential intensity.
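
A minimal sketch of a Poisson regression of the kind used to build such an index, with hypothetical predictors and an assumed vorticity threshold to mimic the saturating dependence; the paper's exact functional form and coefficients are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

def fit_tcg_index(vort, rel_hum, rel_sst, shear, counts):
    """Fit a Poisson regression for monthly tropical cyclogenesis counts.
    Predictors are hypothetical 1-D arrays over grid boxes and months:
    low-level absolute vorticity, relative humidity, relative SST, and
    vertical shear. Vorticity is clipped so the response saturates."""
    vclip = np.minimum(vort, 3.7e-5) * 1.0e5  # assumed threshold, units 1e-5 s^-1
    X = sm.add_constant(np.column_stack([vclip, rel_hum, rel_sst, shear]))
    return sm.GLM(counts, X, family=sm.families.Poisson()).fit()

# Toy example with synthetic data (illustration only)
rng = np.random.default_rng(5)
n = 500
vort = 5e-5 * rng.random(n)
rh = 40 + 40 * rng.random(n)
rsst = rng.standard_normal(n)
shear = 5 + 15 * rng.random(n)
vclip = np.minimum(vort, 3.7e-5) * 1.0e5
rate = np.exp(-2.0 + 0.4 * vclip + 0.02 * rh + 0.3 * rsst - 0.1 * shear)
counts = rng.poisson(rate)
print(fit_tcg_index(vort, rh, rsst, shear, counts).params)
```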

Timothy DelSole, Laurie Trenary, Michael K. Tippett, and Kathleen Pegion

Abstract

This paper demonstrates that an operational forecast model can skillfully predict week-3–4 averages of temperature and precipitation over the contiguous United States. This skill is demonstrated at the gridpoint level (about 1° × 1°) by decomposing temperature and precipitation anomalies in terms of an orthogonal set of patterns that can be ordered by a measure of length scale and then showing that many of the resulting components are predictable and can be predicted in observations with statistically significant skill. The statistical significance of predictability and skill are assessed using a permutation test that accounts for serial correlation. Skill is detected based on correlation measures but not based on mean square error measures, indicating that an amplitude correction is necessary for skill. The statistical characteristics of predictability are further clarified by finding linear combinations of components that maximize predictability. The forecast model analyzed here is version 2 of the Climate Forecast System (CFSv2), and the variables considered are temperature and precipitation over the contiguous United States during January and July. A 4-day lagged ensemble, comprising 16 ensemble members, is used. The most predictable components of winter temperature and precipitation are related to ENSO, and other predictable components of winter precipitation are shown to be related to the Madden–Julian oscillation. These results establish a scientific basis for making week-3–4 weather and climate predictions.
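
A minimal sketch of a permutation test for correlation skill that accounts for serial correlation by permuting contiguous blocks rather than individual time steps; the block length and the series here are hypothetical placeholders, not the paper's configuration.

```python
import numpy as np

def block_permutation_pvalue(fcst, obs, block_len=5, n_perm=2000, seed=0):
    """One-sided p-value for the correlation between forecast and observed
    component time series. Serial correlation is respected by permuting
    contiguous blocks of the observed series (a simple block-permutation
    scheme) rather than individual time steps."""
    rng = np.random.default_rng(seed)
    n = len(obs)
    r_obs = np.corrcoef(fcst, obs)[0, 1]
    n_blocks = int(np.ceil(n / block_len))
    blocks = [obs[i * block_len:(i + 1) * block_len] for i in range(n_blocks)]
    count = 0
    for _ in range(n_perm):
        order = rng.permutation(n_blocks)
        shuffled = np.concatenate([blocks[i] for i in order])[:n]
        if np.corrcoef(fcst, shuffled)[0, 1] >= r_obs:
            count += 1
    return (count + 1) / (n_perm + 1)

# Toy example (illustration only)
rng = np.random.default_rng(6)
obs = rng.standard_normal(60)
fcst = 0.5 * obs + rng.standard_normal(60)
print(block_permutation_pvalue(fcst, obs))
```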

Michelle L. L’Heureux, Michael K. Tippett, and Anthony G. Barnston