Search Results

Showing 1–5 of 5 items for Author or Editor: Philip G. Sansom (all content).
Daniel B. Williamson and Philip G. Sansom

Abstract

The use of emergent constraints to quantify uncertainty for policy-relevant quantities such as equilibrium climate sensitivity (ECS) has become increasingly widespread in recent years. Some researchers, however, argue that emergent constraints are inappropriate, or that they underreport uncertainty. In this paper we contribute to this discussion by examining the emergent constraints methodology in terms of its underpinning statistical assumptions. We argue that the statistical assumptions required to underpin existing frameworks are strong, hard to defend, and lead to an underreporting of uncertainty. We show how weakening them leads to a more transparent Bayesian framework wherein hitherto-ignored sources of uncertainty, such as how reality might differ from models, can be quantified. We present a guided framework for the quantification of additional uncertainties that is linked to the confidence we can have in the underpinning physical arguments for using linear constraints. We provide a software tool for implementing our framework for emergent constraints and use it to illustrate the methods on a number of recent emergent constraints for ECS. We find that the robustness of any constraint to additional uncertainties depends strongly on the confidence we have in the underpinning physics, allowing a future framing of the debate over the validity of a particular constraint around underlying physical arguments rather than statistical assumptions. We also find that when physical arguments lead to confidence in the linear relationships underpinning emergent constraints, prediction intervals are only slightly widened by including additional uncertainties; we demonstrate this across a range of emergent constraints for ECS.
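The central idea of the abstract can be sketched numerically: fit a linear emergent constraint by least squares, then widen the prediction interval with an observation-uncertainty term and a model-reality discrepancy term. All data, parameter values, and the discrepancy variance below are hypothetical; this is not the paper's software tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: observable predictor x and ECS y for 20 models
x = rng.normal(0.0, 1.0, 20)
y = 3.0 + 0.8 * x + rng.normal(0.0, 0.4, 20)

# Least-squares fit of the linear constraint y = a + b * x
X = np.column_stack([np.ones_like(x), x])
(a, b), (rss,), *_ = np.linalg.lstsq(X, y, rcond=None)
n = len(x)
s2 = rss / (n - 2)                     # residual variance about the fit

x_obs, sd_obs = 0.5, 0.2               # observed predictor and its uncertainty
sxx = ((x - x.mean()) ** 2).sum()

# Standard regression prediction variance at the observed predictor value
var_basic = s2 * (1.0 + 1.0 / n + (x_obs - x.mean()) ** 2 / sxx)

# Add observation uncertainty in x and a model-reality discrepancy term
disc_var = 0.25                        # assumed discrepancy variance
var_full = var_basic + b ** 2 * sd_obs ** 2 + disc_var

pred_sd_basic, pred_sd_full = np.sqrt(var_basic), np.sqrt(var_full)
```

The extra terms can only widen the interval; by how much depends on the assumed discrepancy variance, which is where confidence in the underlying physical arguments enters.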

Free access
Stefan Siegert, David B. Stephenson, Philip G. Sansom, Adam A. Scaife, Rosie Eade, and Alberto Arribas

Abstract

Predictability estimates of ensemble prediction systems are uncertain because of limited numbers of past forecasts and observations. To account for such uncertainty, this paper proposes a Bayesian inferential framework that provides a simple 6-parameter representation of ensemble forecasting systems and the corresponding observations. The framework is probabilistic and thus allows for quantifying uncertainty in predictability measures, such as correlation skill and signal-to-noise ratios. It also provides a natural way to produce recalibrated probabilistic predictions from uncalibrated ensemble forecasts.

The framework is used to address important questions concerning the skill of winter hindcasts of the North Atlantic Oscillation for 1992–2011 issued by the Met Office Global Seasonal Forecast System, version 5 (GloSea5) climate prediction system. Although there is much uncertainty in the correlation between ensemble mean and observations, there is strong evidence of skill: the 95% credible interval of the correlation coefficient of [0.19, 0.68] does not include zero. There is also strong evidence that the forecasts are not exchangeable with the observations: with over 99% certainty, the signal-to-noise ratio of the forecasts is smaller than the signal-to-noise ratio of the observations, which suggests that raw forecasts should not be taken as representative scenarios of the observations. Forecast recalibration is thus required, which can be coherently addressed within the proposed framework.
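A signal-plus-noise representation of the kind described above can be simulated directly. The parameterization and values below are illustrative assumptions (not the GloSea5 estimates): ensemble members and observations share a predictable signal, and the two signal-to-noise ratios follow from the six parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
T, M = 20, 24                      # years, ensemble members

# Six illustrative parameters
mu_x, mu_y = 0.0, 0.0              # forecast and observation means
beta, sigma_s = 1.0, 1.0           # signal scaling and signal spread
sigma_eta, sigma_eps = 3.0, 1.0    # forecast noise and observation noise

s = rng.normal(0.0, sigma_s, T)                            # shared predictable signal
x = mu_x + s[:, None] + rng.normal(0, sigma_eta, (T, M))   # ensemble forecasts
y = mu_y + beta * s + rng.normal(0, sigma_eps, T)          # observations

xbar = x.mean(axis=1)
corr = np.corrcoef(xbar, y)[0, 1]          # correlation skill of the ensemble mean
snr_forecast = sigma_s / sigma_eta         # forecast signal-to-noise ratio
snr_obs = beta * sigma_s / sigma_eps       # observed signal-to-noise ratio
```

With the values chosen here the forecast signal-to-noise ratio is smaller than the observed one, the situation the abstract describes, so raw ensemble members are overdispersed relative to the observations and recalibration is needed.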

Full access
Philip G. Sansom, Christopher A. T. Ferro, David B. Stephenson, Lisa Goddard, and Simon J. Mason

Abstract

This study describes a systematic approach to selecting optimal statistical recalibration methods and hindcast designs for producing reliable probability forecasts on seasonal-to-decadal time scales. A new recalibration method is introduced that includes adjustments for both unconditional and conditional biases in the mean and variance of the forecast distribution and linear time-dependent bias in the mean. The complexity of the recalibration can be systematically varied by restricting the parameters. Simple recalibration methods may outperform more complex ones given limited training data. A new cross-validation methodology is proposed that allows the comparison of multiple recalibration methods and varying training periods using limited data.

Part I considers the effect on forecast skill of varying the recalibration complexity and training period length. The interaction between these factors is analyzed for gridbox forecasts of annual mean near-surface temperature from the CanCM4 model. Recalibration methods that include conditional adjustment of the ensemble mean outperform simple bias correction by issuing climatological forecasts where the model has limited skill. Trend-adjusted forecasts outperform forecasts without trend adjustment at almost 75% of grid boxes. The optimal training period is around 30 yr for trend-adjusted forecasts and around 15 yr otherwise. The optimal training period is strongly related to the length of the optimal climatology. Longer training periods may increase overall performance but at the expense of very poor forecasts where skill is limited.
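The recalibration regression described above can be sketched as follows, using hypothetical hindcast data: an intercept captures unconditional bias, a slope on the forecast captures conditional bias, and a slope on time captures linear time-dependent bias. Restricting parameters recovers the simpler methods the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 30
t = np.arange(T, dtype=float)

# Hypothetical training data: observations with a trend, and hindcasts
# with unconditional, conditional, and time-dependent biases
truth = 0.02 * t + rng.normal(0, 0.5, T)
fcst = 1.5 + 0.7 * truth + 0.01 * t + rng.normal(0, 0.3, T)

# Full recalibration: y ~ a + b * x + c * t
X = np.column_stack([np.ones(T), fcst, t])
coef, rss, *_ = np.linalg.lstsq(X, truth, rcond=None)
a, b, c = coef
resid_var = rss[0] / (T - 3)       # variance of the recalibrated forecast

# Restricting parameters (b = 1, c = 0) recovers plain mean bias correction
bias = truth.mean() - fcst.mean()
```

With limited training data the restricted variants have fewer parameters to estimate, which is why they can outperform the full regression, consistent with the optimal training periods reported above.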

Full access
Philip G. Sansom, David B. Stephenson, Christopher A. T. Ferro, Giuseppe Zappa, and Len Shaffrey

Abstract

Future climate change projections are often derived from ensembles of simulations from multiple general circulation models using heuristic weighting schemes. This study provides a more rigorous justification for such schemes by introducing a nested family of three simple analysis of variance frameworks. Statistical frameworks are essential in order to quantify the uncertainty associated with the estimate of the mean climate change response.

The most general framework yields the “one model, one vote” weighting scheme often used in climate projection. However, a simpler additive framework is found to be preferable when the climate change response is not strongly model dependent. In such situations, the weighted multimodel mean may be interpreted as an estimate of the actual climate response, even in the presence of shared model biases.

Statistical significance tests are derived to choose the most appropriate framework for specific multimodel ensemble data. The framework assumptions are explicit and can be checked using simple tests and graphical techniques. The frameworks can be used to test for evidence of nonzero climate response and to construct confidence intervals for the size of the response.

The methodology is illustrated by application to North Atlantic storm track data from the Coupled Model Intercomparison Project phase 5 (CMIP5) multimodel ensemble. Despite large variations in the historical storm tracks, the cyclone frequency climate change response is not found to be model dependent over most of the region. This gives high confidence in the response estimates. Statistically significant decreases in cyclone frequency are found on the flanks of the North Atlantic storm track and in the Mediterranean basin.
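The model-dependence test at the heart of these frameworks can be sketched as a one-way analysis of variance on hypothetical responses (a simplified stand-in for the paper's nested frameworks, with made-up data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
M, R = 8, 4                        # models, runs per model

# Hypothetical climate-change responses (e.g. change in cyclone frequency),
# generated here with a shared response and no model dependence
resp = rng.normal(-0.5, 0.3, (M, R))

grand = resp.mean()
model_means = resp.mean(axis=1)

# F test: is the response model dependent?
ss_model = R * ((model_means - grand) ** 2).sum()
ss_resid = ((resp - model_means[:, None]) ** 2).sum()
f_stat = (ss_model / (M - 1)) / (ss_resid / (M * (R - 1)))
p_value = stats.f.sf(f_stat, M - 1, M * (R - 1))

# If the response is not model dependent, the simpler additive framework
# applies: the multimodel mean estimates the actual response, with an
# approximate confidence interval from the pooled residual variance
se = np.sqrt(ss_resid / (M * (R - 1)) / (M * R))
ci = (grand - 1.96 * se, grand + 1.96 * se)
```

A large p-value supports the additive framework; a small one indicates model dependence, in which case the more general "one model, one vote" framework is retained.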

Full access
Alan J. Hewitt, Ben B. B. Booth, Chris D. Jones, Eddy S. Robertson, Andy J. Wiltshire, Philip G. Sansom, David B. Stephenson, and Stan Yip

Abstract

The inclusion of carbon cycle processes within CMIP5 Earth system models provides the opportunity to explore the relative importance of differences in scenario and climate model representation to future land and ocean carbon fluxes. A two-way analysis of variance (ANOVA) approach was used to quantify the variability owing to differences between scenarios and between climate models at different lead times. For global ocean carbon fluxes, the variance attributed to differences between representative concentration pathway scenarios exceeds the variance attributed to differences between climate models by around 2025, completely dominating by 2100. This contrasts with global land carbon fluxes, where the variance attributed to differences between climate models continues to dominate beyond 2100. This suggests that modeled processes that determine ocean fluxes are currently better constrained than those of land fluxes; thus, one can be more confident in linking different future socioeconomic pathways to consequences of ocean carbon uptake than for land carbon uptake. The contribution of internal variance is negligible for ocean fluxes and small for land fluxes, indicating that there is little dependence on the initial conditions. The apparent agreement in atmosphere–ocean carbon fluxes, globally, masks strong climate model differences at a regional level. The North Atlantic and Southern Ocean are key regions, where differences in modeled processes represent an important source of variability in projected regional fluxes.
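The two-way variance partition described above can be sketched on hypothetical fluxes (constructed here so that the scenario effect dominates, mirroring the ocean-flux result; none of the numbers are from CMIP5):

```python
import numpy as np

rng = np.random.default_rng(4)
S, M = 4, 6                         # scenarios, climate models

# Hypothetical global carbon fluxes at one lead time
scen_eff = np.linspace(-2, 2, S)[:, None]     # scenario effects
model_eff = rng.normal(0, 0.5, M)[None, :]    # model effects
flux = scen_eff + model_eff + rng.normal(0, 0.2, (S, M))

grand = flux.mean()

# Balanced two-way ANOVA decomposition of the total sum of squares
ss_scen = M * ((flux.mean(axis=1) - grand) ** 2).sum()
ss_model = S * ((flux.mean(axis=0) - grand) ** 2).sum()
ss_resid = ((flux - flux.mean(axis=1, keepdims=True)
                  - flux.mean(axis=0, keepdims=True) + grand) ** 2).sum()

total = ss_scen + ss_model + ss_resid
frac_scen, frac_model = ss_scen / total, ss_model / total
```

Repeating the partition at successive lead times traces how the scenario fraction grows relative to the model fraction, which is how the 2025 crossover for ocean fluxes can be diagnosed.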

Full access