Search Results

Showing items 11–20 of 37 for Author or Editor: David B. Stephenson
Mark P. Baldwin, David B. Stephenson, and Ian T. Jolliffe

Abstract

Often there is a need to consider spatial weighting in methods for finding spatial patterns in climate data. The focus of this paper is on techniques that maximize variance, such as empirical orthogonal functions (EOFs). A weighting matrix is introduced into a generalized framework for dealing with spatial weighting. One basic principle in the design of the weighting matrix is that the resulting spatial patterns are independent of the grid used to represent the data. A weighting matrix can also be used for other purposes, such as to compensate for the neglect of unrepresented subgrid-scale variance or, in the form of a prewhitening filter, to maximize the signal-to-noise ratio of EOFs. The new methodology is applicable to other types of climate pattern analysis, such as extended EOF analysis and maximum covariance analysis. The increasing availability of large datasets of three-dimensional gridded variables (e.g., reanalysis products and model output) raises special issues for data-reduction methods such as EOFs. Fast, memory-efficient methods are required in order to extract leading EOFs from such large datasets. This study proposes one such approach based on a simple iteration of successive projections of the data onto time series and spatial maps. It is also demonstrated that spatial weighting can be combined with the iterative methods. Throughout the paper, multivariate statistics notation is used, simplifying implementation as matrix commands in high-level computing languages.
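
The iteration of successive projections described above can be sketched as a power iteration that alternates between a time series and a spatial map, never forming the full covariance matrix. This is an illustrative reading of the abstract, not the paper's exact algorithm; the helper name and the synthetic data are assumptions.

```python
import numpy as np

def leading_eof(X, weights=None, n_iter=100):
    """Leading (weighted) EOF of a (time x space) matrix X by alternating
    projections onto a time series and a spatial map."""
    if weights is not None:
        X = X * np.sqrt(weights)       # fold spatial weights into the data columns
    p = np.random.default_rng(0).standard_normal(X.shape[1])
    for _ in range(n_iter):
        t = X @ p                      # project data onto the spatial map
        p = X.T @ t                    # project data onto the time series
        p /= np.linalg.norm(p)         # normalize to keep the iteration stable
    t = X @ p
    if weights is not None:
        p = p / np.sqrt(weights)       # undo weighting to recover the pattern
    return p, t

# Synthetic data dominated by one spatial pattern
rng = np.random.default_rng(1)
true_pat = np.sin(np.linspace(0, np.pi, 50))
X = np.outer(rng.standard_normal(200), true_pat) + 0.1 * rng.standard_normal((200, 50))
pat, ts = leading_eof(X, weights=np.ones(50))
corr = abs(np.corrcoef(pat, true_pat)[0, 1])
```

Because each step touches only matrix–vector products, the memory cost stays linear in the data size, which is the point of the iterative approach for large gridded datasets.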

Full access
Maarten H. P. Ambaum, Brian J. Hoskins, and David B. Stephenson

Abstract

The definition and interpretation of the Arctic oscillation (AO) are examined and compared with those of the North Atlantic oscillation (NAO). It is shown that the NAO reflects the correlations between the surface pressure variability at its centers of action, whereas this is not the case for the AO. The NAO pattern can be identified in a physically consistent way in principal component analysis applied to various fields in the Euro-Atlantic region. A similar identification is found in the Pacific region for the Pacific–North American (PNA) pattern, but no such identification is found here for the AO. The AO does reflect the tendency for the zonal winds at 35° and 55°N to anticorrelate in both the Atlantic and Pacific regions associated with the NAO and PNA. Because climatological features in the two ocean basins are at different latitudes, the zonally symmetric nature of the AO does not mean that it represents a simple modulation of the circumpolar flow. An increase in the AO or NAO implies strong, separated tropospheric jets in the Atlantic but a weakened Pacific jet. The PNA has strong related variability in the Pacific jet exit, but elsewhere the zonal wind is similar to that related to the NAO. The NAO-related zonal winds link strongly through to the stratosphere in the Atlantic sector. The PNA-related winds do so in the Pacific, but to a lesser extent. The results suggest that the NAO paradigm may be more physically relevant and robust for Northern Hemisphere variability than is the AO paradigm. However, this does not disqualify many of the physical mechanisms associated with annular modes for explaining the existence of the NAO.

Full access
Timothy J. Mosedale, David B. Stephenson, and Matthew Collins

Abstract

A simple linear stochastic climate model of extratropical wintertime ocean–atmosphere coupling is used to diagnose the daily interactions between the ocean and the atmosphere in a fully coupled general circulation model. Monte Carlo simulations with the simple model show that the influence of the ocean on the atmosphere can be difficult to estimate, being biased low even with multiple decades of daily data. Despite this, fitting the simple model to the surface air temperature and sea surface temperature data from the complex general circulation model reveals an ocean-to-atmosphere influence in the northeastern Atlantic. Furthermore, the simple model is used to demonstrate that the ocean in this region greatly enhances the autocorrelation in overlying lower-tropospheric temperatures at lags from a few days to many months.
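
A minimal sketch of the kind of linear stochastic coupled model described: a fast, noisy atmospheric temperature coupled to a slow, persistent oceanic one. The parameter values below are invented for illustration, not fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20 * 365                 # roughly 20 years of daily steps
a, b = 0.7, 0.05             # atmosphere memory, ocean-to-atmosphere coupling
c, d = 0.99, 0.02            # ocean memory, atmosphere-to-ocean coupling

Ta = np.zeros(n)             # surface air temperature anomaly
To = np.zeros(n)             # sea surface temperature anomaly
for t in range(1, n):
    Ta[t] = a * Ta[t - 1] + b * To[t - 1] + rng.standard_normal()
    To[t] = c * To[t - 1] + d * Ta[t - 1] + 0.1 * rng.standard_normal()

def lag1(x):
    """Lag-1 autocorrelation."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

r_atm, r_ocn = lag1(Ta), lag1(To)
```

The much larger lag-1 autocorrelation of the ocean variable illustrates why coupling to the ocean can enhance persistence in overlying lower-tropospheric temperatures.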

Full access
Maarten H. P. Ambaum, Brian J. Hoskins, and David B. Stephenson
Full access
Donald P. Cummins, David B. Stephenson, and Peter A. Stott

Abstract

This study has developed a rigorous and efficient maximum likelihood method for estimating the parameters in stochastic energy balance models (with any number of boxes k > 0) given time series of surface temperature and top-of-the-atmosphere net downward radiative flux. The method works by finding a state-space representation of the linear dynamic system and evaluating the likelihood recursively via the Kalman filter. Confidence intervals for estimated parameters are straightforward to construct in the maximum likelihood framework, and information criteria may be used to choose an optimal number of boxes for parsimonious k-box emulation of atmosphere–ocean general circulation models (AOGCMs). In addition to estimating model parameters the method enables hidden state estimation for the unobservable boxes corresponding to the deep ocean, and also enables noise filtering for observations of surface temperature. The feasibility, reliability, and performance of the proposed method are demonstrated in a simulation study. To obtain a set of optimal k-box emulators, models are fitted to the 4 × CO2 step responses of 16 AOGCMs in CMIP5. It is found that for all 16 AOGCMs three boxes are required for optimal k-box emulation. The number of boxes k is found to influence, sometimes strongly, the impulse responses of the fitted models.
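
The recursive likelihood evaluation can be sketched for a generic linear-Gaussian state-space model x_t = A x_{t-1} + w_t, y_t = H x_t + v_t: the Kalman filter yields the innovation and its variance at each step, from which the Gaussian log-likelihood accumulates. The toy two-box parameters below are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np

def kalman_loglik(y, A, H, Q, R, x0, P0):
    """Log-likelihood of observations y under a linear-Gaussian state-space model."""
    x, P, ll = x0, P0, 0.0
    for yt in y:
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # innovation and its covariance
        v = yt - H @ x
        S = H @ P @ H.T + R
        ll += -0.5 * (v @ np.linalg.solve(S, v) + np.log(np.linalg.det(S))
                      + len(v) * np.log(2 * np.pi))
        # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ v
        P = (np.eye(len(x)) - K @ H) @ P
    return ll

# Toy two-box system: only the surface box is observed (as for surface temperature)
A = np.array([[0.9, 0.05], [0.02, 0.99]])
H = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)
R = np.array([[0.05]])
rng = np.random.default_rng(0)
x = np.zeros(2)
ys = []
for _ in range(200):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    ys.append(H @ x + rng.multivariate_normal(np.zeros(1), R))
ll = kalman_loglik(ys, A, H, Q, R, np.zeros(2), np.eye(2))
```

Maximizing this quantity over the model parameters, and comparing information criteria across k, is the workflow the abstract describes.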

Free access
Fotis Panagiotopoulos, Maria Shahgedanova, Abdelwaheb Hannachi, and David B. Stephenson

Abstract

This study investigates variability in the intensity of the wintertime Siberian high (SH) by defining a robust SH index (SHI) and correlating it with selected meteorological fields and teleconnection indices. A dramatic trend of −2.5 hPa decade⁻¹ has been found in the SHI between 1978 and 2001, with the lowest SHI values recorded since 1871. The weakening of the SH has been confirmed by analyzing different historical gridded analyses and individual station observations of sea level pressure (SLP) and excluding possible effects from the conversion of surface pressure to SLP.

SHI correlation maps with various meteorological fields show that the SH's impacts on circulation and temperature patterns extend far beyond its source area, reaching from the Arctic to the tropical Pacific. Advection of warm air from eastern Europe has been identified as the main mechanism causing milder than normal conditions over the Kara and Laptev Seas in association with a strong SH. Despite the strong impacts of the variability in the SH on climatic variability across the Northern Hemisphere, correlations between the SHI and the main teleconnection indices of the Northern Hemisphere are weak. Regression analysis has shown that teleconnection indices are not able to reproduce the interannual variability and trends in the SH. The inclusion of regional surface temperature in the regression model provides closer agreement between the original and reconstructed SHI.
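
The index-plus-trend diagnostic can be sketched as an area-mean SLP series with a least-squares linear fit expressed in hPa per decade. The synthetic SHI series below is an assumption built to mimic the reported magnitude; it is not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1978, 2002)                       # winters 1978-2001

# Hypothetical SHI (hPa): a base value plus a weak decline and interannual noise
shi = 1030.0 - 0.25 * (years - 1978) + rng.standard_normal(years.size)

# Least-squares linear trend, converted from hPa per year to hPa per decade
slope_per_year = np.polyfit(years, shi, 1)[0]
trend_per_decade = 10.0 * slope_per_year
```

With a built-in decline of 0.25 hPa yr⁻¹, the fitted trend recovers a value near the −2.5 hPa decade⁻¹ quoted in the abstract.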

Full access
Christopher A. T. Ferro, Abdelwaheb Hannachi, and David B. Stephenson

Abstract

Anthropogenic influences are expected to cause the probability distribution of weather variables to change in nontrivial ways. This study presents simple nonparametric methods for exploring and comparing differences in pairs of probability distribution functions. The methods are based on quantiles and allow changes in all parts of the probability distribution to be investigated, including the extreme tails. Adjusted quantiles are used to investigate whether changes are simply due to shifts in location (e.g., mean) and/or scale (e.g., variance). Sampling uncertainty in the quantile differences is assessed using simultaneous confidence intervals calculated using a bootstrap resampling method that takes account of serial (intraseasonal) dependency. The methods are simple enough to be used on large gridded datasets. They are demonstrated here by exploring the changes between European regional climate model simulations of daily minimum temperature and precipitation totals for winters in 1961–90 and 2071–2100. Projected changes in daily precipitation are generally found to be well described by simple increases in scale, whereas minimum temperature exhibits changes in both location and scale.
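
The quantile-difference idea can be sketched as follows: compare two samples at a set of probability levels and bootstrap the differences to get approximate confidence intervals. This simplified version resamples independently; the paper's scheme is a block-style bootstrap that respects intraseasonal dependence, and the gamma samples standing in for daily precipitation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
control = rng.gamma(shape=2.0, scale=2.0, size=2000)   # stand-in for 1961-90 winters
future = rng.gamma(shape=2.0, scale=2.6, size=2000)    # same shape, larger scale

probs = np.array([0.1, 0.5, 0.9, 0.99])
diff = np.quantile(future, probs) - np.quantile(control, probs)

# Percentile bootstrap of the quantile differences (i.i.d. resampling here)
boot = np.empty((500, probs.size))
for i in range(500):
    c = rng.choice(control, size=control.size, replace=True)
    f = rng.choice(future, size=future.size, replace=True)
    boot[i] = np.quantile(f, probs) - np.quantile(c, probs)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
```

Because the second sample differs only in scale, the quantile differences grow toward the upper tail, which is exactly the signature the adjusted-quantile diagnostics are designed to separate from a pure location shift.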

Full access
Cristina Primo, Christopher A. T. Ferro, Ian T. Jolliffe, and David B. Stephenson

Abstract

Probabilistic forecasts of atmospheric variables are often given as relative frequencies obtained from ensembles of deterministic forecasts. The detrimental effects of imperfect models and initial conditions on the quality of such forecasts can be mitigated by calibration. This paper shows that Bayesian methods currently used to incorporate prior information can be written as special cases of a beta-binomial model and correspond to a linear calibration of the relative frequencies. These methods are compared with a nonlinear calibration technique (i.e., logistic regression) using real precipitation forecasts. Calibration is found to be advantageous in all cases considered, and logistic regression is preferable to linear methods.
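
The linear calibration can be made concrete: with a Beta(a, b) prior on the event probability and x of n ensemble members forecasting the event, the posterior-mean probability is (x + a)/(n + a + b), which is linear in the raw relative frequency x/n. The values a = b = 1 below are an illustrative choice, not the priors compared in the paper.

```python
def linear_calibration(x, n, a=1.0, b=1.0):
    """Posterior-mean event probability under a beta-binomial model:
    a linear recalibration of the raw relative frequency x/n."""
    return (x + a) / (n + a + b)

# A raw relative frequency of 0/50 should not be read as zero probability:
raw = 0 / 50
cal = linear_calibration(0, 50)    # pulled away from 0, toward the prior mean
```

Logistic regression generalizes this by letting the calibrated probability be a nonlinear (sigmoid) function of the raw frequency, which is the comparison the abstract reports.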

Full access
Pascal J. Mailier, David B. Stephenson, Christopher A. T. Ferro, and Kevin I. Hodges

Abstract

The clustering in time (seriality) of extratropical cyclones is responsible for large cumulative insured losses in western Europe, though surprisingly little scientific attention has been given to this important property. This study investigates and quantifies the seriality of extratropical cyclones in the Northern Hemisphere using a point-process approach. A possible mechanism for serial clustering is the time-varying effect of the large-scale flow on individual cyclone tracks. Another mechanism is the generation by one “parent” cyclone of one or more “offspring” through secondary cyclogenesis. A long cyclone-track database was constructed for extended October–March winters from 1950 to 2003 using 6-h analyses of 850-mb relative vorticity derived from the NCEP–NCAR reanalysis. A dispersion statistic based on the variance-to-mean ratio of monthly cyclone counts was used as a measure of clustering. It reveals extensive regions of statistically significant clustering in the European exit region of the North Atlantic storm track and over the central North Pacific. Monthly cyclone counts were regressed on time-varying teleconnection indices with a log-linear Poisson model. Five independent teleconnection patterns were found to be significant factors over Europe: the North Atlantic Oscillation (NAO), the east Atlantic pattern, the Scandinavian pattern, the east Atlantic–western Russian pattern, and the polar–Eurasian pattern. The NAO alone is not sufficient for explaining the variability of cyclone counts in the North Atlantic region and western Europe. Rate dependence on time-varying teleconnection indices accounts for the variability in monthly cyclone counts, so a separate cluster process need not be invoked.
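
The dispersion statistic can be sketched directly: for a homogeneous Poisson process the variance-to-mean ratio of counts is 1, while a rate that varies in time (as when the large-scale flow modulates cyclone occurrence) produces overdispersion. The synthetic counts and rate distribution below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# No clustering: counts from a constant-rate Poisson process
poisson_counts = rng.poisson(lam=5.0, size=2000)

# Clustering: the monthly rate itself varies (gamma-distributed, mean 5),
# which yields overdispersed (negative-binomial-like) counts
rates = rng.gamma(shape=2.0, scale=2.5, size=2000)
clustered_counts = rng.poisson(lam=rates)

def dispersion(counts):
    """Variance-to-mean ratio; 1 for Poisson, > 1 indicates clustering."""
    return counts.var(ddof=1) / counts.mean()

d_pois = dispersion(poisson_counts)
d_clus = dispersion(clustered_counts)
```

Regressing the log rate on teleconnection indices, as in the abstract's log-linear Poisson model, tests whether such rate variation alone explains the observed overdispersion.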

Full access
Robin J. Hogan, Christopher A. T. Ferro, Ian T. Jolliffe, and David B. Stephenson

Abstract

In the forecasting of binary events, verification measures that are “equitable” were defined by Gandin and Murphy to satisfy two requirements: 1) they award all random forecasting systems, including those that always issue the same forecast, the same expected score (typically zero), and 2) they are expressible as the linear weighted sum of the elements of the contingency table, where the weights are independent of the entries in the table, apart from the base rate. The authors demonstrate that the widely used “equitable threat score” (ETS), as well as numerous others, satisfies neither of these requirements and only satisfies the first requirement in the limit of an infinite sample size. Such measures are referred to as “asymptotically equitable.” In the case of ETS, the expected score of a random forecasting system is always positive and only falls below 0.01 when the number of samples is greater than around 30. Two other asymptotically equitable measures are the odds ratio skill score and the symmetric extreme dependency score, which are more strongly inequitable than ETS, particularly for rare events; for example, when the base rate is 2% and the sample size is 1000, random but unbiased forecasting systems yield an expected score of around −0.5, reducing in magnitude to −0.01 or smaller only for sample sizes exceeding 25 000. This presents a problem since these nonlinear measures have other desirable properties, in particular being reliable indicators of skill for rare events (provided that the sample size is large enough). A potential way to reconcile these properties with equitability is to recognize that Gandin and Murphy’s two requirements are independent, and the second can be safely discarded without losing the key advantages of equitability that are embodied in the first. This enables inequitable and asymptotically equitable measures to be scaled to make them equitable, while retaining their nonlinearity and other properties such as being reliable indicators of skill for rare events. It also opens up the possibility of designing new equitable verification measures.
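
The small-sample inequitability of ETS can be checked by Monte Carlo: score random, unbiased forecasts and average. The standard contingency-table formula for ETS is used below; the sample sizes and base rate are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ets(a, b, c, d):
    """Equitable threat score from contingency-table counts
    (a hits, b false alarms, c misses, d correct rejections)."""
    n = a + b + c + d
    a_r = (a + b) * (a + c) / n           # hits expected by chance
    denom = a + b + c - a_r
    return (a - a_r) / denom if denom != 0 else 0.0

def expected_ets(n, base_rate, trials=10000):
    """Mean ETS of random forecasts drawn with the observed base rate."""
    scores = []
    for _ in range(trials):
        obs = rng.random(n) < base_rate
        fcs = rng.random(n) < base_rate    # random but unbiased forecasts
        a = np.sum(fcs & obs)
        b = np.sum(fcs & ~obs)
        c = np.sum(~fcs & obs)
        d = np.sum(~fcs & ~obs)
        scores.append(ets(a, b, c, d))
    return float(np.mean(scores))

small = expected_ets(20, 0.3)    # small sample: expected score clearly above zero
large = expected_ets(500, 0.3)   # large sample: expected score near zero
```

Subtracting this sample-size-dependent expected score from ETS is one way to realize the scaling the authors propose, restoring the first of Gandin and Murphy's requirements while keeping the measure nonlinear.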

Full access