Search Results
Showing items 11–20 of 37 for Author or Editor: David B. Stephenson
Abstract
Often there is a need to consider spatial weighting in methods for finding spatial patterns in climate data. The focus of this paper is on techniques that maximize variance, such as empirical orthogonal functions (EOFs). A weighting matrix is introduced into a generalized framework for dealing with spatial weighting. One basic principle in the design of the weighting matrix is that the resulting spatial patterns should be independent of the grid used to represent the data. A weighting matrix can also be used for other purposes, such as to compensate for the neglect of unrepresented subgrid-scale variance or, in the form of a prewhitening filter, to maximize the signal-to-noise ratio of EOFs. The new methodology is applicable to other types of climate pattern analysis, such as extended EOF analysis and maximum covariance analysis. The increasing availability of large datasets of three-dimensional gridded variables (e.g., reanalysis products and model output) raises special issues for data-reduction methods such as EOFs. Fast, memory-efficient methods are required to extract leading EOFs from such large datasets. This study proposes one such approach based on a simple iteration of successive projections of the data onto time series and spatial maps. It is also demonstrated that spatial weighting can be combined with the iterative methods. Throughout the paper, multivariate statistics notation is used, simplifying implementation as matrix commands in high-level computing languages.
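The iteration of successive projections described above can be sketched as a power iteration on the (weighted) data matrix. The synthetic data, uniform weights, and iteration count below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data matrix X: n times x p grid points, dominated by one pattern.
n, p = 200, 50
true_pattern = np.sin(np.linspace(0.0, np.pi, p))
X = np.outer(3.0 * rng.standard_normal(n), true_pattern)
X += 0.1 * rng.standard_normal((n, p))
X -= X.mean(axis=0)                 # anomalies: remove the time mean

# Spatial weighting would scale columns by weights such as sqrt(cos(lat));
# uniform weights are used here purely for illustration.
w = np.ones(p)
Xw = X * w

# Leading EOF by iterating successive projections (a power iteration):
a = rng.standard_normal(n)          # initial time-series guess
for _ in range(100):
    e = Xw.T @ a                    # project data onto time series -> map
    e /= np.linalg.norm(e)
    a = Xw @ e                      # project data onto map -> time series

leading_eof = e                     # unit-norm leading spatial pattern
```

Each pass projects the data onto the current time series to obtain a map, then onto the normalized map to obtain a new time series; the map converges to the leading EOF without ever forming the full covariance matrix, which is what makes this kind of approach memory efficient for large grids.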
Abstract
The definition and interpretation of the Arctic oscillation (AO) are examined and compared with those of the North Atlantic oscillation (NAO). It is shown that the NAO reflects the correlations between the surface pressure variability at its centers of action, whereas this is not the case for the AO. The NAO pattern can be identified in a physically consistent way in principal component analysis applied to various fields in the Euro-Atlantic region. A similar identification is found in the Pacific region for the Pacific–North American (PNA) pattern, but no such identification is found here for the AO. The AO does reflect the tendency for the zonal winds at 35° and 55°N to anticorrelate in both the Atlantic and Pacific regions associated with the NAO and PNA. Because climatological features in the two ocean basins are at different latitudes, the zonally symmetric nature of the AO does not mean that it represents a simple modulation of the circumpolar flow. An increase in the AO or NAO implies strong, separated tropospheric jets in the Atlantic but a weakened Pacific jet. The PNA has strong related variability in the Pacific jet exit, but elsewhere the zonal wind is similar to that related to the NAO. The NAO-related zonal winds link strongly through to the stratosphere in the Atlantic sector. The PNA-related winds do so in the Pacific, but to a lesser extent. The results suggest that the NAO paradigm may be more physically relevant and robust for Northern Hemisphere variability than is the AO paradigm. However, this does not disqualify many of the physical mechanisms associated with annular modes for explaining the existence of the NAO.
Abstract
A simple linear stochastic climate model of extratropical wintertime ocean–atmosphere coupling is used to diagnose the daily interactions between the ocean and the atmosphere in a fully coupled general circulation model. Monte Carlo simulations with the simple model show that the influence of the ocean on the atmosphere can be difficult to estimate, being biased low even with multiple decades of daily data. Despite this, fitting the simple model to the surface air temperature and sea surface temperature data from the complex general circulation model reveals an ocean-to-atmosphere influence in the northeastern Atlantic. Furthermore, the simple model is used to demonstrate that the ocean in this region greatly enhances the autocorrelation in overlying lower-tropospheric temperatures at lags from a few days to many months.
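As a rough illustration of this kind of linear stochastic coupling, the sketch below simulates a hypothetical two-variable model with a fast, noisy atmosphere and a slow, persistent ocean, then recovers the ocean-to-atmosphere coupling by least squares. All coefficients and series lengths are invented for illustration and are not the authors' fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily model: atmosphere Ta is fast and noisy; ocean To is
# slow and persistent, weakly forced by the atmosphere.
a_aa, a_ao = 0.5, 0.2     # atmospheric persistence, ocean->atmosphere coupling
a_oa, a_oo = 0.02, 0.95   # atmosphere->ocean coupling, oceanic persistence
n = 50000
Ta = np.zeros(n)
To = np.zeros(n)
for t in range(n - 1):
    Ta[t + 1] = a_aa * Ta[t] + a_ao * To[t] + rng.standard_normal()
    To[t + 1] = a_oa * Ta[t] + a_oo * To[t] + 0.1 * rng.standard_normal()

# Least-squares fit of the atmospheric equation recovers the coupling terms.
X = np.column_stack([Ta[:-1], To[:-1]])
coef, *_ = np.linalg.lstsq(X, Ta[1:], rcond=None)
est_persistence, est_coupling = coef
```

With a long record the coupling is recoverable; the abstract's point is that for realistically short records the estimated ocean influence tends to be biased low.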
Abstract
This study has developed a rigorous and efficient maximum likelihood method for estimating the parameters in stochastic energy balance models (with any number k > 0 of boxes) given time series of surface temperature and top-of-the-atmosphere net downward radiative flux. The method works by finding a state-space representation of the linear dynamic system and evaluating the likelihood recursively via the Kalman filter. Confidence intervals for estimated parameters are straightforward to construct in the maximum likelihood framework, and information criteria may be used to choose an optimal number of boxes for parsimonious k-box emulation of atmosphere–ocean general circulation models (AOGCMs). In addition to estimating model parameters, the method enables hidden state estimation for the unobservable boxes corresponding to the deep ocean, and also enables noise filtering for observations of surface temperature. The feasibility, reliability, and performance of the proposed method are demonstrated in a simulation study. To obtain a set of optimal k-box emulators, models are fitted to the 4 × CO2 step responses of 16 AOGCMs in CMIP5. It is found that for all 16 AOGCMs three boxes are required for optimal k-box emulation. The number of boxes k is found to influence, sometimes strongly, the impulse responses of the fitted models.
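The recursive likelihood evaluation via the Kalman filter can be illustrated with a scalar (one-box) state-space model; the full k-box method uses matrix-valued states, but the predict/update/likelihood recursion has the same shape. All parameter values below are invented for illustration:

```python
import numpy as np

def kalman_loglik(y, phi, q, r, x0=0.0, p0=1.0):
    """Log-likelihood of a scalar state-space model via the Kalman filter.
    State:       x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)
    Observation: y_t = x_t + v_t,            v_t ~ N(0, r)
    """
    x, P = x0, p0
    ll = 0.0
    for yt in y:
        # Predict the state and its variance
        x = phi * x
        P = phi * P * phi + q
        # Innovation and its variance give the likelihood contribution
        v = yt - x
        S = P + r
        ll += -0.5 * (np.log(2.0 * np.pi * S) + v * v / S)
        # Update with the Kalman gain
        K = P / S
        x = x + K * v
        P = (1.0 - K) * P
    return ll

# Simulate from the model, then check the likelihood prefers the truth.
rng = np.random.default_rng(2)
phi_true, q_true, r_true = 0.9, 0.5, 0.2
n = 2000
x = 0.0
y = np.empty(n)
for t in range(n):
    x = phi_true * x + rng.normal(0.0, np.sqrt(q_true))
    y[t] = x + rng.normal(0.0, np.sqrt(r_true))

ll_true = kalman_loglik(y, phi_true, q_true, r_true)
ll_bad = kalman_loglik(y, 0.3, q_true, r_true)
```

Maximum likelihood estimation then amounts to maximizing `kalman_loglik` over the parameters with a numerical optimizer; the filtered state `x` is the hidden-state estimate the abstract refers to.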
Abstract
This study investigates variability in the intensity of the wintertime Siberian high (SH) by defining a robust SH index (SHI) and correlating it with selected meteorological fields and teleconnection indices. A dramatic trend of −2.5 hPa per decade has been found in the SHI between 1978 and 2001, with unprecedentedly low values of the SHI (the lowest since 1871). The weakening of the SH has been confirmed by analyzing different historical gridded analyses and individual station observations of sea level pressure (SLP) and excluding possible effects from the conversion of surface pressure to SLP.
SHI correlation maps with various meteorological fields show that SH impacts on circulation and temperature patterns reach far outside the SH source region, from the Arctic to the tropical Pacific. Advection of warm air from eastern Europe has been identified as the main mechanism causing milder than normal conditions over the Kara and Laptev Seas in association with a strong SH. Despite the strong impacts of the variability in the SH on climatic variability across the Northern Hemisphere, correlations between the SHI and the main teleconnection indices of the Northern Hemisphere are weak. Regression analysis has shown that teleconnection indices are not able to reproduce the interannual variability and trends in the SH. The inclusion of regional surface temperature in the regression model provides closer agreement between the original and reconstructed SHI.
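The trend figure quoted above is a least-squares slope; a minimal sketch of estimating such a decadal trend from a synthetic index (with an imposed decline of −2.5 hPa per decade, purely for illustration, not the actual SHI data) is:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic winter-mean SLP index (hPa) over 1978-2001 with an imposed
# decline of -2.5 hPa per decade plus interannual noise (illustrative only).
years = np.arange(1978, 2002)
slp = 1030.0 - 0.25 * (years - years[0]) + rng.normal(0.0, 1.0, years.size)

# Linear trend by least squares, expressed per decade.
slope_per_year = np.polyfit(years, slp, 1)[0]
trend_per_decade = 10.0 * slope_per_year
```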
Abstract
Anthropogenic influences are expected to cause the probability distribution of weather variables to change in nontrivial ways. This study presents simple nonparametric methods for exploring and comparing differences in pairs of probability distribution functions. The methods are based on quantiles and allow changes in all parts of the probability distribution to be investigated, including the extreme tails. Adjusted quantiles are used to investigate whether changes are simply due to shifts in location (e.g., mean) and/or scale (e.g., variance). Sampling uncertainty in the quantile differences is assessed using simultaneous confidence intervals calculated using a bootstrap resampling method that takes account of serial (intraseasonal) dependency. The methods are simple enough to be used on large gridded datasets. They are demonstrated here by exploring the changes between European regional climate model simulations of daily minimum temperature and precipitation totals for winters in 1961–90 and 2071–2100. Projected changes in daily precipitation are generally found to be well described by simple increases in scale, whereas minimum temperature exhibits changes in both location and scale.
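A minimal sketch of the quantile-difference idea follows, using i.i.d. bootstrap resampling rather than the serial-dependence-aware method of the paper, and pointwise rather than simultaneous confidence intervals. The synthetic distributions and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two synthetic samples of daily minima: the second is shifted (location)
# and more variable (scale), mimicking a projected change (illustrative).
x_base = rng.normal(loc=-5.0, scale=3.0, size=900)
x_future = rng.normal(loc=-2.0, scale=4.0, size=900)

probs = np.array([0.05, 0.25, 0.5, 0.75, 0.95])
q_diff = np.quantile(x_future, probs) - np.quantile(x_base, probs)

# Bootstrap the quantile differences (i.i.d. resampling; the paper's method
# additionally accounts for serial dependence, e.g. by block resampling).
n_boot = 500
boot = np.empty((n_boot, probs.size))
for b in range(n_boot):
    xb = rng.choice(x_base, x_base.size, replace=True)
    xf = rng.choice(x_future, x_future.size, replace=True)
    boot[b] = np.quantile(xf, probs) - np.quantile(xb, probs)
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5], axis=0)
```

A pure location shift would give the same difference at every quantile; a scale change makes the difference grow from the lower to the upper quantiles, which is the kind of signature the method is designed to detect.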
Abstract
Probabilistic forecasts of atmospheric variables are often given as relative frequencies obtained from ensembles of deterministic forecasts. The detrimental effects of imperfect models and initial conditions on the quality of such forecasts can be mitigated by calibration. This paper shows that Bayesian methods currently used to incorporate prior information can be written as special cases of a beta-binomial model and correspond to a linear calibration of the relative frequencies. These methods are compared with a nonlinear calibration technique (i.e., logistic regression) using real precipitation forecasts. Calibration is found to be advantageous in all cases considered, and logistic regression is preferable to linear methods.
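A sketch of logistic-regression calibration of ensemble relative frequencies, fitted here by plain gradient ascent on the Bernoulli log-likelihood; the synthetic overconfident forecasts are an illustrative assumption, not the paper's precipitation data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic overconfident ensemble relative frequencies f and outcomes y:
# the true event probability is less extreme than f (illustrative setup).
n = 5000
f = rng.uniform(0.01, 0.99, n)
p_true = 0.5 + 0.6 * (f - 0.5)
y = (rng.uniform(size=n) < p_true).astype(float)

# Calibrate via logistic regression on logit(f), fitted by gradient ascent
# on the mean Bernoulli log-likelihood.
logit_f = np.log(f / (1.0 - f))
X = np.column_stack([np.ones(n), logit_f])
beta = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (y - p) / n

p_cal = 1.0 / (1.0 + np.exp(-X @ beta))
brier_raw = np.mean((f - y) ** 2)      # raw relative frequencies
brier_cal = np.mean((p_cal - y) ** 2)  # calibrated probabilities
```

The uncalibrated forecast corresponds to intercept 0 and slope 1, so the logistic family contains it as a special case; fitting the two coefficients can therefore only help, and a fitted slope below 1 shrinks overconfident probabilities toward the base rate.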
Abstract
The clustering in time (seriality) of extratropical cyclones is responsible for large cumulative insured losses in western Europe, though surprisingly little scientific attention has been given to this important property. This study investigates and quantifies the seriality of extratropical cyclones in the Northern Hemisphere using a point-process approach. A possible mechanism for serial clustering is the time-varying effect of the large-scale flow on individual cyclone tracks. Another mechanism is the generation by one “parent” cyclone of one or more “offspring” through secondary cyclogenesis. A long cyclone-track database was constructed for extended October–March winters from 1950 to 2003 using 6-h analyses of 850-mb relative vorticity derived from the NCEP–NCAR reanalysis. A dispersion statistic based on the variance-to-mean ratio of monthly cyclone counts was used as a measure of clustering. It reveals extensive regions of statistically significant clustering in the European exit region of the North Atlantic storm track and over the central North Pacific. Monthly cyclone counts were regressed on time-varying teleconnection indices with a log-linear Poisson model. Five independent teleconnection patterns were found to be significant factors over Europe: the North Atlantic Oscillation (NAO), the east Atlantic pattern, the Scandinavian pattern, the east Atlantic–western Russian pattern, and the polar–Eurasian pattern. The NAO alone is not sufficient for explaining the variability of cyclone counts in the North Atlantic region and western Europe. Rate dependence on time-varying teleconnection indices accounts for the variability in monthly cyclone counts, and a cluster process did not need to be invoked.
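The dispersion statistic and the clustering effect of a time-varying rate can be sketched as follows; the Poisson rates and the stand-in "teleconnection index" are illustrative assumptions, not fitted values:

```python
import numpy as np

rng = np.random.default_rng(6)

# Monthly cyclone counts: a plain Poisson series (no clustering) versus a
# series whose monthly rate varies with a fluctuating index, which inflates
# the variance-to-mean ratio above 1 (overdispersion).
n_months = 600
counts_plain = rng.poisson(lam=5.0, size=n_months)

index = rng.standard_normal(n_months)        # stand-in teleconnection index
rate = np.exp(np.log(5.0) + 0.4 * index)     # log-linear rate dependence
counts_modulated = rng.poisson(lam=rate)

def dispersion(c):
    """Variance-to-mean ratio: ~1 for Poisson, >1 indicates clustering."""
    return c.var(ddof=1) / c.mean()

d_plain = dispersion(counts_plain)
d_over = dispersion(counts_modulated)
```

This is the abstract's argument in miniature: counts that are conditionally Poisson given a time-varying rate look overdispersed marginally, so rate modulation by teleconnection indices can account for apparent clustering without invoking a genuine cluster process.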
Abstract
The skill of weather and climate forecast systems is often assessed by calculating the correlation coefficient between past forecasts and their verifying observations. Improvements in forecast skill can thus be quantified by correlation differences. The uncertainty in the correlation difference needs to be assessed to judge whether the observed difference constitutes a genuine improvement, or is compatible with random sampling variations. A widely used statistical test for correlation difference is known to be unsuitable, because it assumes that the competing forecasting systems are independent. In this paper, appropriate statistical methods are reviewed to assess correlation differences when the competing forecasting systems are strongly correlated with one another. The methods are used to compare correlation skill between seasonal temperature forecasts that differ in initialization scheme and model resolution. A simple power analysis framework is proposed to estimate the probability of correctly detecting skill improvements, and to determine the minimum number of samples required to reliably detect improvements. The proposed statistical test has a higher power of detecting improvements than the traditional test. The main examples suggest that sample sizes of climate hindcasts should be increased to about 40 years to ensure sufficiently high power. It is found that seasonal temperature forecasts are significantly improved by using realistic land surface initial conditions.
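One simple way to respect the dependence between competing forecast systems is a paired bootstrap that resamples years jointly. This is a sketch of that idea (not necessarily the specific test proposed in the paper), with synthetic hindcasts whose skill difference and sample size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic hindcasts: observations, a baseline forecast, and an improved
# forecast sharing most of its signal with the baseline (illustrative).
n = 40                                   # ~40-year hindcast, as suggested
signal = rng.standard_normal(n)
obs = signal + rng.standard_normal(n)
fc_old = signal + 1.0 * rng.standard_normal(n)
fc_new = signal + 0.5 * rng.standard_normal(n)   # less noise, higher skill

r_old = np.corrcoef(obs, fc_old)[0, 1]
r_new = np.corrcoef(obs, fc_new)[0, 1]

# Paired bootstrap: resampling years jointly preserves the strong
# correlation between the two competing forecast systems.
n_boot = 2000
diffs = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n, n)
    diffs[b] = (np.corrcoef(obs[idx], fc_new[idx])[0, 1]
                - np.corrcoef(obs[idx], fc_old[idx])[0, 1])
ci = np.percentile(diffs, [2.5, 97.5])
```

Because both correlations are recomputed on the same resampled years, the bootstrap distribution of the difference reflects their dependence; a test that treats the two skill estimates as independent would overstate the uncertainty of the difference.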