Search Results

You are looking at 1 - 4 of 4 items for

  • Author or Editor: Victor Venema
  • All content
Tobias Sauter and Victor Venema

Abstract

The paper presents an approach for conditional airmass classification based on local precipitation rate distributions. The method seeks, within the potential region, three-dimensional atmospheric predictor domains with a high impact on the local-scale phenomena. These predictor domains are derived by an algorithm that combines a clustering method, self-organizing maps, with a nonlinear optimization method, simulated annealing. The findings show that the resulting spatial structures can be attributed to well-known atmospheric processes. Since the optimized predictor domains probably contain information relevant to precipitation generation, the grid points within them may also serve as inputs for nonlinear downscaling methods. Based on this assumption, the potential of these optimized large-scale predictors for downscaling has been investigated by applying an artificial neural network as a nonparametric statistical downscaling model. Compared to preset local predictors, using the optimized predictors improves the accuracy of the downscaled time series, particularly in summer and autumn. However, optimizing predictors by a conditional classification does not guarantee that a predictor increases the explained variance of the downscaling model. To study the contribution of each predictor to the output variance, either individually or through interactions with other predictors, the sources of uncertainty have been estimated by global sensitivity analysis, which provides model-free sensitivity measures. It is shown that predictor interactions play an important part in the modeling process and should be taken into account in predictor screening.
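
The global sensitivity analysis mentioned above is variance based; as a rough illustration (not the paper's code or configuration), the sketch below estimates first-order and total Sobol indices for a hypothetical three-predictor model with the Saltelli/Jansen pick-and-freeze estimators. The toy model, predictor ranges, and sample size are illustrative assumptions.

```python
# Illustrative sketch only: variance-based (Sobol) sensitivity indices for a
# hypothetical three-predictor "downscaling model". The gap ST - S1 is the
# interaction contribution that motivates the predictor screening discussed above.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100_000, 3

def model(x):
    # toy nonlinear model with an interaction between x0 and x1 (assumption)
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + 2.0 * x[:, 0] * x[:, 1] + 0.1 * x[:, 2]

A = rng.uniform(-np.pi, np.pi, size=(n, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, size=(n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]), ddof=1)

for i in range(d):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]                          # resample only predictor i
    fAB_i = model(AB_i)
    S1 = np.mean(fB * (fAB_i - fA)) / var         # first-order index (Saltelli 2010 estimator)
    ST = 0.5 * np.mean((fA - fAB_i) ** 2) / var   # total index (Jansen 1999 estimator)
    print(f"x{i}: S1={S1:.2f}  ST={ST:.2f}  interaction share={ST - S1:.2f}")
```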

Full access
Peter Domonkos, José A. Guijarro, Victor Venema, Manola Brunet, and Javier Sigró

Abstract

The aim of time series homogenization is to remove nonclimatic effects, such as changes in station location, instrumentation, observation practices, and so on, from observed data. Statistical homogenization usually reduces the nonclimatic effects but does not remove them completely. In the Spanish “MULTITEST” project, the efficiencies of automatic homogenization methods were tested on large benchmark datasets with a wide range of statistical properties. In this study, test results for nine versions, based on five homogenization methods—the adapted Caussinus-Mestre algorithm for the homogenization of networks of climatic time series (ACMANT), “Climatol,” multiple analysis of series for homogenization (MASH), the pairwise homogenization algorithm (PHA), and “RHtests”—are presented and evaluated. The tests were executed with 12 synthetic/surrogate monthly temperature test datasets containing 100–500 networks with 5–40 time series in each. Residual centered root-mean-square errors and residual trend biases were calculated both for individual station series and for network-mean series. The results show that a larger fraction of the nonclimatic biases can be removed from station series than from network-mean series. The largest error reduction is found for the long-term linear trends of individual time series in datasets with a high signal-to-noise ratio (SNR), where the mean residual error is only 14%–36% of the raw data error. When the SNR is low, most of the results still indicate error reductions, although with smaller ratios than for high SNR. In general, ACMANT gave the most accurate homogenization results. In the accuracy of individual time series, ACMANT is closely followed by Climatol, and for the accurate calculation of mean climatic trends over large geographical regions both PHA and ACMANT are recommended.
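
As a pointer to how the two skill metrics named here can be computed, the sketch below evaluates a residual centered root-mean-square error and a residual trend bias for a single synthetic station series with one artificial break. The series, the break size, and the helper functions are hypothetical and are not taken from the MULTITEST benchmark or from any of the tested methods.

```python
# Hypothetical example of the two evaluation metrics: centered RMSE and trend
# bias of a (raw or homogenized) series relative to the known synthetic truth.
import numpy as np

def centered_rmse(series, truth):
    """RMSE after removing each series' own mean, so constant offsets are ignored."""
    e = (series - series.mean()) - (truth - truth.mean())
    return np.sqrt(np.mean(e ** 2))

def linear_trend(series):
    """Least-squares linear trend per time step."""
    t = np.arange(series.size)
    return np.polyfit(t, series, 1)[0]

rng = np.random.default_rng(1)
months = np.arange(1200)                                  # 100 years of monthly data
truth = np.cumsum(rng.normal(0.0, 0.05, months.size))     # synthetic "climate" signal
raw = truth + np.where(months > 600, 1.0, 0.0)            # one artificial break of 1.0
homogenized = raw - np.where(months > 600, 0.9, 0.0)      # imperfect break removal

for name, series in [("raw", raw), ("homogenized", homogenized)]:
    crmse = centered_rmse(series, truth)
    trend_bias = linear_trend(series) - linear_trend(truth)
    print(f"{name:12s} centered RMSE = {crmse:.3f}   residual trend bias = {trend_bias:.2e} per month")
```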

Restricted access
Karsten Haustein, Friederike E. L. Otto, Victor Venema, Peter Jacobs, Kevin Cowtan, Zeke Hausfather, Robert G. Way, Bethan White, Aneesh Subramanian, and Andrew P. Schurer

Abstract

The early twentieth-century warming (EW; 1910–45) and the mid-twentieth-century cooling (MC; 1950–80) have been linked to both internal variability of the climate system and changes in external radiative forcing. The degree to which each of the two factors, or their combination, contributed to EW and MC is still debated. Using a two-box impulse response model, we demonstrate that multidecadal ocean variability was unlikely to be the driver of observed changes in global mean surface temperature (GMST) after AD 1850. Instead, virtually all (97%–98%) of the global low-frequency variability (>30 years) can be explained by external forcing. We find similarly high percentages of explained variance for interhemispheric and land–ocean temperature evolution. Three key aspects are identified that underpin the conclusion of this new study: inhomogeneous anthropogenic aerosol forcing (AER), biases in the instrumental sea surface temperature (SST) datasets, and inadequate representation of the response to varying forcing factors. Once the spatially heterogeneous nature of AER is accounted for, the MC period is reconcilable with external drivers. SST biases and imprecise forcing responses explain the putative disagreement between models and observations during the EW period. As a consequence, Atlantic multidecadal variability (AMV) is also found to be primarily controlled by external forcing. Future attribution studies should account for these important factors when discriminating between externally forced and internally generated influences on climate. We argue that AMV must not be used as a regressor and suggest a revised AMV index instead [the North Atlantic Variability Index (NAVI)]. Our associated best estimate for the transient climate response (TCR) is 1.57 K (±0.70 at the 5%–95% confidence level).
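
The two-box impulse response model referred to above can be sketched, under placeholder assumptions, as the convolution of an external-forcing series with two exponential response terms (a fast and a slow ocean timescale). The response coefficients, timescales, and the idealized forcing ramp below are illustrative, not the paper's fitted values.

```python
# Minimal sketch of a two-box impulse-response model: GMST anomaly as the
# convolution of radiative forcing with two exponential response functions.
# q (K per W m-2) and tau (years) are placeholder values, not the paper's.
import numpy as np

def two_box_response(forcing, dt=1.0, q=(0.33, 0.41), tau=(4.1, 249.0)):
    """Temperature response (K) to a forcing series (W m-2) at time step dt (years)."""
    t = np.arange(forcing.size) * dt
    temp = np.zeros(forcing.size)
    for qi, taui in zip(q, tau):
        kernel = (qi / taui) * np.exp(-t / taui) * dt   # impulse response of box i
        temp += np.convolve(forcing, kernel)[: forcing.size]
    return temp

# Hypothetical usage: an idealized forcing ramp, 1850-2020
years = np.arange(1850, 2021)
forcing = 2.5 * (years - 1850) / (2020 - 1850)          # linear ramp to 2.5 W m-2
gmst = two_box_response(forcing)
print(f"Simulated warming by 2020: {gmst[-1]:.2f} K")
```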

Open access
Elizabeth C. Kent, John J. Kennedy, Thomas M. Smith, Shoji Hirahara, Boyin Huang, Alexey Kaplan, David E. Parker, Christopher P. Atkinson, David I. Berry, Giulia Carella, Yoshikazu Fukuda, Masayoshi Ishii, Philip D. Jones, Finn Lindgren, Christopher J. Merchant, Simone Morak-Bozzo, Nick A. Rayner, Victor Venema, Souichiro Yasui, and Huai-Min Zhang

Abstract

Global surface temperature changes are a fundamental expression of climate change. Recent, much-debated variations in the observed rate of surface temperature change have highlighted the importance of uncertainty in adjustments applied to sea surface temperature (SST) measurements. These adjustments are applied to compensate for systematic biases and changes in observing protocol. Better quantification of the adjustments and their uncertainties would increase confidence in estimated surface temperature change and provide higher-quality gridded SST fields for use in many applications.

Bias adjustments have been based on either physical models of the observing processes or the assumption of an unchanging relationship between SST and a reference dataset, such as night marine air temperature. These approaches produce similar estimates of SST bias on the largest space and time scales, but regional differences can exceed the estimated uncertainty. We describe challenges to improving our understanding of SST biases. Overcoming these will require clarification of past observational methods, improved modeling of biases associated with each observing method, and the development of statistical bias estimates that are less sensitive to the absence of metadata regarding the observing method.
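
To make the reference-dataset idea concrete, the sketch below estimates a time-varying SST bias as the drift of the SST-minus-NMAT difference away from its mean in a reference period and then removes it. The variable names, reference period, smoothing window, and toy bucket-era cold bias are hypothetical; operational adjustment schemes are considerably more elaborate and metadata aware.

```python
# Hedged illustration of a night-marine-air-temperature (NMAT) reference
# adjustment: assume SST - NMAT should stay constant, treat slow departures
# from the reference-period difference as bias, and subtract them.
import numpy as np

def nmat_based_adjustment(sst, nmat, years, ref=(1961, 1990), window=11):
    """Return SST with the smoothed (SST - NMAT) drift relative to `ref` removed."""
    diff = sst - nmat
    ref_mask = (years >= ref[0]) & (years <= ref[1])
    drift = diff - diff[ref_mask].mean()
    kernel = np.ones(window) / window               # running mean: keep slow biases only
    smooth_drift = np.convolve(drift, kernel, mode="same")
    return sst - smooth_drift

# Hypothetical annual series, 1900-2010, with a cold bias before 1942
rng = np.random.default_rng(2)
years = np.arange(1900, 2011)
signal = 0.007 * (years - 1900)                     # idealized warming trend (K)
nmat = signal + rng.normal(0, 0.1, years.size)
sst = signal + rng.normal(0, 0.1, years.size) + np.where(years < 1942, -0.3, 0.0)
adjusted = nmat_based_adjustment(sst, nmat, years)

pre = years < 1942
print(f"Pre-1942 mean bias: raw {np.mean(sst[pre] - signal[pre]):+.2f} K, "
      f"adjusted {np.mean(adjusted[pre] - signal[pre]):+.2f} K")
```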

New approaches are required that embed bias models, specific to each type of observation, within a robust statistical framework. Mobile platforms and rapid changes in observation type require biases to be assessed for individual historic and present-day platforms (i.e., ships or buoys) or groups of platforms. Lack of observational metadata and high-quality observations for validation and bias model development are likely to remain major challenges.

Open access