Abstract
A number of statistical downscaling models are applied to the Canadian Climate Centre general circulation model (CCCM) outputs to provide climate change estimates for local daily surface temperature at a network of 39 stations in central and western Europe. Several linear downscaling methods (multiple linear regression of gridded values, multiple linear regression of principal components, and canonical correlation analysis) are applied to several sets of predictors (500- and 1000-hPa heights, 850-hPa temperature, 1000–500-hPa thickness, and their combinations) defined on a grid over the Euro–Atlantic domain. The temperature change estimates are shown to vary widely among the methods as well as among the predictors, in their areal mean, spatial pattern, and elevation dependence in the Alpine region. The most counterintuitive result is the dependence of the estimated warming on the number of principal components (PCs) of the predictors: the larger the number of PCs, the higher the warming. The mechanisms behind these dependencies are identified and discussed. In particular, the necessity of including among the predictors variables capable of describing radiation-induced changes, not only circulation-induced ones, is underlined. For this reason, the use of sea level pressure or 1000-hPa heights as the only predictor in statistical downscaling leads to unrealistically low temperature change estimates. The main problem in applying statistical downscaling to the estimation of climate change appears to be that downscaling models are fitted predominantly to short-term variability, whereas the focus in studies of future climate change is on variations on long-term (decadal) time scales.
Corresponding author address: Radan Huth, Institute of Atmospheric Physics, Boční II 1401, 141 31 Prague 4, Czech Republic. Email: huth@ufa.cas.cz
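The sketch below illustrates, in general terms, one of the downscaling variants named in the abstract: multiple linear regression of local daily temperature on principal components of a large-scale predictor field (e.g., 500-hPa heights), fitted on a control period and then applied to GCM output. All array names, shapes, and the number of retained PCs are illustrative assumptions and do not reproduce the authors' data or implementation.

```python
# Minimal sketch (assumed setup, not the paper's code): downscaling by
# multiple linear regression on PCs of a gridded predictor field.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: daily 500-hPa height anomalies on a coarse grid
# (n_days x n_gridpoints) and daily temperatures at 39 local stations
# (n_days x n_stations), both for the model-fitting (control) period.
z500_control = rng.normal(size=(3650, 200))
t_stations   = rng.normal(size=(3650, 39))

# The same predictor field simulated by the GCM for a changed-climate period.
z500_future = rng.normal(size=(3650, 200))

n_pcs = 10  # number of retained PCs; the abstract notes results depend on this choice

# 1. Reduce the predictor field to its leading principal components.
pca = PCA(n_components=n_pcs)
pcs_control = pca.fit_transform(z500_control)

# 2. Fit a multiple linear regression of station temperatures on the PCs.
mlr = LinearRegression()
mlr.fit(pcs_control, t_stations)

# 3. Project the GCM field for the future period onto the same PCs and downscale.
pcs_future = pca.transform(z500_future)
t_future = mlr.predict(pcs_future)

# 4. A crude temperature-change estimate: difference of period means per station.
delta_t = t_future.mean(axis=0) - mlr.predict(pcs_control).mean(axis=0)
print(delta_t.shape)  # (39,)
```

In this framing, the sensitivity discussed in the abstract enters through choices such as `n_pcs`, the predictor field(s) supplied, and the regression method, any of which can be varied while keeping the overall fit-then-apply structure.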