Abstract

In a comparative experiment, the sequence of daily maximum and minimum temperatures for 1976 was interpolated over England and Wales to a resolution of 1 km using partial thin plate splines, ordinary kriging, trend surface, and an automatic inverse-distance-weighted method of interpolation. A “level playing field” for comparing the estimation accuracies was established through the incorporation of a consistent set of guiding variables in all interpolators. Once variables were included to guide the interpolators, differences in estimation accuracy among partial thin plate splines, ordinary kriging, and inverse distance weighting results were not significant, although the performance of trend surface analysis was poorer. Best accuracies were achieved using partial thin plate splines, with jackknife cross-validation root-mean-square errors of 0.80°C for an annual series of daily maximum temperatures and 1.14°C for daily minimum temperatures. The results from this study suggest that sole reliance on the selection of guiding variables can be a less efficient means of achieving the required accuracies than placing greater reliance on empirical techniques of interpolation that can account for known autocorrelation in the temperature data. The use of guiding variables narrows the gap between the performance of the different interpolation methods. In general, however, more sophisticated interpolators such as kriging and splining require fewer guiding covariates to achieve similar estimation accuracies. Day-to-day variability in the interpolation accuracies confirms the need for increased adaptability in the manner in which the guiding variables are incorporated in the interpolation process.

Introduction

Daily weather conditions are known to influence the growth and development of many biological organisms. It is therefore surprising that relatively few ecological or environmental studies focus upon the interpolation of continuous national coverages of daily temperature using now well-developed geostatistical or splining techniques (e.g., Landau and Barnett 1996). More often, long-established and mathematically simpler methods such as trend surface analysis (e.g., Russo et al. 1993; Goodale et al. 1998), inverse distance weighting (e.g., Supit 1997; Van der Goot 1998) or methods based on vertical lapse rates (e.g., Aber and Federer 1992; Running et al. 1987) are used.

It has been postulated that the actual interpolation algorithm used is less critical than the incorporation of environmentally determined gridded variables to augment the interpolation procedure (Cornford 1997; Lennon and Turner 1995), although multiple guiding variables are rarely used in either applied studies or research comparing interpolator performance in studies at a daily time step. Certainly, as reported in part 1 (Jarvis and Stuart 2001, this issue), both Bolstad et al. (1998) and Landau and Barnett (1996) are able to relate patterns of uncertainty in their interpolated temperature surfaces with environmental influences such as proximity to the coast or urban heat islands. The payoff between using increasing numbers of guiding topographic and land cover–related variables within the interpolation process and using more complex interpolation algorithms, however, remains poorly explored. The aim of this paper is therefore to investigate the balance between improving interpolation accuracies by means of additional guiding variables and the use of more mathematically sophisticated interpolation algorithms.

In the literature reporting the interpolation of daily meteorological data from synoptic stations over wide-ranging areas, Collins and Bolstad (1996) compare a variety of interpolation techniques for daily maximum and minimum temperatures over the Appalachian Mountains. Their conclusions suggest that polynomial regression (trend surface plus covariates) is preferable to other local techniques such as inverse distance weighting. In a study of greater depth for the same area (Bolstad et al. 1998), second-degree trend models (including elevation, termed “regional” regression) were found superior to kriging, regional (Voronoi determined), and local lapse rate models (empirically determined). Although their treatment of regression was strong, the geostatistical treatment was less robust; for example, stationarity assumptions were ignored despite the probability of drift in an area of highly variable terrain. Practical problems with variogram modeling also led the authors to dismiss the kriging technique. It can be argued that comparisons between interpolation algorithms that fail to incorporate elevation as a guiding variable in all cases are inherently biased. In contrast, for example, Ishida and Kawashima's (1993) study interpolating daily temperature means derived from the Japanese synoptic records over a limited area to a resolution of 250 m used a consistent treatment of elevation among regression, kriging, and cokriging methods and showed that cokriging performed most accurately.

Using a dataset similar to that available for this study, Landau and Barnett (1996) demonstrate the feasibility of producing national daily interpolated temperature surfaces for England and Wales using a selection of methods including ordinary kriging. However, as part 1 (Jarvis and Stuart 2001) identifies, the use of constant coefficient values in the interpolation function for every day of the year in order to reduce computational complexity would seem questionable. Landau's study also has limitations that arise from disregarding processes, for example, by using third-order elevation variables within the trend surface equations when the laws of physics dictate the use of linear lapse rates. Among these studies, the exploration of partial thin plate spline performance in comparison with kriging for daily (as opposed to monthly or annual) data is unusual (e.g., Laughlin et al. 1993). Whether partial thin plate splines have a tendency to “oversmooth” daily temperatures or lead to lower absolute error owing to their automatic parameter-setting facility has been unclear until now for interpolation at the daily time step for national extents.

A further issue arising from previous comparative studies is the need to use a dataset of sufficient volume such that no particular method (e.g., kriging) is theoretically disadvantaged a priori (e.g., Nalder and Wein 1998). From an applied modeling perspective, for which daily sequences of interpolated data are required throughout the annual cycle, greater rigor in error analyses also is probably warranted. For example, the use of rms error in preference (or in addition) to correlation coefficients (e.g., Landau and Barnett 1996), an increase in the number of days used to generate accuracy results in view of hypothesized seasonal and day-to-day variations in error [e.g., Collins and Bolstad (1996) consider a 10-day average], and use of semi-independent (jackknifed), if not fully independent, data to verify results [e.g., Landau and Barnett (1996) provide nonindependent results] would improve confidence in reported accuracy results.

This review has highlighted a number of research issues relating to the interpolation of daily temperature as input to applied environmental models. Principally, this paper will investigate the payoff between incorporating increasing numbers of topoclimatic and land cover–related variables and using interpolation methods of varying sophistication. The work will use the selection of “guiding variables” discussed in part 1 (Jarvis and Stuart 2001). It will also serve as a comparison among commonly used algorithms for the interpolation of daily maximum and minimum temperatures, because the comprehensive attention to choice of guiding variables and data volumes used allows a “level playing field” among methods that is not always realized in other experimental comparisons (e.g., Bolstad et al. 1998). The methods explored include ordinary kriging, partial thin plate splines, inverse distance weighting, and trend surface analysis. Ordinary kriging and partial thin plate splines have rarely been discussed either singly or in comparison at this temporal scale. Because the purpose of generating daily temperatures will be to produce inputs to applied environmental and biological models, methods have also been devised to derive calibration parameters automatically, such as the distance function for inverse distance weighting, the range and sill for ordinary kriging, and the smoothing parameter for partial thin plate splines.

Methods

Study area and data

Temperature interpolations were carried out over the complete landscape of England and Wales. Archived U.K. Meteorological Office (UKMO) data from 174 well-distributed stations, obtained under a quality assured agreement, were used [refer to Jarvis and Stuart (2001), their Fig. 1]. The principal raster dataset used was the Ordnance Survey Panorama digital elevation model for England and Wales, at a resolution of 50 m. To begin to accommodate alterations of temperature that are known to arise as a result of urbanization, the elevation data were supplemented by “urban” and “suburban” land cover classes from the Landsat-derived Institute of Terrestrial Ecology land cover dataset.

Fig. 1. Incorporation of multiple linear covariates within different interpolation methods.

Topoclimatic and land cover–related variables to guide the interpolation of temperature

Part 1 explored the derivation and selection by regression analysis of variables appropriate for assisting the process of interpolating maximum and minimum daily temperatures over England and Wales. Here, the principal guiding variables so determined were used to guide the interpolation of maximum and minimum daily temperatures over England and Wales to a grid spacing of 1 km.

Mathematical interpolation methods

The interpolation techniques used were chosen either because of their common usage in agricultural applications (trend surface analysis, inverse distance weighting) or for their perceived advantages for climatic and phenological interpolations at a variety of spatial and temporal scales (ordinary kriging, partial thin plate splines).

Although no proprietary interpolation or geographical information system (GIS) package was able to meet the requirement to produce streams of interpolated results using all four methods in an efficient manner, a variety of public-domain geostatistical software is available (Varekamp et al. 1996). This study makes use of the GSLIB geostatistical software library (Deutsch and Journel 1992) and the VARIOWIN variogram modeling tool (Pannatier 1996). In addition, the ANUSPLIN FORTRAN program (Hutchinson 1991) for the computation of partial thin plate splines was made available to this study. These modules were linked with subroutines to provide a custom-built interpolation suite that allowed the interpolation of daily data throughout the annual cycle to a resolution of 1 km at any grid location within England and Wales. This software also allowed a cross-validation estimate of error in each interpolation surface to be computed by jackknifing at the locations of known data points on a daily basis.

Incorporation of guiding variables as part of the interpolation process

The inclusion of guiding variables selected using multiple linear regression (Jarvis and Stuart 2001) within the interpolation process was implemented without prejudice to any particular interpolation method. Interpolating residuals from the regression analyses (Fig. 1a) provides a means of incorporating gridded information to enhance inverse distance weighting, and is critical to the success of ordinary kriging where trend exists in the data. These results must subsequently be “retrended” to construct the final interpolated estimates of temperature. The ability to “guide” interpolations using linearly related variables is facilitated directly within the partial thin plate spline technique and may equally be carried out as part of trend surface equations. For both partial thin plate splines and trend surfaces, therefore, the software allows the incorporation of linear covariates within the overall interpolation process (Fig. 1b). Although the differing structures of the interpolation algorithms meant that linear guiding covariates were included in different ways operationally, in functional terms equivalent data were included to guide each interpolator.
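
As an informal sketch of the detrend-interpolate-retrend workflow of Fig. 1a (not the project's own code, which coupled GSLIB, VARIOWIN, and ANUSPLIN), the following Python fragment assumes hypothetical arrays of station coordinates, station temperatures, and guiding-covariate values at both the stations and the 1-km prediction points; any interpolator of the residuals may be passed in.

    import numpy as np

    def detrend_retrend(xy_stn, covars_stn, temp_stn, xy_grid, covars_grid,
                        interpolate_residuals):
        """Fit a linear model on the guiding covariates, interpolate the
        residuals spatially, then add the trend back at the grid points."""
        # Design matrices with an intercept column (covariates are 2-D arrays).
        A_stn = np.column_stack([np.ones(len(temp_stn)), covars_stn])
        A_grid = np.column_stack([np.ones(len(xy_grid)), covars_grid])

        # Ordinary least squares fit of the guiding-variable trend.
        coeffs, *_ = np.linalg.lstsq(A_stn, temp_stn, rcond=None)
        residuals = temp_stn - A_stn @ coeffs

        # Interpolate the detrended residuals to the grid, then "retrend".
        resid_grid = interpolate_residuals(xy_stn, residuals, xy_grid)
        return A_grid @ coeffs + resid_grid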

Automatic parameter selection

The need for a daily series of interpolated results for input to multitemporal ecological models necessitated that, wherever possible, parameters of the interpolation models should be adjusted automatically to fit daily varying weather conditions. The degree to which this is possible for the individual techniques and the relative ease with which it may be accomplished are noted in the following summaries of methods.

General interpolation functions

The general interpolation functions are summarized as follows (after Mitás and Mitásová 1999). Where there are N known values zj of a phenomenon, measured at discrete points rj = (xj[1], xj[2], … , xj[d]) in d dimensions, the d-dimensional interpolation function F(r) fulfills the condition:

 
F(rj) = zj,   j = 1, … , N.     (1)

This represents the exact interpolator, corresponding to inverse-distance-weighted and kriging methods. In the case of partial thin plate splines and trend surface analysis, smoothing is also introduced within the interpolation functions. Although smoothing has in the past been regarded as a disadvantage, when interpolating temperatures the introduction of smoothing can be regarded as beneficial in that it acknowledges the likelihood of inaccuracies in climatic datasets and the importance of microclimatic effects.

Trend surface analysis

Trend surface functions commonly form part of the interpolation suites of proprietary GIS software, but usually comprise standard functions relating to the geographical coordinates only. To incorporate the guiding topographic variables, the concept is extended to include additional (nongeographical) covariates. The term “polynomial regression” is avoided as a descriptor, because in much of the literature it is used without reference to geographical coordinates.

 
F(r) = Σk ak fk(r),

where the fk(r) are monomial terms. The trend function is thus a simple linear combination of monomials. Illustrated in this case is the form of a second-order trend with weights a in two-dimensional geographical space, together with d − 2 additional linear covariates:

F(r) = a0 + a1x[1] + a2x[2] + a3x[1]² + a4x[1]x[2] + a5x[2]² + a6x[3] + … + a(d+3)x[d].

The theoretical literature on the interpolation of geographic data suggests that high-order trend surfaces may not produce accurate results, yet in the insect phenology literature, such trend surfaces are used without discussion. In this case, linear guiding variables were supplemented by second-order trends in the spatial coordinates as the standard method, with third- and fourth-order spatial models also compared (again using the best-performing linear covariates) as a check.
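
For illustration only, a least squares fit of such a trend surface can be sketched as below; the spatial monomials and the linear guiding covariates are simply columns of a single design matrix, and all function and variable names are hypothetical rather than those of the study's software.

    import numpy as np

    def trend_design_matrix(x1, x2, covariates):
        """Columns: intercept, first- and second-order spatial monomials,
        then the linear guiding covariates (1-D and 2-D NumPy arrays)."""
        return np.column_stack([
            np.ones_like(x1), x1, x2,      # first-order spatial terms
            x1**2, x1 * x2, x2**2,         # second-order spatial terms
            covariates,                    # d - 2 linear covariates
        ])

    def fit_trend_surface(x1, x2, covariates, temp):
        X = trend_design_matrix(x1, x2, covariates)
        a, *_ = np.linalg.lstsq(X, temp, rcond=None)   # OLS weights a
        return a

    def predict_trend_surface(a, x1, x2, covariates):
        return trend_design_matrix(x1, x2, covariates) @ a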

Optimal inverse distance weighting (IDW)

The general form of the IDW function used was:

 
F(r) = Σi=1,…,m wi z(ri),   with   wi = d(r, ri)^(−p) / Σj=1,…,m d(r, rj)^(−p),

where m is the number of closest points used, p is the power parameter representing the decay in similarity between values over distance, and d(r, ri) is the distance between r and ri. To incorporate the guiding variables as part of the analysis, the residuals from the linear regression at the points rj, rather than the raw values themselves, were used. The estimator therefore comprised both the linear topoclimatic trend [T(r)] and the IDW function applied to the residuals.

Drawing on literature (e.g., Hodgson 1993) that suggests the search parameter m is less critical to the resulting error in interpolated estimates than the power parameter p, the Levenberg–Marquardt maximum likelihood method (Press et al. 1992) was used to select the power parameter automatically, using a set search neighborhood of 12 points. This optimal power parameter was recomputed on a daily basis.
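
One way such an automatic selection could be reproduced is sketched below: scipy.optimize.least_squares with method="lm" (a Levenberg–Marquardt implementation) is applied to the leave-one-out residuals as a function of the power parameter, which is equivalent to a maximum likelihood fit under Gaussian errors. The 12-point neighborhood and the cap of 10 follow the text; everything else (function names, starting value) is illustrative and is not a description of the study's actual code.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial import cKDTree

    def idw_predict(xy_stn, z_stn, xy_pred, p, m=12):
        """Inverse-distance-weighted estimate from the m nearest stations."""
        tree = cKDTree(xy_stn)
        d, idx = tree.query(xy_pred, k=min(m, len(z_stn)))
        w = 1.0 / np.maximum(d, 1e-9) ** p           # avoid division by zero
        return np.sum(w * z_stn[idx], axis=1) / np.sum(w, axis=1)

    def loo_residuals(p, xy_stn, z_stn, m=12):
        """Leave-one-out residuals at the stations for a given power p."""
        res = np.empty(len(z_stn))
        for i in range(len(z_stn)):
            keep = np.arange(len(z_stn)) != i
            res[i] = z_stn[i] - idw_predict(xy_stn[keep], z_stn[keep],
                                            xy_stn[i:i + 1], p, m)[0]
        return res

    def fit_power(xy_stn, z_stn, p0=2.0):
        """Daily re-estimation of the power parameter, capped at 10."""
        fit = least_squares(loo_residuals, p0, args=(xy_stn, z_stn), method="lm")
        return min(float(fit.x[0]), 10.0)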

Ordinary kriging (GSLIB code; Deutsch and Journel 1992)

In essence, as with inverse distance weighting, the estimation functions for ordinary kriging are a form of locally weighted average in which the weights are derived following an initial investigation into the spatial structure of the data (variogram modeling). Kriging, however, unlike IDW, also takes into account the relative positions of the contributory sample data points. In addition, the property being estimated is treated as a regionalized variable; that is, as a random variable whose variation over space can be described statistically. The random process (in this study, daily maximum and minimum temperatures) at the set of points x contains the following components (Cressie 1991, p. 113):

 
Z(x) = μ(x) + W(x) + η(x) + ε(x),

where μ(x) is the deterministic large-scale trend, W(x) is smooth, spatially correlated small-scale variation, η(x) is microscale variation, and ε(x) is measurement error.

Kriging's intrinsic hypotheses require that all global trends be eliminated from a dataset before attempting to krige. As for inverse distance weighting, therefore, the experimental variograms in this study were constructed using the residuals from the regression following the topoclimatic analysis. The degree of spatially correlated error is estimated from a sample semivariogram constructed using the given data z(ri) and assuming stationarity:

 
γ(h) = [1/(2N(h))] Σi=1,…,N(h) [z(ri) − z(ri + h)]²,

where N(h) is the number of pairs of the observations separated by distance h.
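
A direct, if unoptimized, implementation of this estimator on the detrended residuals might look like the following sketch (the study itself used the GSLIB routine gamv2); the distance bins are illustrative, and scipy's pdist is used for the pairwise separations.

    import numpy as np
    from scipy.spatial.distance import pdist

    def empirical_semivariogram(xy, resid, bin_edges):
        """Classical semivariance estimator, averaged within distance bins."""
        xy = np.asarray(xy, dtype=float)
        resid = np.asarray(resid, dtype=float)
        d = pdist(xy)                                           # pair separations h
        g = 0.5 * pdist(resid[:, None], metric="sqeuclidean")   # 0.5 (z_i - z_j)^2
        which = np.digitize(d, bin_edges)
        h_mid, gamma, npairs = [], [], []
        for b in range(1, len(bin_edges)):
            mask = which == b
            if mask.any():
                h_mid.append(d[mask].mean())
                gamma.append(g[mask].mean())
                npairs.append(mask.sum())
        return np.array(h_mid), np.array(gamma), np.array(npairs)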

Subroutines gamv2 and okb2d from GSLIB (Deutsch and Journel 1992) were incorporated in order to construct experimental variograms and implement kriging functions, respectively. Given the debate regarding the advisability of fitting variogram models automatically (e.g., Philip and Watson 1986; Deutsch and Journel 1992), an initial set of variograms for both maximum and minimum temperatures under a variety of seasons and weather conditions was computed interactively using the VARIOWIN software (Pannatier 1996). This preliminary process was used both to set the form of variogram model to be used throughout the model runs to the empirical form γ(h) = c[1 − exp(−h/a)] and to check the operation of an automatic variogram fitting routine implemented using the Levenberg–Marquardt method (Press et al. 1992, 678–683).
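
As an indicative sketch of this automated step, the binned semivariances can be fitted to the exponential model with scipy.optimize.curve_fit, which defaults to the Levenberg–Marquardt algorithm for unbounded problems, and the fitted parameters can then be used to solve an ordinary kriging system for the residuals (the study used okb2d for the kriging itself). Starting values and names are illustrative.

    import numpy as np
    from scipy.optimize import curve_fit

    def exponential_model(h, c, a):
        """gamma(h) = c[1 - exp(-h/a)]; c is the sill, a the distance
        parameter (effective range roughly 3a)."""
        return c * (1.0 - np.exp(-h / a))

    def fit_variogram(h_mid, gamma):
        p0 = (gamma.max(), np.median(h_mid))       # rough initial sill and range
        (c, a), _ = curve_fit(exponential_model, h_mid, gamma, p0=p0)
        return c, a

    def ordinary_kriging(xy_stn, resid, xy0, c, a):
        """Ordinary kriging of the residuals at a single point xy0, using the
        covariance form C(h) = c exp(-h/a) of the fitted exponential model."""
        n = len(resid)
        d_ss = np.linalg.norm(xy_stn[:, None, :] - xy_stn[None, :, :], axis=-1)
        d_s0 = np.linalg.norm(xy_stn - xy0, axis=-1)
        K = np.empty((n + 1, n + 1))
        K[:n, :n] = c * np.exp(-d_ss / a)
        K[n, :], K[:, n], K[n, n] = 1.0, 1.0, 0.0   # unbiasedness constraint
        k = np.append(c * np.exp(-d_s0 / a), 1.0)
        w = np.linalg.solve(K, k)
        return float(w[:n] @ resid)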

Partial thin plate splines (ANUSPLIN code; Hutchinson 1991)

Smoothing splines allow both the smoothness and exactness of a surface fit to be considered together. In the case of the partial thin plate splines used for this study, linear submodels are incorporated, the coefficients for which may be determined simultaneously within the solution of the spline (Wahba 1983). The partial thin plate spline methodology of Hutchinson (1991), unlike the spline functions implemented in proprietary GIS, uses automatic generalized cross validation to optimize the degree of smoothing so that it does not produce significant departures from the data points. In this case, the z(xi) are estimated by the (suitably smooth) function F that minimizes

 
Σi=1,…,N [z(xi) − F(xi) − Σk=1,…,p BkΨk(xi)]² + λI(F),

where λ is the smoothing parameter, I(F) is the “smoothness seminorm,” Ψ is the Euclidean distance scaling function with coefficients B, and p is the number of partial linear covariates. The smoothness seminorm is a measure of the roughness of the function, defined in terms of the order of its derivatives (Hutchinson 1991). Because each parametric model uses all data points in its construction, the effect of incorporating the topoclimatic and land cover–related variables is equivalent to that achieved by “detrending” the IDW and ordinary kriging functions (above). In the thin plate spline seminorm used by Hutchinson (1991), which depends only on distance and is thus isotropic, this minimization balances an exactness condition against a competing smoothing condition.
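
ANUSPLIN solves the partial spline and its generalized cross validation internally; as a rough, functionally similar surrogate only (exploiting the detrending equivalence noted above), one could smooth the regression residuals with a thin plate spline and choose the smoothing parameter by leave-one-out cross validation, for example with scipy.interpolate.RBFInterpolator (SciPy 1.7 or later). The candidate smoothing values below are arbitrary, and this is not the ANUSPLIN algorithm itself.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def loo_rms(xy, resid, lam):
        """Leave-one-out rms error of a smoothed thin plate spline of residuals."""
        errs = []
        for i in range(len(resid)):
            keep = np.arange(len(resid)) != i
            f = RBFInterpolator(xy[keep], resid[keep],
                                kernel="thin_plate_spline", smoothing=lam)
            errs.append(resid[i] - f(xy[i:i + 1])[0])
        return float(np.sqrt(np.mean(np.square(errs))))

    def choose_smoothing(xy, resid, candidates=(0.0, 0.01, 0.1, 1.0, 10.0)):
        """Grid search for the smoothing parameter with the lowest leave-one-out
        rms error (a simple surrogate for generalized cross validation)."""
        return min(candidates, key=lambda lam: loo_rms(xy, resid, lam))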

This automation of critical parameters within the partial thin plate spline process is important in the context of this study, for which surface equations are required for multiple days. Although the order of spline may theoretically be optimized as part of this process, limitations in the spline software used (ANUSPLIN; Hutchinson 1991) necessitated that the order be defined ahead of time. A number of possible spline models of varying order were selected for consideration. In addition, the form of the spline function was varied (Table 1). On the evidence of Laughlin et al.'s (1993) use of trivariate splines for modeling frost hollow effects upon daily temperature minima, trials of both bivariate and trivariate splines were warranted. In the 3D model, elevation was used as the third independent splining factor; in the 2D model, elevation was one of a number of partial linear covariates determined by the regression analyses.

Table 1. Spline models used within experiments.

Experiments

Using the methodological structure described in section 2c above, the results of interpolating daily maximum and minimum temperatures over England and Wales throughout the annual cycle are discussed with reference to the following issues:

  • the effect of increasing numbers of guiding covariates on interpolation accuracy, for trend surface analysis, partial thin plate splines, inverse distance weighting, and ordinary kriging; and

  • consideration of the “best” interpolation technique for the purpose of interpolating maximum and minimum temperatures for use in subsequent ecological modeling contexts.

Methods for evaluating the accuracy of interpolations vary considerably within the literature. Because the size of the initial dataset available for this study (174 points over England and Wales) was near the minimum appropriate for adequate variogram construction, no data were reserved for fully independent testing. Rather, jackknife cross validation was used to construct directly comparable error statistics for each interpolation technique.

In this paper, interpolation results are presented from two quantitative perspectives. First, daily root-mean-square (rms) error, computed using semi-independent residuals (actual minus estimated temperature) from the jackknife cross-validation process, was averaged for 1976. Second, the daily variations in error throughout 1976 are reviewed using rms statistics. Such temporally aggregated reporting is common within the literature, but it is still unusual to analyze the errors on a daily basis over a complete annual cycle.
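
A minimal sketch of the daily jackknife calculation described above is given below; interpolate_day stands in for any of the four interpolators, fitted each time without the withheld station, and is hypothetical.

    import numpy as np

    def daily_jackknife_rms(xy_stn, covars_stn, temp_stn, interpolate_day):
        """Leave-one-out (jackknife) rms error for one day's station data.
        interpolate_day(xy_fit, cov_fit, z_fit, xy_out, cov_out) should return
        an array of temperature estimates at xy_out."""
        n = len(temp_stn)
        resid = np.empty(n)
        for i in range(n):
            keep = np.arange(n) != i
            est = interpolate_day(xy_stn[keep], covars_stn[keep], temp_stn[keep],
                                  xy_stn[i:i + 1], covars_stn[i:i + 1])
            resid[i] = temp_stn[i] - est[0]       # actual minus estimated
        return float(np.sqrt(np.mean(resid**2))), resid

    # Annual aggregate: average the daily rms values over every day of 1976.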

Results

Trials to settle the form and parameters of each interpolation function

Trend surface

For maximum and minimum temperatures, considerable improvement in interpolation accuracy was gained by increasing the order of the trend connected with the spatial coordinates to 3 while retaining linear guiding variables.

Partial thin plate splines

Both second- and third-order splines were explored, as were splines that incorporated elevation as an adaptive, independent spline variable rather than a linear (partial) covariate. Increasing the order of spline above the second-order model worsened accuracies for maximum temperatures more than for minima, increasing the average daily rms error by up to 0.4°C (40% of the best estimate). The results confirmed the choice of “standard” second-order splines for interpolating both maximum and minimum daily temperatures. Further improvements were best obtained by incorporating additional covariates. It was possible that the technique would oversmooth the results, for minimum temperatures in particular, but diagnostic trace parameters reported over the full annual cycle of 1976 confirmed that this oversmoothing rarely occurred.

Detrended inverse distance weighting

Of major importance in the case of inverse-distance-weighted interpolation is the power parameter, selected automatically in this study by the Levenberg–Marquardt method. Although a maximum power of 10 was used to constrain the interpolation system, this default was necessary only for a minority of days in the year (approximately 4–5 for both maximum and minimum temperatures) when little spatial autocorrelation (or unrepresented trend) was present in the data. On average, the power parameter was estimated at 1.77 for maximum temperatures and 1.99 for minima.

Detrended ordinary kriging

As when computing the power parameter each day for inverse distance weighting, the computational overhead associated with automatically computing daily variograms was considerable. Following visual inspection of datasets for nine days of differing weather type and season, the exponential model was selected for sole use in the automatic analyses to minimize the number of possible options.

Variations in the variogram range, both for maxima and minima, were considerable. The average range for minimum temperatures (82 km) was much shorter than that for maxima (154 km) and had a slightly higher standard deviation throughout the year. In terms of the modal distribution, however, the modal range for maxima was 172 km yet was only 22 km for minima.

Quantitative comparison between interpolation methods

Sensitivity of interpolation techniques to the incorporation of guiding covariates

This section reports results showing the relative improvement of standard interpolation techniques with additional covariates known to be statistically significant from previous regression analyses. The working hypothesis for this paper is that making use of spatial autocorrelation to predict temperatures may be a more efficient way of increasing interpolation accuracies than continuing to add more than a small number of further guiding covariates. Sophisticated interpolators such as kriging should be able to use the power of spatial coherence in the temperature data to compensate for a lack of knowledge of localized processes, as long as major trends in the data have been removed. This should not, however, be interpreted to suggest that the improved incorporation of process has little theoretical validity, but rather that taking advantage of spatial correlation in the temperature data may be more effective than relying on an increasingly complex but crude linear additive model to represent these processes.

Accuracies of the interpolated daily maximum and minimum temperatures, by number of guiding variables and interpolation technique, are illustrated in Fig. 2. The results summarize the daily rms values and the variability of the residual errors using their standard deviation, aggregated to provide annual averages. For splining and trend surface methods, the benefits of more than one additional covariate are minimal for maximum temperatures (Fig. 2a). For minimum temperatures, up to three additional covariates improve the average daily rms errors. Relative to the figures for trend surface analysis, partial thin plate splines perform considerably better with fewer guiding variables. This result may be attributed to the intrinsic capability of the partial thin plate spline method in comparison with the less flexible and predetermined form of the trend surface. With an increased number of covariates, however, the rms differences between the techniques remain at approximately 0.15°C.

Fig. 2. Annual average rms error by interpolation method and number of covariates of (a) maximum and (b) minimum daily temperatures, 1976.

Aggregate rms accuracy results for inverse-distance-weighted interpolation fall between those of the two global methods reported above. The benefit of additional covariates is marginal after six variables in the case of maxima (Fig. 2a) and after five for minima (Fig. 2b), with a more gradual reduction in error than for trend surface interpolation. Comparisons between detrended ordinary kriging as the more sophisticated local interpolator and automated inverse distance weighting for both maximum and minimum temperatures show surprisingly small differences in rms error. Overall, the results of Fig. 2 indicate that the rewards to be gained by incorporating increasing numbers of guiding variables are finite. This may, in part, relate to the problems associated with multiple linear regression and daily variability in the most appropriate gridded variables required.

Annual average error, by interpolation technique

An analysis of variance on the data summarized in Table 2 suggested that the choice of interpolator has a significant effect on resultant accuracies, but that the incorporation of additional covariates can be more influential than interpolation method alone in minimizing error. However, as Table 2 also indicates, when using appropriate guiding variables the differences in accuracy between all interpolation techniques do become small, albeit with different numbers of covariates needed to achieve this result.

Table 2. Average daily rms error by interpolation method of maximum and minimum daily temperatures, 1976.

Hutchinson (1991) has suggested that, without inferring superiority to either method, in general “the only serious competitor to thin plate splines is the method of kriging.” Table 2 suggests that while both splining and kriging perform well for the interpolation of daily maximum and minimum temperatures, accuracies using automatic inverse distance weighting are competitive (significantly different by only a small margin). Overall, partial thin plate spline methods with 2–3 independent covariates provided the best performance for interpolating both maximum and minimum temperatures in the case of this dataset (174 points) for England and Wales.

It is commonly suggested, on the basis of an empirical study (Laslett 1994), that kriging might be expected to outperform splines when interpolating irregular data. The similar performances achieved by both methods in this case, however, support Hutchinson and Gessler's (1994) theoretically based argument that thin plate splines and kriging are intrinsically similar and that, if well-calibrated, should perform with similar accuracy.

Variation in daily estimation errors throughout the year

Daily fluctuations in cross-validated rms error for maximum and minimum temperatures were similar when compared across interpolation techniques. Following the findings of the preceding subsection, which suggested that the partial thin plate spline performed most accurately for this dataset, daily rms errors for maximum and minimum temperatures (Fig. 3) were graphed for this method only. That the absolute rms errors for maximum temperatures increase and decrease broadly with day length, peaking at the end of June, is not surprising. Superimposed on this overall trend, however, are a number of higher-frequency variations. A number of these error peaks may be attributed to a higher range of base temperatures under warmer than average weather for the time of year. Yeardays 65, 184, 237, and 266 provide examples of this situation. However, a moderate (r2 = 0.5) relationship between rms error and the variance of the underlying temperature data was suggestive of difficulties interpolating under particular weather patterns. Peaks in error on yeardays 127, 160, 211, and 338 occurred on days of high variability rather than on days with above-average temperatures. In the case of maximum temperatures, high variability in the absence of unusually high temperatures is associated with passing fronts, for example under westerly/low pressure systems.

Fig. 3. Daily variation in rms error (°C) over the annual sequence when interpolating maximum and minimum temperatures by partial thin plate spline interpolation, 1976.

For minimum temperatures there was no marked relationship between daily average temperature and rms error. Rather, rms error relates more strongly to the overall range of temperatures over England and Wales to be interpolated on a particular date and, to a greater extent, their “within-day” variance throughout the country.

In comparison with other methods, partial thin plate splines were consistently accurate interpolators on a day-to-day basis. For minimum temperatures, splines similarly are associated with the lowest daily errors on most dates. In a British ecological context, minimum temperatures tend to be the principal constraint to the biological development of many species, and this consistent performance of partial thin plate splines is therefore particularly valuable. Trend surface interpolation produced large “spikes” in daily rms error for both maximum and minimum temperatures. The order of the trend was fixed on the basis of annual aggregate results to a third-order function rather than adaptively on a day-by-day basis within the interpolation system as for the other methods. In terms of both aggregate error and capturing extreme values, it is therefore not surprising that trend surface analysis should have performed particularly poorly. During the spring, however, trend surface results do on occasion outperform the other methods by a considerable margin. This result may relate to very mixed weather patterns over the country, which make temperatures highly nonstationary at this time of year. This advantage of trend surfaces must be balanced, however, against their highest prediction errors of the four techniques during the majority of the year.

When using “automatic” inverse distance weighting, larger errors were found for maximum temperatures than for minimum temperatures. The “averaging” nature of kriging is reflected in the lack of extreme error results for this technique, whose interpolation estimates will tend toward the mean in circumstances of uncertainty (e.g., sparse data).

Discussion

Areas for further exploration

The size of the daily interpolation task over the annual cycle necessitated the use of a consistent set of guiding covariates from day to day, albeit with adaptive strengths of correlation with temperature. This use is reflected in a highly variable day-to-day pattern of error (Fig. 3). As noted in the preceding paper, improving the categorization of weather pattern further and incorporating this information within the interpolation process, or accounting for the differing effects of frontal systems by using local rather than global regression, are possible means to improve further the day-to-day adaptability of the interpolators.

In addition to the adaptability of the parameters associated with covariates, automatic parameter derivation was shown to be important as part of the interpolation procedure for achieving the flexibility and robustness required for daily usage. However, automatic fitting is only justifiable in cases for which the variability of the underlying data demands it and the accuracy of fit achieved is acceptable. In this case, parameters of the interpolation methods, such as the power function or variogram range, were successfully selected using automatic methods for inverse distance weighting, kriging, and partial thin plate splines for most dates. The results suggest that automatic variogram modeling, despite its theoretical weaknesses, can be used to krige results that are competitive with those of partial thin plate splines. Improving both the computational efficiency of this task and the manner in which surface anisotropy is captured are areas worthy of further study. For all interpolation techniques, partial thin plate splines included, parameter estimation was unsuccessful on a few dates. An improved sample of data, at short lags in particular, together with an increased volume of data overall, might assist with this problem, given the national extent of the study and the limited dataset available for use (174 points).

Implications with respect to previous work

Covariate sets for estimating temperatures have been reviewed previously for daily minimum temperatures (e.g., Cornford 1997), but the effect of incorporating increasingly detailed topoclimatic factors has rarely been consistently assessed in conjunction with different interpolation methods. Indeed, many other reviews of temperature interpolations report comparative results that have been made between interpolation techniques using different degrees of detrending (e.g., Bolstad et al. 1998). The results from this experimental comparison, which has not been undertaken previously for daily data over such a large area or for an annual series, suggest that including a limited number of guiding variables can allow partial thin plate splines, detrended ordinary kriging, and detrended IDW to improve their performance to similar levels of estimation accuracy for daily temperatures with this density of observations. The more sophisticated interpolation by splines and kriging required fewer covariates to achieve similar accuracies.

Our results also show that there is a limit to the number of additional guiding variables that can be included to yield improved estimation accuracy. For a well-controlled, continuous variable such as temperature, this number is relatively small (approximately 3–4 well-chosen guiding variables). This result suggests that studies such as Cornford's (1997), which focused heavily on seeking additional guiding variables, may “overengineer” the resultant model. This is particularly the case for the more mathematically sophisticated automated interpolation algorithms (partial thin plate splines and ordinary kriging) as opposed to the less spatially flexible method of trend surface analysis. In the cases of ordinary kriging and inverse distance weighting, it appears that exploiting the measurable local spatial coherence of observations can often compensate for a lack of specificity regarding physical processes. For partial thin plate splines, which adaptively model the global variability of the surface, similar conclusions may be drawn. Ordering the interpolation techniques according to their overall sophistication and flexibility (trend surface, IDW, splines = kriging), the numbers of guiding variables required to produce the best result in terms of rms error and its variation throughout the year follow a similar sequence. However, of the major climate variables, temperature is known to be particularly spatially coherent. It may be that when interpolating a noisier variable, such as rainfall, a greater emphasis on the incorporation of process-based information will be more critical to improve interpolation accuracies. The balance of these physical and geostatistical properties will determine the most efficient method, assuming that a sufficient density of data is available to allow all methods to be implemented.

Comparisons among the overall accuracies of the interpolation techniques show that partial thin plate splines perform well for both maximum and minimum temperatures. The rms values reported in Table 2 of 0.80°C for maximum temperatures and 1.14°C for minima compare well with other results from the literature. Landau and Barnett (1996), for example, report rms accuracies of 0.96° and 1.25°C for maximum and minimum temperatures when interpolating using a slightly larger number of sites (212) for 1987. Their results for 1985 were slightly better (0.91° and 1.07°C, respectively), indicating the dangers of overprecise comparisons between different datasets and years. This suggests that the performance of the minimum temperature model in this study was, in relative terms, a little poorer than anticipated. However, the rms errors reported in Table 2 were produced using jackknife cross-validation procedures, whereas Landau and Barnett's were not independent of the data used for modeling, in either a true or an approximate sense. Cornford's (1997) bootstrapped estimate of winter minima of 1.16°C is more comparable with the jackknife approach of this study (1.01°C), because both assessments were made using semi-independent cross-validation techniques. However, Cornford was working to a landscape resolution of 500 m and used greater volumes of data and more supporting land cover information.

More generally, although comparisons between methods for interpolating temperatures at a monthly level are now well established, examples interpolating daily maximum and minimum temperatures at a national level to a 1-km resolution through all seasons, as in this case, are rare. As Price et al. (2000) observe, considerably more effort is required when interpolating long climate histories over extensive regions relative to the interpolation of long-term averaged data. This is analogous to the difference in effort required when interpolating sequences of daily, as opposed to limited monthly, datasets as in this case. Consideration of the fluctuations in error throughout the annual cycle, in addition to providing a more robust annual aggregate error than one based on a few dates only, is important when considering the use of the interpolated data as input to ecological or environmental models. Comparisons between partial thin plate splines and ordinary kriging are unusual, as are those using “optimal” inverse distance weighting in addition, especially when considering daily as opposed to monthly datasets. This is especially the case within the context of an applied agricultural study, where traditional methods are most commonly used. This is believed to be the first comprehensive comparison among these four techniques for interpolating daily maximum and minimum temperatures, and partial thin plate splines have only occasionally been explored on a local basis for daily temperatures previously (Laughlin et al. 1993).

Conclusions

The sequence of daily maximum and minimum temperatures for 1976 was interpolated over England and Wales to a resolution of 1 km using partial thin plate splines, ordinary kriging, trend surface, and automatic inverse-distance-weighted interpolation. Rarely have these methods been compared as interpolators for daily maximum and minimum temperatures over large areas and throughout the annual cycle. By providing a level playing field among methods through the incorporation of a consistent set of guiding variables, it was established that, for this dataset, differences between partial thin plate splines, ordinary kriging, and inverse distance weighting results were not significant, although the performance of trend surface analysis was poorer. Best accuracies were achieved using partial thin plate splines, with jackknife cross-validation rms errors of 0.80°C for maximum temperatures and 1.14°C for minimum temperatures. This is believed to be close to the limits achievable with the given dataset, although it is suggested that improvements to the day-to-day adaptability of the method might reduce the overall variance further. Partial thin plate splines were also computationally efficient in comparison with the other methods implemented in this study, an important consideration within a broader ecological modeling context.

The results from this study suggest that sole reliance on the selection of guiding variables, especially where the selection process uses techniques such as multivariate linear regression, is a less efficient means of achieving the required accuracies than placing greater reliance on empirical techniques of interpolation that can account for known autocorrelation in the temperature data. The improved use of guiding variables considerably narrows the gap between the performance of the different interpolators, although more sophisticated interpolators require fewer covariates to achieve similar accuracies. Overall, it was concluded that the inclusion of values from nearby meteorological stations into the estimation of temperatures proved to be a more efficient means of achieving greater accuracy than did increased attention to the selection of additional guiding variables.

Acknowledgments

This work was initiated under a collaborative Ph.D. studentship for Claire Jarvis at the Department of Geography, University of Edinburgh, funded by the Central Science Laboratory (CSL). Thanks are owed to Dr. R. Baker (CSL) for initial work on selecting UKMO stations for analysis, to Dr. M. Hims (CSL) for his ongoing interest in the work, and to Dr. R. Harker (UKMO) for providing UKMO meteorological data. The partial thin plate spline results were computed using the ANUSPLIN software, made available to the project by Dr. M. Hutchinson, whose comments on an earlier draft of this paper were appreciated. Neil Stuart is grateful to the Royal Society of Edinburgh for the grant of a Research Fellowship for 2000, during which this paper was completed.

REFERENCES

Aber, J. D., and C. Federer, 1992: A generalised, lumped parameter model of photosynthesis, evaporation and net primary production in temperate and boreal forest ecosystems. Oecologia, 92, 463–474.
Bolstad, P. V., L. Swift, F. Collins, and J. Régnière, 1998: Measured and predicted air temperatures at basin to regional scales in the southern Appalachian Mountains. Agric. For. Meteor., 91, 161–176.
Collins, F., and P. V. Bolstad, 1996: A comparison of spatial interpolation techniques in temperature estimation. Proc. Third Int. Conf.-Workshop on Integrating GIS and Environmental Modelling, Santa Fe, NM, National Centre for Geographical Information and Analysis, CD-ROM.
Cornford, D., 1997: The development and application of techniques for mapping daily minimum air temperatures. Ph.D. thesis, The University of Birmingham, 161 pp. [Available from The University of Birmingham, Edgbaston, Birmingham B15 2TT, United Kingdom.]
Cressie, N., 1991: Statistics for Spatial Data. John Wiley and Sons, 900 pp.
Deutsch, C. V., and A. G. Journel, 1992: GSLIB Geostatistical Software Library and User's Guide. Oxford University Press, 340 pp.
Goodale, C. L., J. D. Aber, and S. V. Ollinger, 1998: Mapping precipitation, temperature and solar radiation for Ireland with polynomial regression and a digital elevation model. Climate Res., 10, 35–49.
Hodgson, M. E., 1993: Sensitivity of spatial interpolation models to parameter variation. Proc. ASPRS-ACSM Annual Convention 1992, Albuquerque, NM, American Congress on Surveying and Mapping.
Hutchinson, M. F., 1991: The application of thin plate smoothing splines to continent-wide data assimilation. Data Assimilation Systems, J. D. Jasper, Ed., BMRC Research Report No. 27, Bureau of Meteorology, Melbourne, Australia, 104–113. [Available from Bureau of Meteorology, G.P.O. Box 1289K, Victoria 3001, Australia.]
——, and P. E. Gessler, 1994: Splines—more than just a smooth interpolator. Geoderma, 62, 45–67.
Ishida, T., and S. Kawashima, 1993: Use of co-kriging to estimate surface air temperature from elevation. Theor. Appl. Climatol., 47, 147–157.
Jarvis, C. H., and N. Stuart, 2001: A comparison among strategies for interpolating maximum and minimum daily air temperatures. Part I: The selection of “guiding” topographic and land cover variables. J. Appl. Meteor., 40, 1060–1074.
Landau, S., and V. Barnett, 1996: A comparison of methods for climate data interpolation, in the context of yield predictions from winter wheat simulation models. Aspects Biol., 46, 13–22.
Laslett, G. M., 1994: Kriging and splines: An empirical comparison of their predictive performance in some applications. J. Amer. Stat. Assoc., 89, 391–400.
Laughlin, G. P., M. F. Hutchinson, and B. G. Mackey, 1993: An intuitive approach to analysing small point-source spatial data sets. Int. J. GIS, 7, 21–38.
Lennon, J. J., and J. R. G. Turner, 1995: Predicting the spatial distribution of climate: Temperature in Great Britain. J. Anim. Ecol., 64, 370–392.
Mitás, L., and H. Mitásová, 1999: Spatial interpolation. Geographical Information Systems, P. A. Longley et al., Eds., John Wiley and Sons, 481–492.
Nalder, I. A., and R. W. Wein, 1998: Spatial interpolation of climate normals: Test of a new method in the Canadian boreal forest. Agric. For. Meteor., 92, 211–225.
Pannatier, Y., 1996: VARIOWIN: Software for Spatial Data Analysis in 2D. Springer, 91 pp.
Philip, G. M., and D. F. Watson, 1986: Matheronian geostatistics—quo vadis? Math. Geol., 18, 93–117.
Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, 1992: Numerical Recipes in Fortran 77. Cambridge University Press, 933 pp.
Price, D. T., D. W. McKenney, I. A. Nalder, M. F. Hutchinson, and J. Kesteven, 2000: A comparison of two statistical methods for spatial interpolation of Canadian monthly mean data. Agric. For. Meteor., 101, 81–94.
Running, S. W., R. R. Nemani, and R. D. Hungerford, 1987: Extrapolation of synoptic meteorological data in mountainous terrain and its use for simulating forest evapotranspiration and photosynthesis. Can. J. For. Res., 17, 472–483.
Russo, J. M., A. M. Liebhold, and J. G. W. Kelley, 1993: Mesoscale weather data as input to a gypsy moth (Lepidoptera: Lymantriidae) phenology model. J. Econ. Entomol., 86, 838–844.
Supit, I., 1997: Predicting national wheat yields using a crop simulation and trend models. Agric. For. Meteor., 88, 199–214.
Van der Goot, E., 1998: Spatial interpolation of daily meteorological data for the crop growth monitoring system (CGMS). Proc. Data Spatial Distribution in Meteorology and Climatology, Volterra, Italy, Office for the Official Publications of the European Communities, 141–154.
Varekamp, C., A. K. Skidmore, and P. B. Burrough, 1996: Using public domain geostatistical and GIS software for spatial interpolation. Photogramm. Eng. Remote Sens., 62, 845–854.
Wahba, G., 1983: Bayesian ‘confidence intervals' for the cross validated smoothing spline. J. Roy. Stat. Soc. B, 45, 133–150.

Footnotes

Corresponding author address: Dr. C. H. Jarvis, Dept. of Geography, University of Edinburgh, Drummond Street, Edinburgh EH8 9XP, Scotland. chj@geo.ed.ac.uk