## 1. Introduction

Within the framework of trend analysis, simple linear least squares (LS) regression models are widely used and allow for an extrapolation of different atmospheric variables into the future (e.g., Born 1996; Dai et al. 1997; Zerefos et al. 2003; Norris 2005; Solomon et al. 2007). Although linear regression models have been used successfully, a number of difficulties arise with the conceptual framework of linear trend analysis and its applicability to problems of atmospheric and climatic science. The mathematical framework of linear LS regression analysis crucially depends on the assumptions of independent observations and normally distributed error terms with constant variance. However, these assumptions are violated in many applications, which potentially leads to unreliable results of the LS regression (von Storch and Zwiers 1999). Furthermore, statistical outliers in the data pose a problem for the LS trend estimation because the LS estimator can react very sensitively to outlying observations (e.g., Rousseeuw and Leroy 1987; Helsel and Hirsch 1992; Wilcox 1998; von Storch and Zwiers 1999; Trömel and Schönwiese 2005). The problem that a single outlying observation may suffice to severely influence the LS regression estimator makes the LS regression a nonrobust method. As a consequence of outliers in the data, trends may be biased or masked, which may lead to a different interpretation of the data. Statistical outliers also affect significance levels and have major implications for the reliability of confidence intervals and hypothesis tests (Rousseeuw and Leroy 1987; Wilcox 1998). Despite these well-known problems, linear (parametric) LS models are widely used in atmospheric and climatic research, and applications of nonparametric or robust parametric approaches are scarce (Huth and Pokorna 2004). Such approaches are better developed in other research fields, for example, robust signal extraction (Davies et al. 2004; Bernholt et al. 2006; Fried et al. 2006), hydrology (Barbosa et al. 2004), and chemistry (Ortiz et al. 1996; Daszykowski et al. 2007).

The main objective of this paper is to demonstrate how the choice of the regression estimator can affect the results of trend estimation and the interpretation of trends in climatic science. We therefore draw examples from temperature and precipitation records in Switzerland and compare trend results from ordinary LS regression with trend estimates from robust parametric regression models as well as from standard nonparametric approaches.

## 2. Methods

### a. Classical linear regression

The classical linear regression model can be written as

**y** = 𝗫**θ** + **e**,  (1)

where 𝗫 is the *n* × *p* matrix that contains the *p* vectors of explanatory variables (each of length *n*) and **y** is the vector of response variables (e.g., von Storch and Zwiers 1999). The matrix 𝗫 is referred to as a design matrix, and the explanatory variables are often called the predictors of the model. The vector **θ** contains the *p* (unknown) coefficients (also called parameters) of the linear model. The elements *e _{i}* (1 ≤ *i* ≤ *n*) of the error term **e** are considered to satisfy a Gaussian distribution with mean *μ* = 0 and unknown but constant variance *σ*^{2}.

Given the data (𝗫, **y**), one approximates the unknown parameters **θ** by the estimated parameters **θ̂** so that the residuals **r** = **y** − **ŷ** between the observed values **y** and the estimated values **ŷ** = 𝗫**θ̂** are minimized. The classical approach is the method of least squares (the *L* _{2} estimator), which corresponds to the minimization of the sum *Q* of the *n* squared residuals with respect to the coefficients **θ̂**:

*Q* = Σ _{i=1}^{n} *r _{i}*^{2} = Σ _{i=1}^{n} (*y _{i}* − **x** _{i}^{T}**θ̂**)^{2} → min.  (2)

The estimates **θ̂** satisfying the minimization criterion (2) are given analytically by the normal equation (von Storch and Zwiers 1999)

**θ̂** = (𝗫^{T}𝗫)^{−1}𝗫^{T}**y**.  (3)
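As an illustration of the normal equation, it can be applied directly to a simple linear trend model. The paper's computations use R; the following is a minimal Python sketch on synthetic, noise-free data:

```python
import numpy as np

def ls_fit(x, y):
    """Ordinary LS fit of y = a + b*x via the normal equation
    theta_hat = (X^T X)^{-1} X^T y."""
    X = np.column_stack([np.ones_like(x), x])  # n x 2 design matrix
    return np.linalg.solve(X.T @ X, X.T @ y)   # [intercept, slope]

# Noise-free line: the estimator recovers intercept 1 and slope 2 exactly.
x = np.arange(10.0)
a, b = ls_fit(x, 1.0 + 2.0 * x)
print(round(a, 6), round(b, 6))  # 1.0 2.0
```

For a trend application, `x` would hold the observation years and `y` the annual values; the slope then carries units of the variable per year.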

### b. Robust linear regression

In the presence of outliers, the breakdown point of the chosen estimator plays an important role (Rousseeuw and Leroy 1987; Rousseeuw and van Zomeren 1990). Following the definition of Hampel (1971) and Hodges (1967), the (finite sample) breakdown point of an estimator is the smallest fraction of contamination that may cause the estimates to take on values arbitrarily far away from the uncontaminated sample estimates. In other words, the breakdown point gives the maximum contamination the data may contain to still provide reliable estimates about the model parameters (coefficients) (Rousseeuw and Leroy 1987; Maronna et al. 2006). For the application, this means that the higher the breakdown value is, the more robust is the estimator. Rousseeuw (1984) showed that the breakdown point of the LS estimator is zero. Thus, a single outlier could perturb the linear trend crucially. Two options to overcome this lack of robustness are the least median of squares (LMS) estimator and the least trimmed squares (LTS) estimator, which are described next.

#### 1) The LMS estimator

Instead of minimizing the sum of the squared residuals, the LMS estimator minimizes the median of the squared residuals (Rousseeuw 1984). Replacing the sum in (2) by the median yields an estimator with the maximal breakdown point of 50%; its drawbacks are a slow convergence rate of *n*^{−1/3} and a low statistical efficiency (Rousseeuw 1984).
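The LMS criterion, minimizing the median of the squared residuals (Rousseeuw 1984), can be sketched for the simple linear case by scoring candidate lines through pairs of data points, similar in spirit to the elemental-subset search of the PROGRESS algorithm. The following Python sketch with synthetic data is an illustration only, not the software used in the paper:

```python
import itertools
import numpy as np

def lms_fit(x, y):
    """Approximate LMS line: among all lines through pairs of data
    points, keep the one with the smallest median squared residual."""
    best, best_med = None, np.inf
    for i, j in itertools.combinations(range(len(x)), 2):
        if x[i] == x[j]:
            continue  # vertical line, no finite slope
        b = (y[j] - y[i]) / (x[j] - x[i])
        a = y[i] - b * x[i]
        med = np.median((y - (a + b * x)) ** 2)
        if med < best_med:
            best_med, best = med, (a, b)
    return best

# A clean line y = x with two gross outliers: LMS still recovers it.
x = np.arange(20.0)
y = x.copy()
y[18] += 50.0
y[19] += 60.0
a, b = lms_fit(x, y)
print(round(a, 6), round(b, 6))  # 0.0 1.0
```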

#### 2) The LTS estimator

The LTS estimator minimizes the sum of the *h* smallest squared residuals over a subset of *h* out of *n* data points (Rousseeuw 1984). Note that the residuals *r _{i}* are first squared and then ordered according to their size. If the subset size *h* is chosen to be *h* = (*n*/2) + 1, the breakdown point of the LTS estimator is the same as for the LMS while the convergence rate is much higher (Rousseeuw 1984; Verboven and Hubert 2005). In contrast to the LMS estimator, the statistical efficiency of the LTS estimator is improved, which is a further benefit of this method (Rousseeuw and van Driessen 2006). For a comprehensive review of the mathematical properties of the LTS estimator, we refer to Agullo et al. (2008). Methods to construct confidence intervals can be found in appendix A or alternatively in Willems and van Aelst (2005).
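The trimmed criterion can be illustrated by enumerating candidate lines through pairs of points and scoring each by the sum of its *h* smallest squared residuals. This Python sketch with synthetic data is an illustration only; production code would use an efficient algorithm such as FAST-LTS (Rousseeuw and van Driessen 2006):

```python
import itertools
import numpy as np

def lts_fit(x, y, h=None):
    """Approximate LTS line: score each candidate line through a pair
    of points by the sum of its h smallest squared residuals."""
    n = len(x)
    if h is None:
        h = n // 2 + 1  # subset size giving the maximal breakdown point
    best, best_q = None, np.inf
    for i, j in itertools.combinations(range(n), 2):
        if x[i] == x[j]:
            continue
        b = (y[j] - y[i]) / (x[j] - x[i])
        a = y[i] - b * x[i]
        q = np.sort((y - (a + b * x)) ** 2)[:h].sum()  # trimmed sum
        if q < best_q:
            best_q, best = q, (a, b)
    return best

# Three gross outliers cannot break the trimmed fit of y = 2x + 3.
x = np.arange(20.0)
y = 2.0 * x + 3.0
y[:3] += 40.0
a, b = lts_fit(x, y)
print(round(a, 6), round(b, 6))  # 3.0 2.0
```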

Throughout this paper we use the freely available statistical software package R (see online at http://www.r-project.org/) to compute the LS, LMS, and LTS estimates for simple linear regression applications.

## 3. Comparison of robust linear regression with temperature and precipitation time series in Switzerland

In this section, the distinct regression estimators discussed in section 2 are applied to several temperature and precipitation time series in Switzerland for the period of 1864–2007. All time series are quality controlled and homogenized (Begert et al. 2003, 2005). Linear trend estimates based on the classical LS regression and on the robust regression are calculated and compared.

### a. Example of annual temperature trends

The time series of the annual mean temperature of the station Lugano serves as a first example of how sensitively the classical LS trend model may react to single or multiple statistically outlying observations. A brief discussion of the detection of outlying observations is given in appendix B. Figure 1a shows the time series of the annual mean temperature for the station Lugano for the period of 1864–2007.

The LS trend model reveals a linearly increasing trend of +0.8°C (100 yr)^{−1} over the given period of 1864–2007, which is highly significant as deduced from the classical *t* statistics (see Table 1). The 95% confidence intervals bracket the LS slope estimate within the bounds [0.0061, 0.0103], which correspond to a temperature increase between +0.6° and +1.0°C (100 yr)^{−1}. In contrast, the robust parametric LTS and LMS methods show a much weaker trend or almost no trend: the centennial increase in annual mean temperature for Lugano is +0.01°C for the LMS and +0.6°C for the LTS. Note that the robust solutions are not included within the LS 95% confidence bounds. Hence, the LMS and LTS trend lines are different and statistically distinguishable from the LS solution (see also Table 2).

The indicated outliers in annual mean temperature are the early years of the last century, the observations in the last two decades, and the recent years with heat records (e.g., Schär and Jendritzky 2004). The outliers are based on the 97.5th percentile of the normal distribution and are obtained from the robust standardized residuals *r _{i}*/*σ*\* shown in Fig. 2, as described in more detail in appendix B. Thus, these outliers may be seen as the extreme events in the time series of Lugano. Note that the standardized residuals of the LS differ substantially from the LMS residuals in terms of outlier diagnosis. The LMS residuals show that the extreme observations in the early and recent years are influential points that attract the LS trend line crucially. In this case, the outlying observations yield an overestimation of the LS trend with respect to the trends given by the robust estimators. This example also illustrates the clear advantage of using the robust standardized residuals for detecting outlying observations in comparison with the classical standardized LS residuals. The residuals of the LS and LMS differ because the scale estimate *σ̂* itself depends on the estimated trend line (and thus on the underlying regression estimator) and, hence, is a nonrobust measure.

The assumptions of normally distributed errors and homoscedasticity (constant variance) are not satisfied in this specific example (especially for the years after 1990), as can be seen more clearly from the LMS residuals or the normal probability plot (Figs. 2e,f). These violations of the model assumptions call into question not only the reliability of the estimates obtained by the classical LS method, but also the inferences such as the significance in terms of the *t* statistics, the coefficient of determination, or the confidence intervals.

The linearly increasing annual mean temperature trend for the station Bern (Fig. 1b) is affirmed independently by all three methods, suggesting that the warming observed at this station is a robust signal. The LS trend estimate gives a temperature increase of approximately +1.2°C (100 yr)^{−1}. The 95% confidence intervals are [0.0090, 0.0140] and, hence, bracket the centennial temperature increase between +0.9° and +1.4°C. Furthermore, the LS confidence intervals include the LMS and LTS solutions (see Table 2). In fact, the LMS and LTS slope estimates are very close to the LS slope estimate and yield temperature trends of +1.2° and +1.3°C (100 yr)^{−1}, respectively. Given the LS uncertainty range, one may interpret the robust trend estimates as statistically not distinguishable.

The LMS residuals unmask several observations as statistically outlying, which again attract the LS trend estimate toward these points (see also Fig. 2). Many of these outliers would have been identified equally well from the LS residuals; the observation from 1868, however, is not flagged by the LMS standardized residuals and, in fact, is not an extreme event. This observation nevertheless attracts the LS trend estimate and may explain why the LS trend is slightly lower than the robust trends. Furthermore, it can be seen from the normal probability plot (see Fig. 2) that the assumption of normally distributed residuals is violated. However, the few outlying observations together with the violation of the model assumptions only marginally affect the LS trend estimate in this example.

### b. Example of annual precipitation trends

In the second example, we compare the trends in the time series of annual precipitation for the stations Davos and Chaumont for the period of 1864–2007 (Fig. 3). All precipitation trends are subsequently given as percentage change per 100 years with respect to the 1961–90 average.

The LTS and LMS trend estimators both support a statistically significant (95% confidence level) increasing linear trend of approximately +8% for the annual precipitation in Davos (Fig. 3a). In contrast, the LS estimator only reveals a very weak trend in annual precipitation of approximately +2% (100 yr)^{−1} that is not statistically significant at the 95% confidence level. Note that the confidence intervals for the LS slope bracket the LS precipitation trend between −4% and +8%. Thus, the LS confidence intervals barely include the LMS and LTS solutions but would also allow for negative trends.

From the LMS and LS standardized residuals shown in Fig. 4 several statistically outlying observations can be identified. However, a subset of outliers between 1860 and 1930 that is only identified based on the LMS standardized residuals influences the LS trend line remarkably and, thus, masks the increasing precipitation trend of the station Davos.

The positive precipitation trend for 1864–2007 estimated for the station Chaumont (Fig. 3b) can be reproduced by all three methods and, hence, is a robust trend. Again, as in the case of the temperature in the previous example, the slope estimates and intercept values differ slightly among the regression methods, placing the precipitation increase at +8% (LS), +10% (LMS), and +11% (LTS) per century.

The 95% confidence bounds for the LS slope include the LTS and LMS solution. However, several dry years that yield outlying observations influence the LS trend estimate and may explain why the LS estimator underestimates the precipitation trend with respect to the robust estimators.

## 4. Discussion and conclusions

The results of section 3 show that the LS estimator can react very sensitively to outlying observations, which affects both the trend estimates and their interpretation. In general, the influence of statistical outliers on the LS estimator tends to be higher toward the boundaries of the time series than in its center part. This feature is especially problematic for temperature trends, in which a strong increase has been observed during the last two decades. Future climate scenarios also suggest an increase in variability and in the frequency of rare and extreme events such as heat waves and heavy precipitation (Katz and Brown 1992; Schär et al. 2004; Seneviratne et al. 2006). The occurrence of such extreme events in turn affects the number of statistically outlying observations. Our examples demonstrate the vulnerability of the LS estimator to these outlying observations and emphasize the necessity of using robust estimators in climatic science. Because robust parametric estimators such as the LTS or LMS are not easily biased in the slope estimate (Davies et al. 2004), we encourage the use of robust estimators in climate-related work to reduce the effect of outliers on trend estimates.

We also compared the classical LS and robust trends against trends derived from nonparametric approaches such as the Spearman rank correlation coefficient (SRCC; Sachs 1984; Hess et al. 2001), the Mann–Kendall test (Gilbert 1987), and Sen's nonparametric estimate of slope (Sen 1968; Hollander and Wolfe 1973). In many cases the trends found with parametric and nonparametric methods are very similar and are mostly included within the 95% confidence interval of the LS, a result also found by Huth and Pokorna (2004). The SRCC and the Mann–Kendall test indicate a positive trend in all examples with a high level of confidence. In qualitative terms, these nonparametric trends correspond to the trend signs found with the LS method, which calls into question the reliability of the SRCC and the Mann–Kendall test in the presence of outlying observations. Sen's slope estimator tends in many cases more toward the trend estimates of the robust methods (when compared with the LS trend) and corroborates the trend signs and magnitudes found by applying the robust parametric methods.
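Sen's estimator mentioned above is simple to state: it is the median of the slopes of all lines through pairs of sample points, which makes it insensitive to a limited number of outliers. A minimal Python sketch on synthetic data:

```python
import itertools
import numpy as np

def sen_slope(x, y):
    """Sen's (1968) nonparametric slope estimate: the median of the
    pairwise slopes (y_j - y_i)/(x_j - x_i)."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in itertools.combinations(range(len(x)), 2)
              if x[i] != x[j]]
    return float(np.median(slopes))

# A single outlier barely moves the median of the pairwise slopes.
x = np.arange(11.0)
y = 0.5 * x
y[10] += 30.0
print(sen_slope(x, y))  # 0.5
```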

Some of our examples from section 3 raise the more general question of whether linear trend models are adequate in applications of climatic science. In particular, temperature time series often show a considerable amount of nonlinearity. For the annual mean temperature of Lugano, the robust methods give a trend that differs remarkably from the LS trend estimate. This suggests that the LS trend estimate is severely attracted by the data points belonging to the warmer period from 1980 to 2007, whereas the robust methods are only weakly influenced. Because the data points in this period are not outliers but rather belong to another population, linear trend models may not properly represent the variability in the data. In contrast, nonlinear methods (e.g., Miksovsky and Raidl 2006) may be the better choice to account for this variability and support the upward temperature trend in the late twentieth century.

In conclusion, the comparisons of ordinary and robust regression methods show that outlying observations may bias the LS trend estimate and lead to over-/underestimation of trends or even trend masking. Hence, trend estimation results and interpretation can be affected, which suggests the use of robust estimators. Based on our findings, the benefits of using robust parametric regression methods are twofold. First, robust parametric regression methods may be used in addition to the classical LS method to check its reliability and reproducibility. Second, the robust standardized residuals provide a useful and simple diagnostic tool to identify outlying observations more reliably than with the standardized residuals of the classical LS approach.

## Acknowledgments

We thank the anonymous reviewers for their valuable comments and suggestions. We are grateful to W. Stahel, M. Mächler (ETH Seminar für Statistik), and A. Ruckstuhl (Zürcher Hochschule Winterthur) for helpful discussions. MeteoSwiss is kindly acknowledged for sharing the data.

## REFERENCES

Agullo, J., C. Croux, and S. van Aelst, 2008: The multivariate least-trimmed squares estimator. *J. Multivariate Anal.*, **99**, 311–338.

Barbosa, S. M., M. J. Fernandes, and M. E. Silva, 2004: Nonlinear sea level trends from European tide gauge records. *Ann. Geophys.*, **22**, 1465–1472.

Begert, M., G. Seiz, T. Schlegel, M. Musa, G. Baudraz, and M. Moesch, 2003: Homogenisierung von Klimareihen der Schweiz und Bestimmung der Normwerte 1961–1990; Schlussbericht des Projekts NORM90 (Homogenization of climatic series of Switzerland and determination of the standard values 1961–1990; final report of the project NORM90). Tech. Rep. 67, MeteoSchweiz, Zürich, Switzerland, 170 pp.

Begert, M., T. Schlegel, and W. Kirchhofer, 2005: Homogeneous temperature and precipitation series of Switzerland from 1864 to 2000. *Int. J. Climatol.*, **25**, 65–80.

Bernholt, T., R. Fried, U. Gather, and I. Wegener, 2006: Modified repeated median filters. *Stat. Comput.*, **16**, 177–192.

Born, K., 1996: Tropospheric warming and changes in weather variability over the Northern Hemisphere during the period 1967–1991. *Meteor. Atmos. Phys.*, **59**, 201–215.

Dai, A., I. Y. Fung, and A. D. Del Genio, 1997: Surface observed global land precipitation variations during 1900–88. *J. Climate*, **10**, 2943–2961.

Daszykowski, M., K. Kaczmarek, Y. V. Heyden, and B. Walczak, 2007: Robust statistics in data analysis—A review. *Chemometr. Intell. Lab.*, **85**, 203–219.

Davies, P., R. Fried, and U. Gather, 2004: Robust signal extraction for on-line monitoring data. *J. Stat. Plan. Infer.*, **122**, 65–78.

Draper, N., and H. Smith, 1966: *Applied Regression Analysis*. John Wiley and Sons, 407 pp.

Fried, R., T. Bernholt, and U. Gather, 2006: Repeated median and hybrid filters. *Comput. Stat. Data Anal.*, **50**, 2313–2338.

Gervini, D., and V. J. Yohai, 2002: A class of robust and fully efficient regression estimators. *Ann. Stat.*, **30**, 583–616.

Gilbert, R., 1987: *Statistical Methods for Environmental Pollution Monitoring*. Van Nostrand Reinhold, 320 pp.

Hampel, F., 1971: A general qualitative definition of robustness. *Ann. Math. Stat.*, **42**, 1887–1896.

Hampel, F., 1975: Beyond location parameters: Robust concepts and methods. *Bull. Int. Stat. Inst.*, **46**, 375–382.

Helsel, D. R., and R. M. Hirsch, 1992: *Statistical Methods in Water Resources*. 1st ed. Elsevier, 522 pp.

Hess, A., H. Iyer, and W. Malm, 2001: Linear trend analysis: A comparison of methods. *Atmos. Environ.*, **35**, 5211–5222.

Hodges, J., 1967: Efficiency in normal samples and tolerance of extreme values for some estimates of location. *Proc. Fifth Berkeley Symp. on Mathematical Statistics and Probability*, Vol. 1, University of California, Berkeley, 163–168.

Hollander, M., and D. A. Wolfe, 1973: *Nonparametric Statistical Methods*. John Wiley and Sons, 503 pp.

Huth, R., and P. Pokorna, 2004: Parametric versus non-parametric estimates of climatic trends. *Theor. Appl. Climatol.*, **77**, 107–112.

Katz, R. W., and B. Brown, 1992: Extreme events in a changing climate: Variability is more important than averages. *Climatic Change*, **21**, 289–302.

Maronna, R. A., R. D. Martin, and V. J. Yohai, 2006: *Robust Statistics*. John Wiley and Sons, 403 pp.

Miksovsky, J., and A. Raidl, 2006: Testing for nonlinearity in European climatic time series by the method of surrogate data. *Theor. Appl. Climatol.*, **83**, 21–33.

Norris, J., 2005: Trends in upper-level cloud cover and surface divergence over the tropical Indo-Pacific Ocean between 1952 and 1997. *J. Geophys. Res.*, **110**, D21110, doi:10.1029/2005JD006183.

Ortiz, M. C., J. L. Palacios, L. A. Sarabia, M. G. Piangerelli, and D. Cingolani, 1996: Regression by least median squares in the calculation of transition times for calibration in chronopotentiometry. *Electroanalysis*, **8**, 927–931.

Pison, G., S. V. Aelst, and G. Willems, 2002: Small sample corrections for LTS and MCD. *Metrika*, **55** (1–2), 111–123.

Rousseeuw, P., 1984: Least median of squares regression. *J. Amer. Stat. Assoc.*, **79**, 871–880.

Rousseeuw, P., and A. Leroy, 1987: *Robust Regression and Outlier Detection*. John Wiley and Sons, 329 pp.

Rousseeuw, P., and B. van Zomeren, 1990: Unmasking multivariate outliers and leverage points. *J. Amer. Stat. Assoc.*, **85**, 633–639.

Rousseeuw, P., and M. Hubert, 1997: Recent developments in PROGRESS. *L _{1}-Statistical Procedures and Related Topics*, Y. Dodge, Ed., Lecture Notes–Monograph Series, Vol. 31, Institute of Mathematical Statistics, 201–214.

Rousseeuw, P., and K. van Driessen, 2006: Computing LTS regression for large data sets. *Data Min. Knowl. Discovery*, **12**, 29–45.

Sachs, L., 1984: *Angewandte Statistik (Applied Statistics)*. Springer, 552 pp.

Schär, C., and G. Jendritzky, 2004: Hot news from summer 2003. *Nature*, **432**, 559–560.

Schär, C., P. L. Vidale, D. Lüthi, C. Frei, C. Häberli, M. A. Liniger, and C. Appenzeller, 2004: The role of increasing temperature variability in European summer heatwaves. *Nature*, **427**, 332–336.

Sen, P. K., 1968: Estimates of the regression coefficients based on Kendall's tau. *J. Amer. Stat. Assoc.*, **63**, 1379–1389.

Seneviratne, S., D. Lüthi, M. Litschi, and C. Schär, 2006: Land–atmosphere coupling and climate change in Europe. *Nature*, **443**, 205–209.

Solomon, S., D. Qin, M. Manning, M. Marquis, K. Averyt, M. M. B. Tignor, H. L. Miller Jr., and Z. Chen, Eds., 2007: *Climate Change 2007: The Physical Science Basis*. Cambridge University Press, 996 pp.

Trömel, S., and C. Schönwiese, 2005: A generalized method of time series decomposition into significant components including probability assessments of extreme events and application to observational German precipitation data. *Meteor. Z.*, **14**, 417–427.

Verboven, S., and M. Hubert, 2005: LIBRA: A MATLAB library for robust analysis. *Chemometr. Intell. Lab.*, **75**, 127–136.

von Storch, H., and F. Zwiers, 1999: *Statistical Analysis in Climate Research*. Cambridge University Press, 484 pp.

Wilcox, R. R., 1998: A note on the Theil-Sen regression estimator when the regressor is random and the error term is heteroscedastic. *Biom. J.*, **40** (3), 261–268.

Willems, G., and S. van Aelst, 2005: Fast and robust bootstrap for LTS. *Comput. Stat. Data Anal.*, **48**, 703–715.

Zerefos, C., K. Eleftheratos, D. Balis, P. Zanis, G. Tselioudis, and C. Meleti, 2003: Evidence of impact of aviation on cirrus cloud formation. *Atmos. Chem. Phys.*, **3**, 1633–1644.

## APPENDIX A

### Construction of Confidence Intervals

#### LS estimator

Confidence intervals for the *j*th estimate *θ̂* _{j} (1 ≤ *j* ≤ *p*) may be constructed by calculating the scale estimate *σ̂*^{2} (estimated variance) from the residuals **r**:

*σ̂*^{2} = [1/(*n* − *p*)] Σ _{i=1}^{n} *r _{i}*^{2}.  (A1)

With the variance–covariance matrix *σ̂*^{2}(𝗫^{T}𝗫)^{−1}, the (1 − *α*) × 100% confidence bounds for the estimate *θ̂* _{j} are (e.g., Rousseeuw and Leroy 1987)

*θ̂* _{j} ± *t* _{1−α/2,n−p}[*σ̂*^{2}(𝗫^{T}𝗫)^{−1}] _{jj}^{1/2},  (A2)

where *t* _{1−α/2,n−p} denotes the 1 − *α*/2 quantile of a Student's distribution with *n* − *p* degrees of freedom and the probability of error *α*. The subscript *jj* denotes the *j*th diagonal element of the variance–covariance matrix. Note that the construction of the confidence intervals involves the assumption of independent and normally distributed error terms.
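The two steps above, the residual-based variance estimate and the *t*-quantile bounds, can be sketched for the slope of a simple linear trend. This Python illustration uses synthetic data and takes the critical *t* value from `scipy.stats`:

```python
import numpy as np
from scipy import stats

def ls_slope_ci(x, y, alpha=0.05):
    """(1 - alpha) confidence interval for the LS slope: the residual
    variance with n - p degrees of freedom, then symmetric t-quantile
    bounds around the slope estimate."""
    n, p = len(x), 2
    X = np.column_stack([np.ones_like(x), x])
    theta = np.linalg.solve(X.T @ X, X.T @ y)
    r = y - X @ theta
    sigma2 = (r @ r) / (n - p)                 # estimated error variance
    cov = sigma2 * np.linalg.inv(X.T @ X)      # variance-covariance matrix
    half = stats.t.ppf(1.0 - alpha / 2.0, n - p) * np.sqrt(cov[1, 1])
    return theta[1] - half, theta[1] + half    # bounds for j = slope

rng = np.random.default_rng(1)
x = np.arange(100.0)
y = 0.01 * x + rng.normal(0.0, 0.5, size=100)
lo, hi = ls_slope_ci(x, y)  # slope bounds, here in units per time step
```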

#### LMS estimator

Given the significance level *α*, one may compute the (1 − *α*) × 100% confidence intervals for the *j*th LMS estimates *θ̂* _{j,LMS} (1 ≤ *j* ≤ *p*) in a manner similar to (A2):

*θ̂* _{j,LMS} ± *t* _{1−α/2,n−p}[*V*(LMS, *F*)*σ̂* _{LMS}^{2}(𝗫^{T}𝗫)^{−1}] _{jj}^{1/2},  (A3)

with the residuals *r _{i}* (1 ≤ *i* ≤ *n*) obtained from the LMS regression (Rousseeuw and Leroy 1987; Rousseeuw and van Zomeren 1990); *V*(LMS, *F*) denotes the asymptotic variance, which depends on the chosen estimator and the error statistics. Here, the estimator is the LMS and *F* denotes the true cumulative distribution function (cdf) of the error with *f* being the corresponding probability density function (pdf). If one assumes, for example, a normally distributed error, then *F* is the normal cumulative distribution Φ and *f* is the normal probability density function *ϕ*. In this case and for *n* = 100 the asymptotic variance of the LMS is *V*(LMS, Φ) = 17.74 (Rousseeuw and Leroy 1987, p. 191).

First, a preliminary scale estimate *σ*\* is computed:

*σ*\* = 1.4826[1 + 5/(*n* − *p*)](med _{i} *r _{i}*^{2})^{1/2},  (A4)

where the factor 1.4826 makes *σ*\* consistent with the Gaussian model (Rousseeuw and Hubert 1997). Furthermore, the scale estimate *σ*\* is used to compute the standardized residuals *r _{i}*/*σ*\* and to assign weights to the *i*th observations such that *w _{i}* = 1 if |*r _{i}*/*σ*\*| ≤ 1.96 and *w _{i}* = 0 otherwise. Note that the choice of the limit 1.96 is arbitrary and corresponds to the 97.5th percentile of a normal distribution with mean *μ* = 0 and variance *σ*^{2} = 1. Thus, we might expect (assuming a normal distribution) that 95% of the standardized residuals are contained within the interval [−1.96, 1.96]. The robust LMS scale estimate *σ̂* _{LMS} is then computed such that

*σ̂* _{LMS} = [Σ _{i=1}^{n} *w _{i}r _{i}*^{2}/(Σ _{i=1}^{n} *w _{i}* − *p*)]^{1/2}.  (A5)
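The two-step scale computation, a preliminary estimate from the median squared residual followed by a reweighted estimate, can be sketched as follows. This Python illustration applies the steps to a synthetic residual vector, with *p* = 2 for the simple linear model:

```python
import numpy as np

def lms_scale(r, p=2):
    """Robust scale from LMS residuals r: a preliminary estimate from
    the median squared residual, then a reweighted estimate that
    discards points with |r_i / s_star| > 1.96."""
    n = len(r)
    s_star = 1.4826 * (1.0 + 5.0 / (n - p)) * np.sqrt(np.median(r ** 2))
    w = (np.abs(r / s_star) <= 1.96).astype(float)       # weights w_i
    s_lms = np.sqrt(np.sum(w * r ** 2) / (np.sum(w) - p))
    return s_lms, w

# Unit-variance noise with five gross outliers: the outliers get
# weight 0 and the scale estimate stays close to 1.
rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, size=200)
r[:5] += 15.0
s, w = lms_scale(r)
```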

#### LTS estimator

Given the significance level *α*, the (1 − *α*) × 100% confidence intervals for the *j*th LTS estimates *θ̂* _{j,LTS} (1 ≤ *j* ≤ *p*) are computed such that

*θ̂* _{j,LTS} ± *t* _{1−α/2,n−p}[*V*(LTS, *F*)*σ̂* _{LTS}^{2}(𝗫^{T}𝗫)^{−1}] _{jj}^{1/2},  (A6)

where *V*(LTS, *F*) is the asymptotic variance of the LTS estimator with a cdf of the error according to *F*. For a normally distributed error, *F* is the normal cdf Φ with corresponding normal pdf *ϕ*. The asymptotic variance *V*[LTS(*β*), Φ] of the LTS estimator with breakdown 0 ≤ *β* ≤ 0.5 is then given by (D. J. Olive 2008, unpublished manuscript, p. 238, available online at http://www.math.siu.edu/olive/ol-bookp.htm)

*V*[LTS(*β*), Φ] = [1 − *β* − 2*c _{β}ϕ*(*c _{β}*)]^{−1}, where *c _{β}* = Φ^{−1}(1 − *β*/2).  (A7)

For *β* = 0, the asymptotic variance *V*[LTS(*β*), Φ] of the LTS estimator is equal to unity and the LTS estimator has the same zero breakdown as the classical LS estimator. For a breakdown value of *β* = 0.5, the asymptotic variance of the LTS estimator is obtained from (A7) to be *V*[LTS(0.5), Φ] = 14.02, which corresponds closely to the value given by Rousseeuw and Leroy (1987, p. 191).
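The asymptotic variance of the LTS at the Gaussian model is easy to evaluate numerically; the following Python sketch reproduces the quoted value *V*[LTS(0.5), Φ] = 14.02, using `scipy.stats.norm` for Φ and *ϕ*:

```python
from scipy.stats import norm

def lts_asymptotic_variance(beta):
    """V[LTS(beta), Phi]: the reciprocal of the second moment of the
    standard normal truncated at +/- c, with c = Phi^{-1}(1 - beta/2).
    Valid for 0 < beta <= 0.5; beta -> 0 recovers the LS value of 1."""
    c = norm.ppf(1.0 - beta / 2.0)
    return 1.0 / ((1.0 - beta) - 2.0 * c * norm.pdf(c))

print(round(lts_asymptotic_variance(0.5), 2))  # 14.02
```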

The robust LTS scale estimate is obtained from a preliminary scale estimate

*σ*\* _{LTS} = *d _{h,n}*[(1/*h*) Σ _{i=1}^{h} (*r*^{2}) _{i:n}]^{1/2},  (A10)

where (*r*^{2}) _{i:n} denote the ordered squared residuals and *d _{h,n}* is a constant that depends only on the sample size *n* and the size of the subset *h* ≤ *n* of the LTS regression and makes the LTS scale estimate consistent with the Gaussian model. For small sample sizes (*n* ≤ 30), the LTS estimator may underestimate the scale of the residuals and thus may flag too many observations as outlying. To overcome this problem a correction factor may be applied to (A10) as proposed by Pison et al. (2002). With the preliminary scale estimate *σ*\* _{LTS}, the robust LTS scale estimate *σ̂* _{LTS} may be computed in a manner similar to the robust LMS scale estimate *σ̂* _{LMS} according to (A5).

An alternative and nonparametric approach for the construction of robust LTS confidence intervals that is independent of the underlying error distribution is given by the robust bootstrap method discussed in Willems and van Aelst (2005).

## APPENDIX B

### Outlier Detection

For detecting statistical outliers we examine the standardized residuals. We plot the standardized residuals *r _{i}*/*σ̂* for the LS (Figs. 2a,b and 4a,b) and the standardized residuals *r _{i}*/*σ*\* for the LMS (Figs. 2c,d and 4c,d) against the explanatory variable as suggested by Draper and Smith (1966). For the standardized LMS residuals, the robust scale estimate *σ*\* is computed as described in appendix A [(A4)]. Observations are classified as outliers if the absolute values of the standardized LMS residuals *r _{i}*/*σ*\* exceed the limit of 1.96, which corresponds to the 97.5th percentile of a normal distribution with mean *μ* = 0 and variance *σ*^{2} = 1. The normal probability plots are shown in Figs. 2e,f and 4e,f. The trend estimates as well as the corresponding statistics for accepting or rejecting the null hypothesis (no trend) are shown in Table 1.

Note that to be significant at the 95% level the absolute value of *t*(slope) has to exceed the critical value of the 97.5th percentile of a Student's distribution with *n* − *p* degrees of freedom, which is roughly *t _{c}*(0.975, *n* − *p*) = 2 in all of the examples. The value of *p*(slope), which is the probability of erroneously rejecting the null hypothesis, has to be lower than *α* = 0.05.
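The outlier rule of this appendix reduces to a one-line test on the standardized residuals; in the following Python sketch, the residual values and the scale of 1.0 are artificial:

```python
import numpy as np

def flag_outliers(r, scale, cutoff=1.96):
    """Flag observations whose standardized residuals |r_i / scale|
    exceed the 97.5th percentile of the standard normal."""
    return np.abs(np.asarray(r) / scale) > cutoff

r = [0.1, -0.4, 2.5, 0.2, -3.1, 0.0]
mask = flag_outliers(r, scale=1.0)
print(mask.tolist())  # [False, False, True, False, True, False]
```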

Table 1. LS trend estimates.

Table 2. LMS and LTS trend estimates. The given *t* and *p* values refer to the LTS slope estimates.