## 1. Introduction

In December 1997, more than 1400 delegates from some 160 nations met in Kyoto, Japan, at the United Nations Framework Convention on Climate Change to discuss the “global-warming problem.” The participation of so many countries and delegates underscores the growing international concern about the possibility of global warming. By now there is a growing consensus that global temperatures have risen systematically over the past 100–150 yr. This consensus has grown in part out of statistical studies of global temperature time series that have indicated positive, systematic, and statistically significant trends in these series; see, for example, Bloomfield (1992), Bloomfield and Nychka (1992), and Zheng and Basher (1999). On the other hand, several authors, including Gordon (1991) and Woodward and Gray (1993, 1995), have pointed out that a statistically significant trend can be spuriously generated when the data have a unit root in the random component. They argue that global temperature data might be well represented by time series models with no trend but with a unit root (or near unit root) in the random component, in which case the recent warming would not be expected to continue systematically into the future.

In this paper we apply a test recently proposed by Vogelsang (1998) that directly controls for the possibility of a spurious trend resulting from a unit root or near–unit root in the data. Although the tests proposed by Vogelsang (1998) were originally designed with economic time series data in mind, they are natural candidates for application to global temperature data. Our empirical results indicate that there is considerable *robust* evidence suggesting that a positive and statistically significant trend is present in global temperature series. Note that we do not attempt to explain the causes of these trends in the temperature series. Our modest goal is simply to help to resolve some of the remaining debate as to whether the recent increase in global temperatures is systematic or purely random.

The basic statistical question as to whether global temperature series have a significant positive trend has nothing to do, per se, with the serial correlation properties of the temperature data. A time series with a positive deterministic trend could have errors (i) without serial correlation, (ii) with mild serial correlation (autoregressive roots not close to one), (iii) with strong serial correlation (at least one autoregressive root close to one), or (iv) with a unit root. While the four cases generate very different looking time series, all of them would be consistent with systematic global warming. Uncertainty regarding the serial correlation structure boils down to a statistical “nuisance parameter” issue that affects inference regarding the deterministic trend parameters. Trend tests constructed for case (i) are invalid for cases (ii)–(iv). Trend tests constructed under the assumption of stationary errors remain invalid for case (iv) but are valid (in large samples) for cases (i)–(iii). Unfortunately, in finite samples, such tests may give spurious evidence in case (iii). Trend tests designed explicitly for case (iv) usually do not give spurious evidence in cases (i)–(iii), but they often suffer from low power. On the other hand, the trend test used in this note is valid for all four cases and is therefore very robust.
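As a purely illustrative aside (not part of the original note), the four cases can be mimicked by simulating the same linear trend with AR(1) errors at different values of the autoregressive parameter; all four simulated series are consistent with systematic warming even though they look very different. A minimal sketch in Python (assuming NumPy; all names are ours):

```python
import numpy as np

def trend_series(T=129, beta2=0.005, alpha=0.0, seed=0):
    """y_t = beta2 * t + u_t with AR(1) errors u_t = alpha * u_{t-1} + e_t,
    u_0 = 0; alpha = 1 gives a unit-root (random walk) error component."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(T)
    u = np.zeros(T)
    u[0] = e[0]
    for t in range(1, T):
        u[t] = alpha * u[t - 1] + e[t]
    return beta2 * np.arange(1, T + 1) + u

# Cases (i)-(iv): no, mild, and strong serial correlation, then a unit root.
cases = {"(i)": 0.0, "(ii)": 0.4, "(iii)": 0.95, "(iv)": 1.0}
series = {name: trend_series(alpha=a) for name, a in cases.items()}
```

Each series carries the identical deterministic trend; only the error persistence differs, which is exactly the nuisance-parameter problem discussed above.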

Some authors have argued that global temperature data may exhibit long memory and that fractionally integrated errors are appropriate. These include, for example, Bloomfield (1992) and Bloomfield and Nychka (1992). Trend tests designed for case (ii) can also give spurious results when errors have long memory. Although not designed explicitly for long-memory errors, the trend test used here remains fairly robust to long-memory errors.

The remainder of the note is organized as follows. In the next section we describe the basic statistical model and review well-known inference methods. A small Monte Carlo experiment is used to illustrate some of the potential pitfalls of the standard tests. In section 3 we introduce and briefly discuss the robust trend test proposed by Vogelsang (1998). Some additional Monte Carlo results are provided to illustrate the robustness properties. In section 4 we describe the global temperature data used in the note and report the empirical results. Section 5 gives some concluding remarks.

## 2. The statistical model and inference

Consider the simple linear trend model

$$y_t = \beta_1 + \beta_2 t + u_t, \qquad t = 1, 2, \ldots, T. \tag{2.1}$$

In this model the deterministic linear trend *β*_{1} + *β*_{2}*t* approximates the systematic component of global temperatures, whereas the random error term *u*_{t} approximates the natural variation in temperatures over time. Statistical evidence of global warming is indicated by a positive value of *β*_{2}, and *β*_{2} measures the average change in temperatures per year (assuming annual data). We take the null hypothesis to be no global warming and the alternative hypothesis to be global warming, which can be written as

$$H_0\colon \beta_2 \le 0, \qquad H_1\colon \beta_2 > 0. \tag{2.2}$$

Clearly, aggregate temperature data are serially correlated over time. Therefore, it must be assumed that *u*_{t} has serial correlation. Note, however, that the serial correlation structure of *u*_{t} is not directly relevant to the null hypothesis being tested. Whether *u*_{t} follows an autoregressive moving-average (ARMA) process with or without an autoregressive root close to one, or, alternatively, a long-memory process, the null and alternative hypotheses of interest remain Eq. (2.2). Where the serial correlation structure of *u*_{t} matters is in the construction of test statistics for *H*_{0}. This is true because the sampling distributions of typical estimators of *β*_{2} depend on the serial correlation structure of *u*_{t}. See Bloomfield and Nychka (1992) for examples.

Suppose that *u*_{t} is a zero-mean second-order stationary time series with autocovariance function *γ*_{j} = cov(*u*_{t}, *u*_{t−j}). In this case, it follows from the classic results of Grenander and Rosenblatt (1957) that the ordinary least squares (OLS) estimates of *β*_{1} and *β*_{2} based on Eq. (2.1) are asymptotically equivalent to the generalized least squares (GLS) estimates assuming the form of the serial correlation is known.

Suppose, for example, that *u*_{t} follows the ARMA model *A*(*L*)*u*_{t} = *B*(*L*)*ξ*_{t}, where *ξ*_{t} is an independent and identically distributed (iid) error process with mean zero and variance *σ*^{2}_{ξ}, *L* is the lag operator, *Lu*_{t} = *u*_{t−1}, and *A*(*L*) = 1 − *a*_{1}*L* − *a*_{2}*L*^{2} − · · · − *a*_{p}*L*^{p}, *B*(*L*) = 1 + *b*_{1}*L* + *b*_{2}*L*^{2} + · · · + *b*_{q}*L*^{q}. Provided that *B*(*L*)^{−1} exists, the GLS estimate of *β*_{2} would be obtained from nonlinear least squares estimation of the regression

$$B(L)^{-1}A(L)\,y_t = B(L)^{-1}A(L)\,(\beta_1 + \beta_2 t) + \xi_t. \tag{2.3}$$

If, in addition, it is reasonable to assume that *ξ*_{t} is normally distributed, then the GLS estimate of *β*_{2} from Eq. (2.3) is the maximum likelihood estimator (MLE). The advantage of OLS over GLS or MLE is that OLS does not require specification of the lag polynomials *A*(*L*) and *B*(*L*). Because of the large-sample equivalence of OLS and GLS/MLE, there is no loss in efficiency if the OLS estimate of *β*_{2} is used for testing Eq. (2.2).

The OLS estimate of *β*_{2} based on Eq. (2.1) is given by the usual formula

$$\hat\beta_2 = \frac{\sum_{t=1}^{T}(t - \bar t)\,y_t}{\sum_{t=1}^{T}(t - \bar t)^2},$$

where $\bar t = T^{-1}\sum_{t=1}^{T} t$. It is well known from Grenander and Rosenblatt (1957) that in large samples

$$\operatorname{var}(\hat\beta_2) \approx \sigma^2\Bigl[\sum_{t=1}^{T}(t - \bar t)^2\Bigr]^{-1},$$

where $\sigma^2 = \gamma_0 + 2\sum_{j=1}^{\infty}\gamma_j < \infty$. As long as an estimate of *σ*^{2} can be obtained, asymptotically valid standard errors can be computed for *β̂*_{2} and a *t* statistic can be constructed.

Because *σ*^{2} is proportional to the spectral density of *u*_{t} at frequency zero, there are many possible estimators of *σ*^{2} to choose from. See Bloomfield and Nychka (1992) or Woodward and Gray (1993) for examples used in the global-warming literature. Here we consider the class of nonparametric estimators (see Priestley 1981) of *σ*^{2} given by

$$\hat\sigma^2 = \sum_{j=-M}^{M} k\Bigl(\frac{j}{M}\Bigr)\hat\gamma_j, \qquad \hat\gamma_j = T^{-1}\sum_{t=|j|+1}^{T}\hat u_t\,\hat u_{t-|j|},$$

where *û*_{t} are the OLS residuals from Eq. (2.1), *k*(*x*) is a kernel or weighting function, and *M* is the truncation lag or bandwidth. In what follows, we use the Bartlett kernel defined as *k*(*x*) = 1 − |*x*| for |*x*| ≤ 1 and *k*(*x*) = 0 otherwise. The truncation lag *M* is chosen using the data-dependent autoregressive lag-one [AR(1)] plug-in formula given by Andrews (1991). Using *σ̂*^{2}, a standard error for *β̂*_{2} can be computed using the formula

$$\operatorname{s.e.}(\hat\beta_2) = \Bigl(\hat\sigma^2\Bigl[\sum_{t=1}^{T}(t - \bar t)^2\Bigr]^{-1}\Bigr)^{1/2},$$

which leads to the *t* statistic *t*_{HAC} = *β̂*_{2}/s.e.(*β̂*_{2}). The HAC label refers to the fact that this heteroscedasticity and autocorrelation consistent *t* statistic is valid for errors with conditional heteroscedasticity and autocorrelation of unknown form. Under standard regularity conditions, including stationarity of *u*_{t}, it follows that *t*_{HAC} converges in distribution to *N*(0, 1). The null hypothesis given by Eq. (2.2) can be tested in the usual way using *t*_{HAC}.
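To make the construction concrete, the following sketch (our own code, in Python with NumPy; for simplicity the bandwidth *M* is supplied by hand rather than chosen by the Andrews (1991) plug-in rule) computes a Bartlett-kernel HAC trend *t* statistic of this form:

```python
import numpy as np

def t_hac(y, M):
    """HAC trend t statistic: OLS slope of y on a linear trend, with a
    Bartlett-kernel estimate of the long-run error variance."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    t_idx = np.arange(1, T + 1, dtype=float)
    tc = t_idx - t_idx.mean()              # t - tbar
    b2 = (tc @ y) / (tc @ tc)              # OLS slope estimate
    u = y - y.mean() - b2 * tc             # OLS residuals
    # sigma^2-hat = gamma_0 + 2 * sum_j (1 - j/M) * gamma_j  (Bartlett weights)
    s2 = u @ u / T
    for j in range(1, M + 1):
        s2 += 2.0 * (1.0 - j / M) * (u[j:] @ u[:-j]) / T
    se = np.sqrt(s2 / (tc @ tc))           # standard error of the slope
    return b2 / se
```

For stationary errors the statistic is compared with standard normal critical values (1.645 for a one-tailed 5% test).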

As long as the serial correlation in the model is not too strong, statistics like *t*_{HAC} perform well in practice. However, as pointed out by Woodward and Gray (1993), Vogelsang (1998), and others, statistics like *t*_{HAC} can perform poorly in practice when the serial correlation in the errors is strong. More specifically, if the errors can be modeled as an ARMA process, then under the null hypothesis *t*_{HAC} will tend to overreject when one of the autoregressive roots of *u*_{t} is close to or equal to 1. In fact, the distribution theory underpinning the *t*_{HAC} statistic breaks down when there is a unit root in the errors.

The distribution theory for *t*_{HAC} also breaks down if the errors have long memory. Consider the model

$$(1 - L)^d u_t = \upsilon_t, \tag{2.4}$$

where *υ*_{t} is a stationary process with finite spectral density at frequency zero. The process given by Eq. (2.4) is called a fractionally integrated process and is often labeled an *I*(*d*) process. When 0 < *d* < 0.5, *u*_{t} is stationary but has long memory because the autocovariances *γ*_{j} decay slowly as *j* increases. Because of this slow decay, *u*_{t} has an unbounded spectral density at frequency zero. It is for this reason that the distribution theory for *t*_{HAC} breaks down when the errors have long memory. Of course, one could estimate the trend model under the assumption of long-memory errors and construct appropriate tests (see Bloomfield and Nychka 1992). We do not follow this approach in this paper given that the focus is on robust tests that do not require a fully parametric specification of the error model.

To illustrate these potential pitfalls, we conducted a small Monte Carlo simulation in which data were generated according to Eq. (2.1) with the errors *u*_{t} generated according to the two simple processes

$$u_t = \alpha u_{t-1} + e_t, \tag{2.5}$$

$$(1 - L)^d u_t = e_t, \tag{2.6}$$

where *u*_{0} = 0, and *e*_{t} is iid *N*(0, 1). Equation (2.5) is an AR(1) error model whereas Eq. (2.6) is a simple *I*(*d*) error model. The errors are stationary for |*α*| < 1 and *d* < 0.5, respectively.

Data were generated under the null hypothesis with *β*_{1} = *β*_{2} = 0. We can set *β*_{1} = 0 because *t*_{HAC} does not depend on the true value of *β*_{1} (*t*_{HAC} is exactly invariant to *β*_{1}). We used a sample size of *T* = 129 that is similar to the length of annual global temperature series (see section 4). We report results for *α* = 0.0, 0.2, 0.4, 0.6, 0.8, 0.9, 0.95, 1.0, and *d* = 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.8, 1.0. We used 10 000 replications and computed empirical rejection probabilities at the 5% nominal level (for a one-tailed test) using the standard normal critical value. The results are tabulated in Table 1. For comparison we also report results for a *t* statistic based on the MLE of *β*_{2} assuming the errors follow Eq. (2.5). We denote this statistic by *t*_{AR(1)}. The MLE was obtained using nonlinear least squares estimation of the regression

$$y_t = \beta_1(1 - \alpha) + \beta_2\alpha + \beta_2(1 - \alpha)t + \alpha y_{t-1} + e_t.$$

The table clearly illustrates that for small values of *α,* the *t*_{HAC} and *t*_{AR(1)} statistics perform reasonably well with rejection probabilities close to 0.05. However, as *α* increases and approaches 1, rejection probabilities increase to well above 0.05. In a similar fashion, if the errors are *I*(*d*) and *d* is not close to zero, both tests tend to overreject. The overrejection problem becomes more pronounced as *d* increases. If the serial correlation in the global temperature data is strong (*α* near 1 or *d* > 0), then it is theoretically possible that one could obtain spurious evidence of a significant positive trend.
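The AR(1) half of this experiment is easy to reproduce in miniature. The sketch below (our own code, not the authors'; it uses far fewer replications than the 10 000 behind Table 1, and a fixed Bartlett bandwidth instead of the Andrews plug-in) generates data under the null *β*_{2} = 0 and counts one-tailed 5% rejections by the HAC *t* statistic:

```python
import numpy as np

def t_hac(y, M=5):
    """HAC trend t statistic (Bartlett kernel), as in section 2."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    tc = np.arange(1, T + 1, dtype=float)
    tc -= tc.mean()
    b2 = (tc @ y) / (tc @ tc)
    u = y - y.mean() - b2 * tc
    s2 = u @ u / T
    for j in range(1, M + 1):
        s2 += 2.0 * (1.0 - j / M) * (u[j:] @ u[:-j]) / T
    return b2 / np.sqrt(s2 / (tc @ tc))

def rejection_rate(alpha, T=129, reps=500, crit=1.645, seed=0):
    """Fraction of one-tailed 5% rejections when the true slope is zero."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        e = rng.standard_normal(T)
        u = np.empty(T)
        u[0] = e[0]
        for t in range(1, T):
            u[t] = alpha * u[t - 1] + e[t]   # AR(1) errors, Eq. (2.5)
        hits += t_hac(u) > crit
    return hits / reps

# Rejection rates climb far above the nominal 0.05 as alpha approaches 1.
rates = {a: rejection_rate(a) for a in (0.0, 0.6, 0.95)}
```

This reproduces the qualitative pattern of Table 1: near-nominal size for small *α*, severe overrejection near the unit root.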

It is important to note that the problem here is not isolated to the case where *α* is exactly 1 or *d* ≥ 0.5. If problems only occurred in the nonstationary cases, they could be deemed irrelevant for temperature data since nonstationarity can be ruled out on scientific grounds. However, the spurious rejection problem is still relevant for temperature data because it can occur for values of *α* near 0.8 or 0.9 or 0.2 ≤ *d* ≤ 0.4, which are clearly in the stationary region.

So, while there is considerable empirical evidence that suggests global temperature series have a significant positive trend, skeptics can point to the overrejection problem caused by strong serial correlation to cast some doubt. In the next section, we introduce a test statistic that does not have the overrejection problem in the presence of strong serial correlation or even a unit root in the data. This test also remains fairly robust to *I*(*d*) errors.

## 3. A robust trend test

In this section we review the robust trend test statistic, labeled *t* − PS_{T}, recently proposed by Vogelsang (1998). The statistic is defined shortly. Like the *t*_{HAC} statistic and similar statistics, *t* − PS_{T} is robust to the form of serial correlation in *u*_{t}. Unlike *t*_{HAC}, *t* − PS_{T} does not overreject when the serial correlation is strong or if there is a unit root in the errors. In addition, the *t* − PS_{T} statistic has the very practical and useful property that it does not require an estimate of *σ*^{2}. Thus, the subjective choices such as weighting kernel and truncation lag do not have to be made by the empirical researcher.

The *t* − PS_{T} test statistic is based on the following regression that is obtained by computing partial sums of the original data, *y*_{t}:

$$z_t = \beta_1 t + \beta_2\,\frac{t(t+1)}{2} + S_t, \tag{3.1}$$

where $z_t = \sum_{j=1}^{t} y_j$, $S_t = \sum_{j=1}^{t} u_j$, and the regressors are obtained from the formulas $\sum_{j=1}^{t} 1 = t$ and $\sum_{j=1}^{t} j = t(t+1)/2$.

Let *β̃*_{2} denote the OLS estimate of *β*_{2} obtained from Eq. (3.1). Because *S*_{t} has a unit root by construction, *β̃*_{2} has a larger sampling variability than *β̂*_{2} since the Grenander–Rosenblatt result does not apply to *β̃*_{2}. The advantage of *β̃*_{2} over *β̂*_{2} for testing is that more robust tests can be constructed using *β̃*_{2}. Therefore, efficiency is sacrificed for robustness.

Let *b* be a nonrandom number. The choice of *b* is discussed below. The *t* − PS_{T} test is defined as

$$t\text{-}\mathrm{PS}_T = t^{*}_{z}\exp(-bJ_T),$$

where $t^{*}_{z} = T^{-1/2}\,t_z$, with *t*_{z} being the standard OLS *t* statistic for testing *β*_{2} = 0 in regression Eq. (3.1) (i.e., *t*_{z} is the *t* statistic that would be automatically computed by standard regression packages under the implicit assumption of iid errors), and

$$J_T = \frac{\mathrm{RSS}_y - \mathrm{RSS}_J}{\mathrm{RSS}_J},$$

where RSS_{y} denotes the OLS residual sum of squares from regression (2.1), and RSS_{J} denotes the OLS residual sum of squares from the regression

$$y_t = \beta_1 + \beta_2 t + \beta_3 t^2 + \cdots + \beta_{10} t^9 + u_t. \tag{3.2}$$

Note that [(*T* − 10)/8]*J*_{T} is the *F* test for testing the hypothesis *β*_{3} = *β*_{4} = · · · = *β*_{10} = 0 in regression Eq. (3.2). Vogelsang (1998) recommended that the polynomial order be 9 in Eq. (3.2) because the power of the *t* − PS_{T} test is an increasing function of the polynomial order, but the increase in power is negligible for polynomial orders greater than 9.

When computing the *t* − PS_{T} test, the value used for *b* depends on the significance level. Vogelsang (1998) showed that, for a given significance level, *b* can be found such that the asymptotic critical value of *t* − PS_{T} is the same when the errors are stationary and when the errors have a unit root. This property ensures that *t* − PS_{T} will not tend to overreject when the errors have strong serial correlation. Vogelsang (1998) derived the asymptotic distribution of *t* − PS_{T} and computed asymptotic critical values and the corresponding *b* values using Monte Carlo simulation methods. For percentage points 1%, 2.5%, 5%, and 10% the asymptotic critical values and *b*s (in the parentheses) are 2.647 (1.501), 2.152 (0.995), 1.720 (0.716), and 1.331 (0.494).
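For readers who want to compute the statistic, the following sketch (our own Python/NumPy code, not from Vogelsang (1998)) assembles *t* − PS_{T} from the partial-sums regression Eq. (3.1) and the *J*_{T} statistic, using the 5%-level pair *b* = 0.716 and critical value 1.720 quoted above:

```python
import numpy as np

def _rss(X, y):
    """OLS residual sum of squares from regressing y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

def t_ps(y, b=0.716):
    """t-PS_T = t*_z * exp(-b * J_T); compare with the critical value
    matched to b (1.720 for b = 0.716, a one-tailed 5% test)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    t_idx = np.arange(1, T + 1, dtype=float)

    # Partial-sums regression (3.1): z_t on t and t(t+1)/2.
    z = np.cumsum(y)
    X = np.column_stack([t_idx, t_idx * (t_idx + 1) / 2.0])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    s2 = resid @ resid / (T - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    t_z = beta[1] / np.sqrt(cov[1, 1])     # iid-based OLS t statistic
    t_star = t_z / np.sqrt(T)              # t*_z

    # J_T from regressions (2.1) and (3.2); powers of t/T are used for
    # numerical stability (rescaling regressors leaves each RSS unchanged).
    rss_y = _rss(np.column_stack([np.ones(T), t_idx / T]), y)
    rss_j = _rss(np.column_stack([(t_idx / T) ** k for k in range(10)]), y)
    J = (rss_y - rss_j) / rss_j
    return t_star * np.exp(-b * J)
```

Tests at the 2.5% or 1% level use the same function with the corresponding *b* from the list above and the matching critical value.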

Vogelsang (1998) provided extensive theoretical and simulation evidence showing that the *t* − PS_{T} statistic does not suffer from overrejection problems as the serial correlation in the errors becomes strong. He also showed that *t* − PS_{T} has good power. To illustrate this robustness of *t* − PS_{T}, empirical rejection probabilities for *t* − PS_{T} were computed in the same simulations discussed above in section 2. Rejections were calculated using the asymptotic 5% critical value of 1.720 and *b* = 0.716. We also report rejection probabilities for the *t*^{*}_{z} statistic, that is, the *t* − PS_{T} statistic implemented using *b* = 0.

These results are reported in Table 1. In the case of AR(1) errors, empirical rejection probabilities for *t* − PS_{T} are always near or below 0.05 even when *α* is close to or equal to 1. In fact, *t* − PS_{T} tends to be conservative when *α* is close to 1. On the other hand, the *t*^{*}_{z} statistic overrejects substantially when *α* is close to 1. This illustrates the importance of the exp(−*bJ*_{T}) component for the performance of the *t* − PS_{T} statistic. If the errors are fractionally integrated, *t* − PS_{T} does tend to overreject somewhat. However, the tendency to overreject is much less severe than for the *t*_{HAC}, *t*_{AR(1)}, and *t*^{*}_{z} statistics. Thus, if the *t* − PS_{T} statistic indicates that a global temperature series has a positive and significant trend, then this is more robust evidence than has been previously obtained in the climate literature.

Because the *t* − PS_{T} test is relatively new and not widely known, some readers may benefit from the following brief discussion of some of the properties of the test and the rationale for its components. Vogelsang (1998) should be consulted for additional technical details. Readers not interested in this discussion can jump to section 4 for the empirical results.

The first component of *t* − PS_{T}, the *t*^{*}_{z} statistic, is a trend *t* statistic constructed from the partial-sums regression Eq. (3.1). When the errors are stationary, so that *σ*^{2} < ∞, *t*^{*}_{z} has an asymptotic distribution that is free of serial correlation nuisance parameters; in particular, it does not depend on *σ*^{2}. Unlike the *t*_{HAC} statistic, therefore, *t*^{*}_{z} does not require an estimate of *σ*^{2}. However, as illustrated above, *t*^{*}_{z} overrejects when the errors have a unit root, and this overrejection problem is corrected by the multiplicative factor exp(−*bJ*_{T}).

The *J*_{T} statistic was proposed by Park and Choi (1988) and Park (1990) for testing the null hypothesis that the errors in Eq. (2.1) have an autoregressive unit root. The *J*_{T} test is a left-tailed test where a unit root in the errors is rejected for small values of *J*_{T}. Park and Choi (1988) showed that, when the errors have a unit root, *J*_{T} has a well-defined asymptotic distribution free of nuisance parameters that is similar to a chi-square distribution. They also showed that, when the errors are stationary, *J*_{T} converges to zero, in which case exp(−*bJ*_{T}) converges to 1. Thus, *t* − PS_{T} and *t*^{*}_{z} are asymptotically equivalent when the errors are stationary.

When the errors have a unit root, *J*_{T} takes on large positive values and thus exp(−*bJ*_{T}) takes on small positive values provided *b* > 0. Therefore, exp(−*bJ*_{T}) can be used to shrink the distribution of *t* − PS_{T} when the errors have a unit root. There does not exist a value of *b* so that the asymptotic distribution of *t* − PS_{T} is the same for both stationary errors and unit root errors. However, for a given percentage point, it is possible to compute the value of *b* such that the asymptotic critical values of *t* − PS_{T} are the same for both stationary and unit root errors. Because the critical value is the same whether the errors are stationary or have a unit root, the overrejection problem does not occur.

## 4. Analysis of the global warming data

In this section we test the hypotheses in Eq. (2.2) for six annual global temperature series. The starting and ending dates of the series vary but all series start in the late 1800s and end in the late 1900s. All series are measured in degrees Celsius relative to a reference period average. For each series the labels, spans, and sources are as follows:

- JWB: 1854–1993 (relative to 1950–79 average), Jones et al. (1994);
- JWBE: The JWB series adjusted for the influence of El Niño–Southern Oscillation events as reported by Jones et al. (1994);
- WH: 1880–1993 (relative to 1951–80 average), Wilson and Hansen (1994);
- VGL: 1881–1993 (relative to 1951–75 average), Vinnikov et al. (1994);
- JP: 1860–2000 (relative to 1961–90 average), Jones et al. (1999) and Parker et al. (1995), downloaded from the Web site online at www.meto.govt.uk/research/hadleycentre/obsdata/ (HadCRUT series);
- TaveGL: 1856–2000 (relative to 1961–90 average), downloaded from the Web site online at www.cru.uea.ac.uk/ftpdata/tavegl.dat maintained by P. Jones.

The six series are plotted in Fig. 1 along with fitted trend lines obtained from the OLS estimates of Eq. (2.1). OLS estimates of *β*_{2} obtained from Eqs. (2.1) and (3.1) along with the *t*_{HAC} and *t* − PS_{T} statistics are reported in Table 2. There are potentially three entries for the *t* − PS_{T} statistic corresponding to *b* values appropriate for tests at the 5%, 2.5%, and 1% levels of significance. If the *t* − PS_{T} statistic is significant at some significance level (e.g., 2.5%), then it is significant at higher significance levels (e.g., 5%) and is not reported for those higher significance levels. Because it is sometimes argued that most of the global warming has occurred in the twentieth century, we also report results using 1900 as the starting date for each series.

Based on Eq. (2.1), point estimates of the trend slopes range from 0.004155 to 0.006497 confirming the conventional wisdom that global temperatures have increased at roughly the rate of 0.005°C yr^{−1}. Point estimates using twentieth-century data are larger in four cases but are smaller for the WH and VGL series. All of the point estimates are statistically greater than zero according to the *t*_{HAC} statistic. This kind of finding has been consistently obtained in past studies. A skeptic might not be fully convinced by these results for *t*_{HAC} given the possibility of overrejection if the errors have strong serial correlation. However, the *t* − PS_{T} test confirms the statistical significance of the trend estimates. In all but one case, the null hypothesis of a nonpositive trend can be rejected at the 5% level using *t* − PS_{T}. In a majority of cases, the null hypothesis can be rejected at the 2.5% level. These results strongly suggest that global temperature series have been increasing over time, and the possibility that this conclusion is being spuriously generated by strong serial correlation or a unit root in the data can be effectively ruled out. Last, we also report the *J*_{T} statistic used in constructing the *t* − PS_{T} statistics. In all but one case, the null hypothesis that the errors have a unit root can be rejected at the 5% significance level using the *J*_{T} test.

## 5. Conclusions

In this note we applied a recently proposed serial correlation–robust trend function test to six annual global temperature series. Unlike more conventional tests, this new test is robust to strong serial correlation or a unit root in the data. Using this test, we find strong evidence that typical global temperature series spanning back to the mid-1800s have positive trends that are statistically significant. The robustness of the test effectively rules out the possibility that this significance is spuriously generated by strong serial correlation or a unit root in the data. Therefore, our results confirm and strengthen the growing consensus that average global temperatures are indeed systematically on the rise, as widely believed. The point estimates of the trend slope suggest that temperatures have risen about 0.5°C (1.0°F) 100 yr^{−1}. If the analysis is restricted to twentieth-century data, many of the point estimates are closer to 0.6°C 100 yr^{−1}.

The authors thank an editor and two anonymous referees for constructive comments that led to improvements of the paper.

## REFERENCES

Andrews, D. W. K., 1991: Heteroskedasticity and autocorrelation consistent covariance matrix estimation. *Econometrica,* **59,** 817–858.

Bloomfield, P., 1992: Trends in global temperatures. *Climatic Change,* **21,** 275–287.

Bloomfield, P., and D. Nychka, 1992: Climate spectra and detecting climate change. *Climatic Change,* **21,** 1–16.

Gordon, A. H., 1991: Global warming as a manifestation of a random walk. *J. Climate,* **4,** 589–597.

Grenander, U., and M. Rosenblatt, 1957: *Statistical Analysis of Stationary Time Series.* John Wiley and Sons, 300 pp.

Jones, P. D., T. M. L. Wigley, and K. R. Briffa, 1994: Global and hemispheric temperature anomalies—Land and marine instrumental records. *Trends '93: A Compendium of Data on Global Change,* T. A. Boden et al., Eds., Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, 603–608.

Jones, P. D., M. New, D. E. Parker, S. Martin, and I. G. Rigor, 1999: Surface air temperature and its changes over the past 150 years. *Rev. Geophys.,* **37,** 173–199.

Park, J. Y., 1990: Testing for unit roots and cointegration by variable addition. *Advances in Econometrics: Cointegration, Spurious Regressions and Unit Roots,* T. Fomby and G. Rhodes, Eds., JAI Press, 107–134.

Park, J. Y., and B. Choi, 1988: A new approach to testing for a unit root. Center for Analytic Economics Working Paper #88-23, 40 pp. [Available from Department of Economics, Uris Hall, Cornell University, Ithaca, NY 14853-7601.]

Parker, D. E., C. K. Folland, and M. Jackson, 1995: Marine surface temperature: Observed variations and data requirements. *Climatic Change,* **31,** 559–600.

Priestley, M. B., 1981: *Spectral Analysis and Time Series.* Academic Press, 890 pp.

Vinnikov, K. Y., P. Y. Groisman, and K. M. Lugina, 1994: Global and hemispheric temperature anomalies from instrumental surface air temperature records. *Trends '93: A Compendium of Data on Global Change,* T. A. Boden et al., Eds., Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, 615–627.

Vogelsang, T. J., 1998: Trend function hypothesis testing in the presence of serial correlation. *Econometrica,* **66,** 123–148.

Wilson, H., and J. Hansen, 1994: Global and hemispheric temperature anomalies from instrumental surface air temperature records. *Trends '93: A Compendium of Data on Global Change,* T. A. Boden et al., Eds., Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, 609–614.

Woodward, W. A., and H. L. Gray, 1993: Global warming and the problem of testing for trend in time series data. *J. Climate,* **6,** 953–962.

Woodward, W. A., and H. L. Gray, 1995: Selecting a model for detecting the presence of a trend. *J. Climate,* **8,** 1929–1937.

Zheng, X., and R. E. Basher, 1999: Structural time series models and trend detection in global and regional temperature series. *J. Climate,* **12,** 2347–2358.

Table 1. Empirical null rejection probabilities in finite samples.

Table 2. Empirical results for global temperature series.