1. Introduction
Climate services of different countries provide customers with statistical information about climatic variables (mainly at the surface) that is based on long-term observations at meteorological stations. This statistical information mainly consists of parameters of the statistical distribution of climatic variables. The most important of these parameters are climatic normals, which are considered to be official estimates of the expected values of climatic variables. The importance of normals derives from their use as a major input for an enormous number of critical societal design and planning purposes.
Because of the widespread need for representative normals along with other climate statistics, it is crucial that climate services deliver the best estimates possible. This is universally not the case, however: published estimates of the current climate, that is, of the expected values of climatic variables today, are either unavailable or suboptimal at the time and space scales relevant to the myriad applications for which they are needed. The reason for this is threefold:
The contemporary climate is changing at a pace rapid enough to already have important impacts. Climate statistics, including normals, are nonstationary. In the case of U.S. climate divisions, there are many instances in which linear trend estimates (discussed later) yield changes in seasonal temperature and precipitation normals over the last 30 yr that are between 1 and 3 standard deviations of the residual variability. Examples are presented in Fig. 1—note in particular the January–March (JFM) temperature trends in the western United States and October–December precipitation trends in the south central United States. The existence of these trends is one of two sources [the other is El Niño–Southern Oscillation (ENSO) variability] of virtually all of the skill inherent in official U.S. seasonal forecasts, because these forecasts are referenced to the official 1971–2000 U.S. normals (Livezey and Timofeyeva 2007, manuscript submitted to Bull. Amer. Meteor. Soc.). In fact, it is impossible to exploit optimally the ENSO signal in empirical seasonal prediction without properly accounting for the time dependence of normals (Higgins et al. 2004).
Current physical climate models cannot credibly replicate the statistics of today’s climate at scales needed for practical applications, because they cannot credibly replicate recent past climates at these resolutions. These models seem to reproduce the time evolution of the global mean annual temperature well but often fall far short for seasonal mean temperatures at subcontinental and smaller spatial scales at which the information can be practically applied (Knutson et al. 2006). The situation is worse for replication of the evolving statistics of the precipitation climate. We consequently are not in a position to develop accurate estimates of current normals and other statistics through generation of multiple modeled realizations of the climate. However, dynamical climate models may facilitate the development and testing of competing empirical approaches (see section 4).
Since the early 1990s, little research and development attention has been devoted to finding improved alternatives to existing (and often misapplied) empirical approaches for estimation and extrapolation of normals, which include linear trend fitting and the so-called optimal climate normal (OCN; Huang et al. 1996; Van den Dool 2006) used in seasonal prediction by the U.S. National Weather Service (NWS) of the National Oceanic and Atmospheric Administration (NOAA).
The consensus expectation of the climate community is that the global climate will continue to change, and therefore the fundamental problem emphasized here will not disappear. In the meantime a great deal of research attention and resources are being devoted worldwide to improvement of global climate models, but it will take many years before these models can be leveraged directly for monitoring current climate at time and space scales practical for applications. In contrast, viable alternatives to current empirical techniques do exist for estimation and extrapolation of time-dependent normals and other climate statistics. Therefore, they should be explored and adopted, including for official use to supplant current practices.
The intent of this paper is to highlight the problem of empirical estimation and extrapolation of time-dependent climate statistics, with a particular emphasis on normals, to raise the problem’s profile and encourage increased attention to it in the applied climate community, and to effect changes in official practices. To meet these goals, we will analyze and compare the expected error of four current approaches (one introduced here for the first time) for estimation and extrapolation, through the use of a statistical time series model appropriate for many meteorological time series.
The three current methods are 30-yr normals that are officially recomputed every 10 yr (e.g., for 1961–90, 1971–2000) in the United States by the NOAA National Climatic Data Center (NCDC) and are traditionally available 2–3 yr later (historically in 1963, 1973, . . . , 2003), the above-mentioned OCN, and least squares linear trend fitting. The fourth approach is a modification of least squares linear fitting to model more closely the observed characteristics of the likely underlying cause of rapidly changing normals—namely, global climate change. In the first two of the four techniques, extrapolations are made by assigning the latest computed value to future normals, but in the latter two they are made by extending the linear trend into the future.
In the presence of strong, dominantly linear trends largely attributable to global climate change (like those characterizing North America in the winter and spring), it is intuitive that each successive approach of the four listed above (if appropriately applied) should outperform those preceding it. The analysis here will provide an objective, quantitative basis for this intuition. Problems associated with least squares linear trend fitting and its misapplication will also be discussed. The results here and a few other basic precepts can constitute a starting point for best practices for normals and trends for working climatologists.
Following the comparative analysis, the paper contains a brief discussion of nonlinear and adaptive trend estimation methods. An overview of recent advances in the treatment of two other important nonstationary components in climate statistics, the diurnal and annual cycles, is included in an appendix. The paper concludes with summary remarks and recommendations.
2. Trend-related errors in estimates of climatic normals
a. Thirty-year normals
The traditional approach to climate normals will be evaluated first. A comprehensive historical analysis of the evolution of the definition of climatic normals can be found in Guttman (1989). The normals recommended by the World Meteorological Organization (WMO) are 3-decade averages recomputed every 30 yr (for surface variables only). However, NCDC and many other climatic centers voluntarily recompute them each decade. If this practice continues over the next few years, the current 1971–2000 normals will be replaced by 1981–2010 normals as soon as they are computed and released, likely by 2013.
A 30-yr average was long considered an acceptable trade-off between excessive sampling errors from climatic noise for shorter averages and unacceptably large changes in the climatic normal Y(t) over the averaging period for longer averages. A time average will generally approximate a monotonically changing normal that is best near the midpoint of the averaging interval, with error increasing toward the beginning and end of the interval. However, if the change is slow then it will still constitute a good estimate over the entire span, in this case 30 yr. Here we will quantify the way faster-changing climatic normals compromise the acceptability of the 30-yr average trade-off. In section 2b, the same problem will be addressed for other averaging periods updated annually, that is, moving averages, and the results will be applied to assess the OCN method.
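The lag of a trailing average behind a changing normal can be illustrated with a minimal sketch (the trend value and years below are illustrative assumptions, not values from the tables): under a purely linear trend with no noise, the N-yr mean estimates the normal at the midpoint of the averaging interval and therefore trails the current normal by β(N − 1)/2.

```python
import numpy as np

# Assumption: a purely linear normal Y(t) = beta * t with no climatic noise.
# The trailing N-yr mean then equals Y at the midpoint of the interval,
# so it lags the current normal Y(t0) by beta * (N - 1) / 2.
beta = 0.05          # trend, in standard deviations per year (illustrative)
N = 30               # averaging period, years
t0 = 2000            # last year entering the average

years = np.arange(t0 - N + 1, t0 + 1)
normal = beta * years                 # time-dependent normal
avg = normal.mean()                   # the "30-yr normal"

lag_error = beta * t0 - avg           # how far the average trails Y(t0)
print(lag_error)                      # beta * (N - 1) / 2 = 0.725 here
```

With noise present this systematic lag is joined by a sampling error, which is the trade-off quantified in the next sections.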
There are two major categories of users of the WMO normals. The first category of these users is forecasters, who predict (in some fashion) climate anomalies in the future for time intervals from a few weeks to 1 yr. The predicted climate anomalies must be expressed as anomalies from the official (i.e., past) normals. Because the climate is nonstationary, however, a prediction of the normal is necessary as well and becomes a key part of the forecast and a source of much of its skill (or lack thereof). The other user category needs climatic normals for more distant periods of time (on the order of 10 yr) for planning and design purposes. Consider the case in which all of these consumers use the official normals for the next decade, until new normals can be computed and released.
The error η(N, τ) of WMO normals (N = 30 yr), computed from (4)–(6) for different β and g, is given in Table 1 for τ = 0 and τ = 10 yr. As noted in the introduction, the range of β in Table 1 has been observed for U.S. climate-division seasonal mean temperature and precipitation. Calculations of g for residuals from these estimated trends range from near 0 to greater than 0.5; therefore Table 1 spans real-world scenarios.
Different applications require different accuracy in the trend estimates. In the absence of an econometric approach in which a cost function limits our natural desire to improve the accuracy of information any further, however, we can adopt the minimal requirement that the error should not exceed a traditionally acceptable value that corresponds to standard error δ ≤ 0.5σ. This formal criterion is often used in statistical meteorology (Vinnikov 1970). It corresponds to η ≤ 0.25, which will be referenced throughout subsequent discussions.
Note first in Table 1 that the errors η(g, β, τ) are not noticeably dependent on g, the measure of redness in the residual time series, but depend mainly on the trend β and on τ, where τ is the amount of time after the last year of observations used to compute the normals. The error in “persisting” WMO normals exceeds the acceptable limit for b ≥ 0.3σ (10 yr)−1 for almost all τ [and for τ = 10 yr even for b ≥ 0.2σ (10 yr)−1]. As soon as b ≥ 0.2σ (10 yr)−1 and τ is close to 10 yr, the WMO normals should not be used for computing climatic anomalies. Except for weak underlying trends, the error is already unacceptable by the time the 30-yr normal is released (between τ = 2 and 3 yr).
An attempt to solve this problem motivated scientists at NWS’s Climate Prediction Center (CPC) to further develop and implement the OCN. The OCN was introduced pragmatically and empirically, has never been explained in sufficiently simple terms, and perhaps as a result has not been used much outside of CPC. The error associated with OCN estimation and extrapolation will be evaluated next.
b. Optimal climate normals
The first empirical attempts to find the optimal length of the averaging period for hydrological and meteorological data were by Beaumont (1957) and Enger (1959). As a criterion, they used the variance of the difference between N-yr averages and values of climatic variables 1 yr ahead. Later, Lamb and Changnon (1981) estimated the “best” normals for Illinois observed temperature and precipitation using as a criterion the mean absolute value of the same differences. The CPC criterion (applied to 3-month average surface temperatures and precipitation) is based on the maximum of a correlation-like measure between N-yr averages and values 1 yr ahead over the verification period (Huang et al. 1996). The CPC group showed that their criterion produced practically the same results as those used by Beaumont (1957) and Enger (1959). Simple analysis shows that all of these criteria are based on similar definitions of a measure of error in climatic normals when compared with the time-dependent expected value. In fact, the theory of OCNs can be derived from the same simple model (3)–(5) for the error in climate normals.
Expression (4) for the error in the expected value estimate obtained by averaging observed y(t) for N consecutive years η(N, g, β, τ) is a sum of two components. The first one, ηa(N, g), decreases monotonically with increase in N. This is the expected sampling error from the climatic noise—its decrease with increasing N is what is expected intuitively. The second component, ηb(N, β, τ), increases as N increases if the trend β ≠ 0. It is the expected deviation of the N-yr average from the trend line at the end of the averaging interval and beyond, which must increase with N because the number of years from the midpoint of the interval increases. As a result, the error η(N, τ) has a minimum ηoptimal(N, g, β, τ) at Noptimal(g, β, τ).
For illustration, consider a process with lag-1 correlation g = 0.2 and trend b = 0.05σ yr−1. These parameters could belong to time series of wintertime seasonal mean surface air temperatures for a number of western U.S. climate divisions. Figure 2 shows the dependence on N, the number of years of observations averaged to obtain the estimate of Y(t0), of η(N, g, β, τ) and its components ηa(N, g) and ηb(N, β, τ) for τ = 0. The two components respectively are the sampling error from the climatic noise (decreasing with N) and the error from the diverging trend (increasing with N). In this example, the function has a minimum at N = Noptimal ≈ 11 yr.
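This minimum can be reproduced with a rough numerical sketch. The formulas below are textbook approximations, not the paper's Eqs. (4)–(6): the sampling term uses the large-N variance of the mean of AR(1) noise, and the trend term is the squared lag of the trailing mean behind the trend line at τ = 0.

```python
import numpy as np

# Approximate the two components of eta(N) for a trailing N-yr mean of an
# AR(1) process (lag-1 correlation g) plus a linear trend b (in sigma/yr).
g, b = 0.2, 0.05
N = np.arange(2, 41)

eta_a = (1.0 / N) * (1 + g) / (1 - g)      # sampling error from red noise
eta_b = (b * (N - 1) / 2) ** 2             # squared lag behind the trend (tau = 0)
eta = eta_a + eta_b

N_opt = N[np.argmin(eta)]
print(N_opt)                               # minimum near N = 11 yr, as in Fig. 2
```

The competing monotonic behaviors of the two terms guarantee a single interior minimum whenever b ≠ 0.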
Forecasts at CPC and other climate prediction centers do not, in general, exceed 1-yr lead (0 ≤ τ ≤ 1 yr). Estimates of Noptimal(g, β, τ) and ηoptimal(g, β, τ) for τ = 0 and 10 yr and for realistic ranges of g and β, β ≠ 0, are given in Table 2. The estimates for τ = 1, not shown here, are very close to those for τ = 0. Note the following from Table 2:
The optimal period of averaging Noptimal and its associated error ηoptimal depend more on β than on g except for large g; that is, they are dominated by the trend rather than by the weak red noise. Thus, if the climatic trend has a seasonal cycle and geographical pattern, so will the optimal period of averaging.
For trends as large as b = 0.1σ yr−1 the optimal period of averaging Noptimal is very short (from 6–7 yr for τ = 0 to 3 yr for τ = 10 yr) and the error ηoptimal of OCN exceeds the acceptable limit of 0.25 for almost all τ shown. For b = 0.05σ yr−1, τ > 0, and g > 0.2, the error also exceeds 0.25.
The errors related to the climatic trend in the OCN estimates of Y(t0) are systematic, not random. Such errors should be treated differently than random errors.
The WMO-recommended 30-yr averaging (Table 1) is close to the OCN for very weak climatic trends (b = 0.01σ yr−1), and the error is identical within the precision of both tables. Because OCN is updated annually, however, it is the preferred choice even with a very weak underlying trend, but not as practiced at CPC (see the paragraph after next). As a consequence, OCN has two advantages over conventional practice: an averaging period Noptimal adapted to the situation and annual updates through the most recent year of data.
Thus the WMO technique is a good treatment for very weak climatic trends, and the OCN technique is good for modest to medium trends with the lead τ relatively small, but neither has acceptable error for strong trends and longer leads.
As mentioned earlier, OCN is currently used at CPC for short-term climate prediction, τ ≤ 1 yr, using empirically, not theoretically, estimated optimal averaging time intervals (for τ = 1 yr) fixed at 15 yr for monthly precipitation and 10 yr for monthly temperatures (Huang et al. 1996; Van den Dool 2006). From Table 2 these averaging periods correspond approximately to those for short-lead cases with b = 0.03σ yr−1 and b = 0.05σ yr−1, respectively. As a consequence, the entries in Table 2 are underestimates of the errors of CPC/OCN when underlying trends in precipitation and temperature differ much from these values. More specifically, for τ = 0, CPC/OCN will have larger errors than those in Table 2 for all cases except b = 0.05σ yr−1 and g = 0.1 for temperature and b = 0.03σ yr−1 and g = 0.2 for precipitation. A fixed N is more convenient but is inadvisable unless Noptimal varies little across a user’s applications.
The OCN technique is an attempt to account for the effects of a climatic trend without defining and estimating the trend itself. Consideration will be given next to the use of observed data to estimate climatic trends and to utilize the estimated dependence of expected value on time. Such an approach should work better than the OCN for very strong trends.
3. Time-dependent climatic normals
a. Least squares linear trend
The values of η(N, g = 0.2, τ = 0), the error in expected value Y(t0) at the end of time interval N yr [used to estimate the trend in Y(t)], are displayed in Fig. 3 (the solid line). Dotted and dashed lines show separately the averaging and the trend-related components of error variance. The first of them (dotted line) is the same as in Fig. 2. It decreases with an increase of N. However, the trend-related error (dashed line) also decreases with an increase of N, because the error in estimating the slope must decrease as the length of the fitted series with the underlying trend increases. Furthermore, unlike before, the trend-related error does not depend on the trend, and as a consequence the total error η is random with no systematic component. We can conclude that the empirically estimated climatic trend Y(t) = a + bt provides sufficiently accurate unbiased estimates of expected value of Y(t0) for records as short as ∼30 yr in the case of g = 0.2.
Climatic normals, estimated from observations over a limited time interval, should be useful for predictions beyond the boundaries of this time interval. Given estimated parameters of a linear trend in expected value Y(t) = a + bt, we can use the same a and b to find Y(t0 + τ), where t0 is the end of the fitting period N and t = t0 + τ is some time in the future. Errors in extrapolated Y(t0 + τ) increase with increasing τ. Theoretical estimates of the error η(N, τ) for different N, τ, and g are shown in Fig. 4.
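The growth of extrapolation error with lead can be checked with a Monte Carlo sketch (white-noise residuals and b = 0.05σ yr−1 are illustrative assumptions; the theoretical estimates in Fig. 4 also cover red noise):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit an OLS line to N years of trend-plus-noise data, extrapolate tau
# years past the end, and measure the normalized squared error eta
# against the true trend line.
N, b, n_sims = 30, 0.05, 10000
t = np.arange(N)

def eta_extrap(tau):
    errs = np.empty(n_sims)
    for i in range(n_sims):
        y = b * t + rng.standard_normal(N)       # unit-variance white noise
        slope, intercept = np.polyfit(t, y, 1)
        t_fut = (N - 1) + tau
        errs[i] = (slope * t_fut + intercept) - b * t_fut
    return np.mean(errs ** 2)

eta0, eta10 = eta_extrap(0), eta_extrap(10)
print(eta0, eta10)    # error grows substantially with lead tau
```

Consistent with the discussion that follows, a 30-yr fit with white residuals stays under the η = 0.25 threshold at short leads but exceeds it by τ = 10 yr.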
For all cases in Fig. 4 with g < 0.5, extrapolation of the linear trend 1 yr into the future estimated from N ≥ 30 has expected error less than the acceptable value of 0.25. For users of climatic information a decade in the future (τ ≈ 10 yr), trends must be estimated from significantly longer (N ≈ 40–50 yr) climatic records for acceptable precision. In actuality, it is highly questionable that these longer trend fits are viable in practice because of the nature of actual trends discussed next.
As a practical matter, virtually all of the current important temperature trends over the United States (many exceed b = 0.05σ yr−1) have occurred over the last 30 yr. As a consequence, the only relevant (to current climate change) parts of Fig. 4 are those with N ≤ 30 yr. Because of the strong dependence on the redness (g) of the residual variability, the results in Fig. 4 preclude accurate multiyear extrapolation except when the 1-yr lag correlation is zero or very small, because N should be constrained to be less than or equal to 30 yr.
It is crucial to account for these considerations in studies focused on the current climate and on modern and future climate changes. In these instances, least squares linear trend fits to the last (prior to 2006) 40–100 or more years of data will generally underestimate recent changes and can distort and misrepresent the pattern of these changes. These problems can be avoided by following some sound practices for linear trend estimation: 1) Linear trends should never be fit to a whole time series or a segment arbitrarily, 2) at a minimum, a plot of the times series should be examined to confirm that the trend is not obviously nonlinear, and 3) to the extent possible, the functional form of the trend should be based on additional considerations.
In this context, note that very large scale trends associated with global climate change are approximately linear over the last 30 yr or so but decidedly not over the last 40–70 or more. This fact is the basis for the modified approach to linear least squares that will be examined next. First, however, the relative performance in estimation and extrapolation of normals between the OCN and linear least squares (given an underlying linear trend) will be summarized.
Table 3 shows error thresholds (as a function of redness) expressed as the maximum lead τ (in years) with acceptable error, for 30-yr linear trend fits and the OCN with b = 0.05σ yr−1 and b = 0.03σ yr−1. The table reflects a main conclusion of the last section: that the OCN has acceptable error for modest to moderate underlying linear trends at medium to short leads, respectively. However, it is also clear from Table 3 that 30-yr least squares linear fits (hinge fits are discussed in the next section) substantially outperform the OCN with b = 0.05σ yr−1 and are competitive (as long as the autocorrelation in the climate noise is very small) at b = 0.03σ yr−1. The OCN’s advantage with b = 0.03σ yr−1 (as reflected in Table 3) in operational CPC practice should be less for every g because of the use of fixed (and suboptimal) averaging periods. Except for very small g, this overestimation of operational OCN τmax will be greater for temperature series than for precipitation because the latter’s averaging period (15 yr) is generally closer to the optimal period (Table 2).
The calculations here suggest that 30-yr linear trends are at least as good for operational purposes for all but very modest trends (b < 0.03σ yr−1), at least for temperature normals (for precipitation normals, OCN’s advantage is lost for only slightly stronger trends). As shown in the next section, a modification to the linear trend fits (based on global climate change considerations) that reduces the trend-related error extends the useable extrapolation range even further.
b. The least squares “hinge”
Very large scale trends (in global, hemispheric, land, ocean, etc., seasonal and mean annual temperatures) associated with global warming are approximately linear since the mid-1970s but decidedly not when viewed over longer periods. In particular, smoothed versions of these series dominantly suggest little change in their normals from around 1940 up to about the mid-1970s (e.g., Solomon et al. 2007).
With the reasonable assumption that the strong trends over North America (and probably elsewhere as well) in the last 30 yr or so are related to global warming, an appropriate trend model to fit to a particular monthly or seasonal mean time series to represent its time-dependent normal is a hingelike shape. This least squares hinge fit is a piecewise continuous function that is flat (i.e., constant) from 1940 through 1975 but slopes upward (or downward as dictated by the data) thereafter: Y(t) = a for 1940 ≤ t ≤ 1975 and Y(t) = a + b(t − 1975) for t ≥ 1975. The choice of 1975 as the hinge point is based on numerous empirical studies and model simulations that all suggest the latest period of modern global warming began in the mid-1970s. The slope b is insensitive to small changes in this choice.
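In least squares terms, the hinge is simply a regression on an intercept and a ramp predictor, so it can be sketched as follows (the example series is synthetic; the hinge point is fixed at 1975 as in the text):

```python
import numpy as np

# Regress y on an intercept and the ramp max(0, t - 1975): the fit is flat
# through 1975, linear thereafter, and continuous at the hinge point.
def hinge_fit(years, y, hinge=1975):
    ramp = np.maximum(0, years - hinge)
    X = np.column_stack([np.ones_like(years, dtype=float), ramp])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b                      # flat level a, post-hinge slope b

def hinge_eval(years, a, b, hinge=1975):
    return a + b * np.maximum(0, years - hinge)

# Noise-free check: the fit recovers a known hinge exactly.
years = np.arange(1940, 2005)
y_true = hinge_eval(years, 10.0, 0.06)
a, b = hinge_fit(years, y_true)
print(a, b)
```

Extrapolation to a future year t0 + τ is then just hinge_eval at that year, using the fitted a and b.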
The hinge shape is clearly the behavior of the JFM mean temperature series for the climate division representing western Colorado (Fig. 5), where the observed series and the ordinary least squares hinge fit are both shown. Western Colorado temperature was selected as an example for Fig. 5 because it has little or no ENSO signal, but to first order the hinge dominantly characterizes the behavior of U.S. climate-division monthly and seasonal mean time series with moderate to strong trends, especially for surface temperatures.
The hinge technique was first (and exclusively) used in 1998 and 1999 by CPC to help to estimate and extrapolate normals for the cold-season forecasts for 1998/99 and 1999/2000, respectively—both winters with a strong La Niña. After the winter of 1997/98, the great El Niño winter, it was determined at CPC that the cold bias in the winter forecast for the western United States was entirely a consequence of failing to account for a warming climate. Based on the work of Livezey and Smith (1999a, b), the warming was associated with global climate change.
The hinge fit was subsequently devised not only to estimate and extrapolate the trends, but to assess more accurately the historical impacts of moderate to strong ENSO events on the United States. This signal separation required the reasonable assumption that ENSO and global change were independent to first order. With this assumption, conventional approaches for estimating event frequencies conditioned on the occurrence of El Niño or La Niña (e.g., Montroy et al. 1998; Barnston et al. 1999) were modified to account for the changing climate as well.
The effectiveness of the hinge-fit method for the JFM 2000 U.S. mean temperature forecast is shown in Fig. 6. The three panels in the figure are conditional mean temperature probabilities using a version of conventional methods (often referred to as composites; Barnston et al. 1999; Fig. 6a); conditional probabilities using the hinge for trend fitting and signal separation (Fig. 6b); and the verifying observations (Fig. 6c). The first steps to construct Fig. 6b consisted of hinge fits to the JFM time series through 1999, calculation of JFM residuals from the hinge fits for past La Niñas, 1-yr extrapolations of the fitted slopes, and addition of the La Niña residuals to the 1-yr extrapolations to obtain conditional frequency distributions. After some spatial smoothing, these values were then referenced to three equally probable categories based on 1953–97.
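These steps can be sketched end to end on synthetic data (the La Niña year list, hinge point, and slope below are illustrative assumptions, and no spatial smoothing is attempted):

```python
import numpy as np

rng = np.random.default_rng(2)

# 1) hinge fit through 1999; 2) residuals for past La Nina years;
# 3) 1-yr extrapolation of the fit; 4) add residuals back to form a
# conditional distribution; 5) reference it to climatological terciles.
years = np.arange(1940, 2000)
y = 0.05 * np.maximum(0, years - 1975) + rng.standard_normal(len(years))

ramp = np.maximum(0, years - 1975)
X = np.column_stack([np.ones(len(years)), ramp])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
fit = a + b * ramp

la_nina = np.isin(years, [1950, 1955, 1971, 1974, 1989, 1999])  # illustrative
resid = (y - fit)[la_nina]

forecast_2000 = a + b * (2000 - 1975)       # 1-yr extrapolation of the hinge
cond_dist = forecast_2000 + resid           # conditional frequency distribution

terciles = np.quantile(y[(years >= 1953) & (years <= 1997)], [1/3, 2/3])
p_below = np.mean(cond_dist < terciles[0])
p_above = np.mean(cond_dist > terciles[1])
print(p_below, 1 - p_below - p_above, p_above)
```

The key design choice is that the residuals (the ENSO signal plus noise) are computed about the time-dependent normal, not about a fixed climatology, before being recombined with the extrapolated trend.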
Note the large differences between Figs. 6a and 6b and their implications for JFM and the extraordinary similarity between Figs. 6b and 6c, the forecast and observed conditions. The year 1966 was used as the hinge point in these 1999 calculations; use of a more appropriate mid-1970s point would have produced a forecast with even wider coverage of enhanced probabilities of a relatively warm JFM.
It is clear from CPC’s and subsequent experience that composite studies of ENSO impacts that do not attempt to account for important trends are deficient from the outset. There fortunately are seasons/areas of the United States for which recent trends are still weak but the ENSO signature is strong, for example much of the Southeast in the winter (Fig. 1). In these instances the climate analyst can ignore trend to diagnose ENSO-related effects; otherwise trend consideration is a critical first step for useful results, regardless of the methods employed.
Here, to explore hinge-fit expected errors, Monte Carlo simulations are used to assess the reduction in error by using a hinge instead of a straight-line least squares fit. Our expectation is that hinge fits will have smaller overall error, simply because the use of 35 additional years (1940–74) of observations to estimate climate normals in the mid-1970s will constrain the starting value at the beginning of the trend period.
In effect, the hinge approach reduces the usual oversensitivity of least squares linear trend fits to one of the endpoints of the time series. A particularly important example of this problem is the pattern of U.S. winter temperature trends computed from the mid-1970s. The winters of 1976/77 and 1977/78 were unusually warm in the west with record cold in the east. Least squares linear trend fits starting from 1976 or 1977 consequently tend to overestimate warming in the east and underestimate it in the west, leading to maps with far more uniform warming than the pattern in Fig. 1.
Simulated time series 75 yr in length (to represent 1940–2014) were generated by adding random, stationary red noise with standard deviation of 1 and lag-1 autocorrelation g to a constant zero over the first 36 yr (to 1975) and to an upward linear trend with constant slope thereafter. Monte Carlo experiments, each consisting of 2500 simulations, were conducted for β = 0.03 and g ranging from 0.0 to 0.5. Straight lines and hinges were fit with ordinary least squares to each time series with data spanning 1975–2004 and 1940–2004, respectively. Each fit was then extrapolated linearly to 2014, and its difference from the specified value of the underlying hinge was computed. The results should not depend on slope, and this was confirmed by other calculations.
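The experiment just described can be sketched as follows (a smaller number of simulations than the 2500 used here, and a single β and g; the specific values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Generate a flat-then-trending truth with stationary AR(1) noise, fit a
# straight line (1975-2004) and a hinge (1940-2004), extrapolate both to
# 2014, and compare mean squared extrapolation errors.
def ar1(n, g, rng):
    e = rng.standard_normal(n) * np.sqrt(1 - g * g)
    x = np.empty(n)
    x[0] = rng.standard_normal()               # stationary unit variance
    for i in range(1, n):
        x[i] = g * x[i - 1] + e[i]
    return x

def one_trial(beta, g, rng, target=2014):
    years = np.arange(1940, 2005)
    truth = beta * np.maximum(0, years - 1975)
    y = truth + ar1(len(years), g, rng)

    # straight-line fit to 1975-2004 only
    m = years >= 1975
    s, c = np.polyfit(years[m], y[m], 1)
    err_line = (s * target + c) - beta * (target - 1975)

    # hinge fit to the full 1940-2004 record
    ramp = np.maximum(0, years - 1975)
    X = np.column_stack([np.ones(len(years)), ramp])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    err_hinge = (a + b * (target - 1975)) - beta * (target - 1975)
    return err_line ** 2, err_hinge ** 2

sims = np.array([one_trial(0.03, 0.2, rng) for _ in range(2000)])
eta_line, eta_hinge = sims.mean(axis=0)
print(eta_line, eta_hinge)    # the hinge error is the smaller of the two
```

The hinge's advantage comes from the 35 extra flat years pinning down the level at the start of the trend period, exactly as argued above.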
Results in the form of error η for both fits at leads τ = 0, . . . , 10 are displayed in Fig. 7. The error η for the hinge is less than that for the straight-line fit for every point plotted, and its advantage increases with lead and (mostly) the autocorrelation in the residual noise.
Use of generalized least squares for hinge fits should reduce expected errors even further; therefore, these errors were also computed. The gains over the ordinary least squares results in Fig. 7 are small but meaningful, and therefore the generalized least squares results are shown in Table 3. Note that use of the hinge essentially eliminates OCN’s advantage for all but g = 0.5 (rarely observed in U.S. climate-division data for β ≥ 0.03), and even more so when OCN is implemented in a suboptimal fashion with fixed averaging periods. The results here suggest that a preferred approach would consist of the OCN (with variable averaging period) for cases with weak trends and the hinge for cases with moderate to strong trends. Such a strategy would require hinge fits everywhere first for a preliminary diagnosis of the strength of the trend and the redness of the residual climate noise, to guide the choice of final fits and for case-by-case specification of OCN averaging in weak trend situations, respectively.
As a service to the applied climatology community, maps of hinge-based trends for 3-month mean U.S. climate-division surface temperature and precipitation for 3 nonoverlapping periods, which, along with Fig. 1, span the year, are included in appendix A (a more complete set was available at the time of writing online at http://www.cpc.ncep.noaa.gov/trndtext.shtml). The data used in all of the maps and time series shown here and the reasons for their use are also described in appendix A.
c. Other shapes
Error estimates made in the previous four sections are directly applicable in practice only when it is reasonable to assume that changes in normals over the last 30 yr are dominantly linear. The possibility that the shape may be otherwise or unstable is likely the source of some reluctance to adopt a new, albeit simple, approach like the hinge fit to replace the OCN. In fact, a comparison of performances in Table 3 (which are overstated for CPC/OCN) for the stronger trends (β > 0.03) observed commonly for U.S. surface temperatures and precipitation over the last 30 yr suggests that the hinge will produce substantial gains even for trends that are linear only to first order.
Examples of two U.S. climate divisions (and there are many) for which β well exceeds 0.03 for JFM mean temperature but the climate normal since 1975 is not clearly tracking in a straight line are shown in Fig. 8. In both cases the mean temperatures seem to have leveled off (at much higher levels than pre-1980) over the last 20 yr, so that the CPC/OCN gives lower estimates of the 2005 normals than does the hinge. For desert California and the Sierra Nevada (Fig. 8a; β = 0.06) the transition appears gradual from the mid-1970s, but for north central Montana (Fig. 8b; β = 0.04) it appears to have occurred more abruptly in the late 1970s.
The differences in the character of these time series and that for western Colorado (Fig. 5; β = 0.06) may be partially or mostly a consequence of climate noise. Western Colorado does not have much of a winter ENSO signal, but the other two locations do and the respective ENSO impacts are nonlinear (Livezey et al. 1997; Montroy et al. 1998). The possibility that the differences are also the result of real differences in local (or regional) processes also governing recent climate change cannot be discounted, however. In any case, climate models universally predict warming to continue.
Perhaps a better model for time-dependent U.S. seasonal temperature normals is a parabolic hinge, in which the data can dictate a flatter (semicubical parabola) or steeper (cubical parabola) growth after the mid-1970s. Such a model has all the advantages of the hinge—smooth piecewise continuous fits to a stationary climate followed by a changing one, utilizing all the data and allowing straightforward extrapolation—but with the flexibility to accommodate departures from linear growth. On the other hand, it is unclear whether there is a physical basis for this choice. Nevertheless, this and other techniques, including adaptive techniques that can accommodate changes in slopes, need to be explored more thoroughly.
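Both the linear hinge and a power-law generalization of it can be cast as ordinary least squares problems with the change point fixed at 1975, as in the text. The sketch below is illustrative only: the function name, the shape exponent `p`, and the use of NumPy are our own choices, and the fits actually advocated in the paper use generalized rather than ordinary least squares.

```python
import numpy as np

def hinge_fit(years, y, change_point=1975.0, p=1.0):
    """Least squares fit of a 'hinge' normal: flat before the change
    point, then growing like (t - change_point)**p afterward.
    p = 1 gives the linear hinge; other values of p sketch the more
    flexible 'parabolic hinge' idea discussed in the text."""
    ramp = np.maximum(years - change_point, 0.0) ** p
    X = np.column_stack([np.ones_like(years), ramp])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    a, b = coef            # pre-change mean and post-change slope
    return a, b, X @ coef  # fitted normal at each year
```

Because the fit is piecewise continuous by construction, extrapolating the normal a few years past the end of the record amounts to evaluating `a + b * (t - 1975)**p` at the target year.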
More sophisticated low-pass filters than moving averages (i.e., OCN) are frequently used to smooth climate time series. These approaches are purely statistical and do not explicitly address normals as time-dependent expected values, either through use of collateral observational and dynamic model information or time series models to represent the physical processes. A good discussion of these methods that emphasizes the problem of fitting a climate time series near its current endpoint is by Mann (2004). In that paper, the best representations of the recent behavior of the Northern Hemisphere annual mean temperature are produced with use of different versions of the so-called minimum-roughness boundary constraint.
From the perspective of the discussions here and in section 3b, the resulting trends in Mann (2004) are likely modest overestimates of the rate of recent increases in temperature normals. This is a consequence of cooling trends between approximately 1950 and the mid-1970s in the low-pass filtered series that are dominantly a consequence of the exceptionally cold 1970s in North America (cf. Solomon et al. 2007), which in turn is dominantly a result of an exceptionally cold eastern United States (mentioned earlier). There is little evidence that these downturns in the filtered time series are a consequence of other than “climate” noise. In this context it is also difficult to justify the use of these smoothed series for separating ENSO impacts from those of a changing climate, which is another reason (in addition to overestimation of recent trends) to prefer hinge fits.
To round out a comprehensive overview of estimation and extrapolation of climate normals, the progress in developing techniques for the analytical approximation of seasonal and diurnal dependencies of Y(t) from available observations is summarized in appendix B.
4. Concluding remarks
It is clear from the analysis here that WMO-recommended 30-yr normals, even updated every 10 yr, are no longer generally useful for the design, planning, and decision-making purposes for which they were intended. They not only have little relevance to the future climate, but are more and more often unrepresentative of the current climate. This is a direct result of rapid changes in the global climate over approximately the last 30 yr that most climate scientists agree will continue well into the future. As a consequence, it is crucial that climate services enterprises move quickly to explore and implement new approaches and strategies for estimating and disseminating normals and other climate statistics.
We have demonstrated that simple empirical alternatives already exist that, with one simple condition, can not only consistently produce normals that are reasonably accurate representations of the current climate but also often justify extrapolation of the normals several years into the future. The condition is that recent underlying trends in the climate are approximately linear, or at least have a substantial linear component. We are confident that this condition is generally satisfied for the United States and Canada and for much of the rest of the world but acknowledge that there will be situations for which it is not. In this context, two approaches need to be highlighted:
Optimal climate normals are multiyear averages whose length is not fixed at 30 yr, as in the WMO convention, but is adapted climate record by climate record on the basis of easily estimated characteristics (linear trend and 1-yr residual autocorrelation) of those records. The OCN method implemented with flexible averaging periods begins to fail only for very strong underlying trends (between 0.5 and 1 standard deviation of the residual noise per decade) or for longer extrapolations with more moderate background trends (see Tables 2 and 3). Least squares linear-trend fits to the period since the mid-1970s are viable alternatives to OCN when it is expected to fail (Fig. 4 and Table 3), but there is an even better alternative.
Hinge-fit normals are based on a model of their time dependence that reflects the known temporal evolution of the large-scale climate and are implemented with generalized least squares. They exploit longer records to stabilize estimates of modern trends in local and regional climates; as a result, they outperform not only straight-line fits (Fig. 7) but even OCN for underlying trends as small as 0.3 standard deviation of the climate noise per decade (Table 3).
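As a minimal illustration of the two highlighted approaches, the sketch below computes an OCN-style normal (an average over the most recent n years) and a hinge-based normal that can also be extrapolated to a target year. The function names and the ordinary-least-squares implementation are our simplifications; the paper's hinge fits use generalized least squares, and the choice of averaging period for OCN is adapted per record rather than fixed.

```python
import numpy as np

def ocn(y, n):
    """OCN-style normal: mean of the most recent n years."""
    return float(np.mean(y[-n:]))

def hinge_normal(years, y, target_year, change_point=1975.0):
    """Current or extrapolated normal from a linear hinge fit:
    flat before the change point, linear after it."""
    ramp = np.maximum(years - change_point, 0.0)
    X = np.column_stack([np.ones_like(years), ramp])
    a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    return float(a + b * max(target_year - change_point, 0.0))
```

For a record with a steady post-1975 upward trend, the OCN average necessarily lags the endpoint of the trend (it averages over earlier, cooler years), whereas the hinge estimate tracks the trend to the target year; this is the essential reason for the hinge's advantage under strong trends.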
Given these results, we make three recommendations:
The WMO and national climate services should formally address a new policy for changing climate normals and other climate statistics, using the results here as a starting point.
NOAA’s Climate Office, NCDC, and CPC should cooperatively initiate an ongoing program to develop and implement improved estimates and forecasts of official U.S. normals.
As a first step, NCDC and CPC should work together to exploit quickly the potential improvements to their respective products demonstrated here. To be specific, the simple hybrid system described in section 3b that combines the advantages of both the OCN and the hinge fit should be implemented in regular operations as soon as possible to produce new experimental products.
As new work on climate normals and their use for forecasts of climate variability and change moves forward, climate analysts need to be cognizant of two points emphasized in sections 3a and 3b:
Linear or other trends should never be fit to a whole time series or a segment arbitrarily; the functional form of the trend should be based on examination of the time series and, to the extent possible, additional considerations.
Any assessment of the historical impacts of ENSO and their use in risk analysis or prediction must take into account climate change and, to the extent possible, separate its effects.
The additional considerations mentioned in the first point immediately above can include results or insight from state-of-the-art climate models. Until now a discussion of the role such models can play in the work and programs we are recommending above has been deferred. There are two potential uses for models that best track the large-scale climate and can replicate at least to first order the variability associated with ENSO and other important modes of interannual variability (i.e., the climate noise). Both uses depend on the fact that the time dependence of climate normals is “known” reasonably well (at least for some parameters, places, and seasons) if the ensemble of model runs is large enough and the runs do not span time scales on which long-term drift associated with, for example, the thermohaline circulation becomes important. In these instances a qualifying model can be used 1) to gain insight about the functional form of regional and subregional trends and 2) as a tool to test competing empirical methods for estimating and projecting these trends. Of course, efforts continue to improve the ability of climate models to replicate the climate comprehensively at smaller spatial and shorter temporal scales. We look forward to when these models can do this credibly and be directly exploited for computing climate normals and other climate statistics.
Acknowledgments
KYV acknowledges support by NOAA through a Climate Program Office grant to CICS.
REFERENCES
Barnston, A. G., A. Leetmaa, V. E. Kousky, R. E. Livezey, E. A. O’Lenic, H. M. Van den Dool, A. J. Wagner, and D. A. Unger, 1999: NCEP forecasts of the El Niño of 1997–98 and its U.S. impacts. Bull. Amer. Meteor. Soc., 80 , 1829–1852.
Barnston, A. G., Y. He, and D. A. Unger, 2000: A forecast product that maximizes utility for state-of-the-art seasonal climate prediction. Bull. Amer. Meteor. Soc., 81 , 1271–1280.
Beaumont, R. T., 1957: A criterion for selection of length of record for moving arithmetic mean for hydrological data. Trans. Amer. Geophys. Union, 38 , 198–200.
Cavalieri, D. J., C. L. Parkinson, and K. Y. Vinnikov, 2003: 30-year satellite record reveals contrasting Arctic and Antarctic decadal sea ice variability. Geophys. Res. Lett., 30 .1970, doi:10.1029/2003GL018031.
Enger, I., 1959: Optimum length of record for climatological estimates of temperature. J. Geophys. Res., 64 , 779–787.
Guttman, N. B., 1989: Statistical descriptors of climate. Bull. Amer. Meteor. Soc., 70 , 602–607.
Higgins, R. W., H-K. Kim, and D. Unger, 2004: Long-lead seasonal temperature and precipitation prediction using tropical Pacific SST consolidation forecasts. J. Climate, 17 , 3398–3414.
Huang, J., H. M. Van den Dool, and A. G. Barnston, 1996: Long-lead seasonal temperature prediction using optimal climate normals. J. Climate, 9 , 809–817.
Knutson, T. R., T. L. Delworth, K. W. Dixon, I. M. Held, J. Lu, V. Ramaswamy, and M. D. Schwarzkopf, 2006: Assessment of twentieth-century regional surface temperature trends using the GFDL CM2 coupled models. J. Climate, 19 , 1624–1651.
Lamb, P. J., and S. A. Changnon Jr., 1981: On the “best” temperature and precipitation normals: The Illinois situation. J. Appl. Meteor., 20 , 1383–1390.
Livezey, R. E., and T. M. Smith, 1999a: Covariability of aspects of North American climate with global sea surface temperatures on interannual to interdecadal time scales. J. Climate, 12 , 289–302.
Livezey, R. E., and T. M. Smith, 1999b: Interdecadal variability over North America: Global change and NPO, NAO, and AO? Proc. 23d Annual Climate Diagnostics and Prediction Workshop, Miami, FL, U.S. Department of Commerce, 277–280.
Livezey, R. E., M. Masutani, A. Leetmaa, H. Rui, M. Ji, and A. Kumar, 1997: Teleconnective response of the Pacific–North American region atmosphere to large central equatorial Pacific SST anomalies. J. Climate, 10 , 1787–1820.
Mann, M. E., 2004: On smoothing potentially non-stationary climate time series. Geophys. Res. Lett., 31 .L07214, doi:10.1029/2004GL019569.
Montroy, D. L., M. B. Richman, and P. J. Lamb, 1998: Observed nonlinearities of monthly teleconnections between tropical Pacific sea surface temperature anomalies and central and eastern North American precipitation. J. Climate, 11 , 1812–1835.
Polyak, I. I., 1979: Methods for the Analysis of Random Processes and Fields in Climatology (in Russian). Gidrometeoizdat, 255 pp.
Polyak, I. I., 1996: Computational Statistics in Climatology. Oxford University Press, 358 pp.
Schneider, J. M., J. D. Garbrecht, and D. A. Unger, 2005: A heuristic method for time disaggregation of seasonal climate forecasts. Wea. Forecasting, 20 , 212–221.
Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K. B. Averyt, M. Tignor, and H. L. Miller, Eds., 2007: Climate Change 2007: The Physical Science Basis. Cambridge University Press, in press.
Van den Dool, H. M., 2006: Empirical Methods in Short-Term Climate Prediction. Oxford University Press, 240 pp.
Vinnikov, K. Y., 1970: Some problems of radiation station network planning (in Russian). Meteor. Gidrol., 10 , 90–96.
Vinnikov, K. Y., and A. Robock, 2002: Trends in moments of climatic indices. Geophys. Res. Lett., 29 .1027, doi:10.1029/2001GL014025.
Vinnikov, K. Y., and N. C. Grody, 2003: Global warming trend of mean tropospheric temperature observed by satellites. Science, 302 , 269–272.
Vinnikov, K. Y., A. Robock, and A. Basist, 2002a: Diurnal and seasonal cycles of trends of surface air temperature. J. Geophys. Res., 107 .4641, doi:10.1029/2001JD002007.
Vinnikov, K. Y., A. Robock, D. J. Cavalieri, and C. L. Parkinson, 2002b: Analysis of seasonal cycles in climatic trends with application to satellite observations of sea ice extent. Geophys. Res. Lett., 29 .1310, doi:10.1029/2001GL014481.
Vinnikov, K. Y., A. Robock, N. C. Grody, and A. Basist, 2004: Analysis of diurnal and seasonal cycles and trends in climatic records with arbitrary observation times. Geophys. Res. Lett., 31 .L06205, doi:10.1029/2003GL019196.
Vinnikov, K. Y., N. C. Grody, A. Robock, R. J. Stouffer, P. D. Jones, and M. D. Goldberg, 2006: Observed and model-simulated temperature trends at the surface and troposphere. J. Geophys. Res., 111 .D03106, doi:10.1029/2005JD006392.
APPENDIX A
U.S. Megadivision 3-Month Mean Temperature and Precipitation Trends
Maps of hinge-based trends (section 3b) of 3-month mean temperature and precipitation for 102 U.S. climate megadivisions (formed from the original 344) are shown in Figs. A1 and A2.
Climate-division data are often used at CPC (Barnston et al. 2000; Schneider et al. 2005) instead of station data because of the noise reduction inherent in aggregating nearby stations that strongly covary on intraseasonal to interannual time scales. The original 344 divisions are aggregated to 102 megadivisions mostly through combination of small adjacent divisions in the eastern half of the United States. Western divisions are essentially identical in both datasets. The reduction to 102 was originally done to approximate an equal-area representation for the United States, which is especially desirable for principal component–based studies; however, the additional aggregation provides further noise reduction for the adjacent, strongly covarying eastern divisions. Numerous studies reaffirm that the 102-division setup is more than sufficient to capture the spatial degrees of freedom in the coherent variability of U.S. seasonal mean temperature and precipitation. Megadivision normals are simple arithmetic averages of those for the divisions that compose them.
Data spanning from 1941 (1931) to 2005 with the hinge at 1975 are used to fit the temperature (precipitation) data at each division for each 3-month period. Combined with Fig. 1, Figs. A1 and A2 span the whole year. Based on arguments presented in sections 3a and 3b, we believe the trends displayed here more accurately represent modern U.S. climate change than any previously published.
On each temperature trend map the first color category generally does not represent an important trend. The same is true for precipitation except for seasons/locations that are arid or semiarid. The overall tendency across all maps is dominantly toward warming and, to a notable degree, toward increasing precipitation. Note for the temperature trends (Figs. 1a and A1) that 1) the Southwest has warming trends in every season; 2) west of the high plains the country has significant and consistent warming trends from winter through summer (Figs. 1a and A1a,b); 3) trends are dominantly weak and inconsistent east of the high plains in summer (Fig. A1b) and autumn (Fig. A1c), and the Southeast has a mostly weak cooling trend in spring (Fig. A1a); and 4) the wintertime trend map (Fig. 1a) is remarkable, reflecting almost continent-wide warming (the exception is Maritime Canada, not shown).
For precipitation trends (Figs. 1b and A2), only the Northwest (autumn/winter; Figs. 1b and A2a,c) and Texas (spring/summer; Figs. A2b,c) have large areas of negative precipitation trends in more than one season and these are mostly small. Note that much of the crop-producing United States outside Texas and some of its surroundings has positive precipitation trends in the growing season (Figs. A2b,c). There is no indication in these results of a trend toward more drought nationwide. Among several area/seasons where trends are upward, the south-central region in the autumn (Fig. 1b) stands out as the most notable.
APPENDIX B
Annual and Diurnal Cycles in Climatic Trends
Different techniques need to be used for variables with seasonal cycles that cannot be approximated properly with a small number of harmonics of the annual cycle. Such techniques can be based, for example, on piecewise least squares approximation of periodic functions A(t), B(t), and so on, by algebraic polynomials in the vicinity of each specific phase of a seasonal cycle.
In addition to the seasonal cycle there is a diurnal cycle in most climatic records, and there can be diurnal cycles in trends as well. In such a case, the generalized coefficient functions A(t), B(t), and so on, in (B1) consist of short-time diurnal variations with a fundamental period of 1 day superimposed on the longer-period annual cycle (Vinnikov and Grody 2003; Vinnikov et al. 2004, 2006). Such processes are well known as amplitude-modulated signals in radio physics.
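A minimal version of this kind of model, with the coefficient functions A(t) and B(t) each expanded in a few Fourier harmonics of the annual cycle, reduces to an ordinary linear design matrix. The sketch below is only illustrative: the function name and harmonic count are our choices, and the full model behind (B1), with diurnal modulation superimposed on the annual cycle, is more general.

```python
import numpy as np

def seasonal_trend_design(t, n_harm=2):
    """Design matrix for Y(t) = A(t) + B(t)*t, where the intercept
    A(t) and the trend coefficient B(t) are each expanded in n_harm
    Fourier harmonics of the annual cycle (t in years)."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harm + 1):
        w = 2.0 * np.pi * k * t
        cols += [np.cos(w), np.sin(w)]
    A_part = np.column_stack(cols)                    # basis for A(t)
    return np.hstack([A_part, A_part * t[:, None]])   # basis for B(t)*t
```

Fitting the resulting matrix to the observations with least squares yields harmonic coefficients for both the mean annual cycle and its seasonally varying trend; the columns multiplied by t are exactly the "amplitude modulated" terms mentioned above.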
This approach has been tested using multidecadal time series of hourly observations of surface air temperature at selected meteorological stations (Vinnikov et al. 2004). In addition, application of this new technique to satellite microwave monitoring of mean tropospheric temperatures made it possible to resolve a contradiction between satellite and surface observations of contemporary global warming trends (Vinnikov and Grody 2003; Vinnikov et al. 2006).
A limited number of Fourier harmonics is often also not sufficient to obtain an accurate approximation of the shape of diurnal cycles. As before, other classes of periodic functions can be found or constructed to improve approximations of Y(t). In this instance, estimation of Y(t) can be based on patchwise least squares approximation of periodic functions A(t), B(t), and so on, by two-dimensional algebraic polynomials in the vicinity of each specific phase of seasonal and diurnal cycles.
These techniques can be used also for approximation and evaluation of climatic trends and cycles in variance, lag, and cross correlation and in higher moments of the statistical distribution of climatic variables, in the same way that the least squares technique is used for approximation of trends in expected value. Estimates of Y(t) can be utilized to compute residuals y′(t) for each t. Then, using the same technique for the variables y′(t)2, y′(t)3, y′(t)4, y′(t)y′(t lag), x′(t)y′(t), and so on, we can evaluate trends in variance and other moments of the statistical distribution of the variables y(t) and any other variable x(t). This idea has been recently formulated and applied to study trends in variability of selected climatic variables (Vinnikov and Robock 2002; Vinnikov et al. 2002a). However, no statistically significant trends were found in twentieth-century variability of the large-scale climatic indices that were analyzed.
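The residual-moment idea in the paragraph above can be sketched as follows, assuming for simplicity a linear Y(t) and using the squared residuals to estimate a trend in variance. The function name and the ordinary-least-squares details are our own; the cited studies apply the same device to higher moments and lag products with richer Y(t) models.

```python
import numpy as np

def trend_in_moment(t, y, power=2):
    """Fit a linear Y(t), form residuals y', then fit a linear trend
    to y'**power; power=2 estimates a trend in variance."""
    X = np.column_stack([np.ones_like(t), t])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    slope = np.linalg.lstsq(X, resid ** power, rcond=None)[0][1]
    return float(slope)
```

Applied to a series whose noise amplitude grows with time, the estimator returns a positive variance trend; applied to the large-scale climatic indices analyzed in the cited work, no statistically significant trend emerged.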
Studying seasonal (and diurnal) cycles in variances and lag correlations is necessary if we want to use the generalized least squares technique instead of the ordinary one to estimate unknown parameters in (B1). Taking into account the covariance matrix of observed data, the generalized least squares technique provides a more accurate estimate of Y(t) and a much better estimate of its accuracy (Vinnikov et al. 2006).
Table 1. Theoretical estimates of η(N, g, β, τ), the expected mean-square relative [i.e., δ2(t)/σ2] error of WMO normals at the end of an N = 30 yr period of averaging (τ = 0) and 10 yr later (τ = 10 yr) for different linear trends β = b/σ and lag-1 correlations g in climatic records. Values equal to or greater than 0.25 are shown in boldface.
Table 2. Optimal climate normals technique: analytical theoretical estimates of Nopt (yr) and ηopt (where opt denotes optimal) for τ = 0 and 10 yr and different lag-1 correlation coefficients g and trends β in climatic records. Values equal to or greater than 0.25 are shown in boldface.
Table 3. The maximum lead (yr) τmax with acceptable error η ≤ 0.25 for different 1-yr lag autocorrelation g and different projections of an underlying linear-trending normal estimated from climate time series models. Results for the hinge fit (trend period is 30 yr, the same as for the linear fit) are for generalized least squares, which yields small gains over the ordinary least squares results from the Monte Carlo experiment.