Abstract

The southwestern United States has experienced some of the largest increases in nighttime minimum temperatures over the last 60 yr, and climate models project more of the same through the end of the century. As climate, geography, and population density vary considerably over the area, very diverse extreme temperature levels and dynamics are observed. It is shown that nighttime minimum temperatures over the 1950–2009 period exhibit more complex dynamics than daytime maximum temperatures. The author studies nighttime minimum temperature series from 12 locations and presents a single model capable of capturing all the features of the data at each location. The time series preprocessing model normalizes seasonal shocks by daily and yearly volatility components and then models the residual volatility as an exponential generalized autoregressive conditional heteroskedasticity [EGARCH(1, 1)] process with a seasonal autoregressive structure, accounting for nonlinear and seasonal linear dependence, respectively, in the residual series. An exceedance-over-high-thresholds approach is then used to model the tail of the distribution of scaled residuals from the preprocessing model. The resulting marginal distribution of nighttime minimum temperature at each location is then examined to see how its mean, scale, and shape have changed over the 60-yr period. Changes at the 12 locations vary considerably: many locations have seen considerable change in some or all of the three parameters, while two locations have experienced little or no change.

1. Introduction

Our planet faces a changing climate. Increases in global average air and ocean temperatures, widespread melting of snow and ice, and rising global average sea level are now well documented (see, e.g., Fig. 1.1 of Pachauri and Reisinger 2007). Particularly disconcerting are increases in nighttime minimum temperatures, which have been larger than increases in daytime maximum temperatures on a global scale. Reviewing the surface air temperature records of the past 150 yr, Jones et al. (1999) show the trends for each 5° × 5° grid box on an annual basis for the 1950–93 period; minimum temperatures decrease in only a few areas. Jones et al. (1999) also show that the diurnal temperature range has decreased by 0.08°C per decade over the same period. These results were later confirmed by Vose et al. (2005), who reviewed larger data sources and show that minimum temperature increased about twice as fast as maximum temperature over global land areas since 1950.

Nocturnal global warming has an important impact on ecosystems. Alward et al. (1999) show that the aboveground net primary productivity of grassland vegetation is negatively correlated with seasonal minimum temperatures. A reduction in grassland vegetation can make these areas more vulnerable to invasion by exotic species and less tolerant of drought and grazing, threatening livestock production. Nocturnal global warming can also alter plant–insect interactions, prolong the growing season, influence net primary productivity and carbon sequestration, and have dramatic effects on ecosystem-level carbon, nitrogen, and water cycling, and therefore carbon storage (Turnbull et al. 2002). On the one hand, the timing of the nighttime warming is important, as Piao et al. (2008) warn that the ability of northern ecosystems to sequester carbon may be diminished earlier than previously suggested if the rate of autumn warming exceeds the rate of spring warming. On the other hand, the asymmetry of diurnal warming is a threat to ecosystem processes and vegetation [see, e.g., Xia et al. (2010) and Peng et al. (2013), respectively]. Increased nighttime temperatures have a direct impact on crops and have led to decreased rice yields (Peng et al. 2004). Research indicates that increasing nighttime temperature by 1°C may reduce rice yields by about 10% (IRRI 2013). Since the director general of the International Rice Research Institute (IRRI) recently indicated (Oryza 2013) that rice yield must grow 2% annually to meet rising global demand, increasing nighttime temperatures are a serious threat to the survival of a very large number of people. The above is surely a nonexhaustive list of repercussions, but it suffices to show the importance of changes to nighttime minimum temperatures.

In the United States, increases in nighttime minimum temperatures are strongest in the Southwest (Rowe and Derry 2012). Figures 3 and 5 of Morak, Hegerl, and Christidis (2013) show that the southwestern United States has seen increases in nighttime minimum temperatures over the 1951–2003 period. Global climate models call for more of the same: the top-left panel in Fig. 4 of Orlowsky and Seneviratne (2012) shows a large increase in warm nights for the southwestern United States in 2081–2100 compared to 1980–99 patterns. It is thus important to gain a better understanding of the changes in nighttime minimum temperatures observed in the southwestern United States over recent decades. We have identified 12 locations in the southwestern United States for which nighttime minimum temperature data are available from 1950 to 2009. We find models that are not only suitable for these extreme data, but that also provide insights into the nature of any changes to nighttime minimum temperature distributions over the 60-yr period.

Extreme value theory provides approximate probabilistic models for making inferences about extreme values of the underlying process that generated the data. When these processes are stationary, the approaches are well established (see, e.g., Coles 2001). The analysis of nonstationary processes remains difficult, and the analysis of extreme temperature events presents many challenges. A complete literature review of contributions in this area is found in Dupuis (2012). The latter argues that the preprocessing approach of Eastoe and Tawn (2009) is the best suited for the analysis of daily extreme temperatures and proposes a preprocessing model, and additional modeling tools, that lead to an excellent fit for daily maximum temperatures from an eclectic sample of U.S. cities. Dupuis (2012) also hypothesizes that nighttime minimum temperatures could be analyzed in an analogous fashion.

Meteorologists already know that nighttime minimum temperatures are more difficult to model and forecast than daytime maximum temperatures: nighttime minimum forecasts are much less successful than daytime maximum forecasts. Exploratory data analysis in section 2 shows how nighttime minimum temperature dynamics are more complex than their daytime maximum counterparts. More elaborate models than those currently proposed in the literature for daytime maxima are required to capture this complexity.

In this paper, we propose a statistical modeling approach to deal with all the features of nighttime minimum temperature data: time trends in location and scale; seasonality in location, scale, and autoregressive behavior; and volatility clustering of temperature shocks and time trends in the distribution of these shocks. The model allows us to estimate marginal location, scale, and shape parameters of the distribution of nighttime minimum temperatures through time, and consequently track any changes. With the estimated model, we can then show how the distribution of the nighttime minimum temperature for December 2009 in Los Angeles has different location, scale, and shape from that of the same month in 1950. That is, not only has the mean of the distribution shifted to warmer temperatures, but the variability has been reduced and, even if the change in the shape of the lower tail of cold temperature shocks favors slightly larger shocks, the probability of very cold nighttime minimum temperatures is now very small. This is not the case for all 12 southwestern U.S. locations, however, as the wildly varying climate, geography, and population density produce very diverse extreme temperature levels and dynamics. Our model fits the data well, adeptly summarizing similarities and differences in nighttime minimum temperatures and their changes over the 60-yr period.

The remainder of the paper is organized as follows. In section 2, the southwestern U.S. data are presented and the added complexity of the nighttime minimum temperatures as compared to daily maximum temperatures is highlighted. In section 3, we present some results in extreme value theory, the details of our model, its estimation, goodness-of-fit assessment, and some multivariate issues. In section 4, the southwestern U.S. data are analyzed. Some discussion and conclusions appear in section 5. Finally, some technical details are given in the appendixes.

2. Southwestern U.S. data

We consider the historical extreme temperatures from 1 January 1950 to 31 December 2009 for the 12 locations that appear in Fig. 1. The data are recorded to the nearest tenth of a degree Fahrenheit and are obtained from the National Oceanic and Atmospheric Administration (NOAA; available at www.nesdis.noaa.gov). These series have very few missing observations, and we simply filled in missing points by linear interpolation. Data for 29 February are removed.

Fig. 1.

The 12 southwestern U.S. locations with data available for 1950–2009.

The southwestern United States covers a limited geographical area from a global perspective, but it includes many different climates: semiarid to arid climate with desert lands and mountains, higher elevations that receive large amounts of snow, a coastal, Mediterranean-like climate with somewhat rainy winters and dry summers, large metropolitan areas subject to urban heat island effects, and sparsely populated regions that are among the coldest places in the contiguous United States.

Quantile curves for the observed nighttime minimum temperatures for Los Angeles and Winslow are plotted in Fig. 2. While some differences are observed—most notably the larger range and variability of nighttime minimum temperatures in Winslow compared to Los Angeles—these differences are not fundamental and these plots do not look very different from those for other nighttime minimum temperatures (not shown) or those for daily maximum temperatures (not shown). A first notable difference between nighttime minimum and daytime maximum temperatures may be observed in Fig. 3 where lag-1 correlations for each day of the year are evaluated over the 60-yr period. Nighttime minimum temperature lag-1 correlations show more pronounced seasonality and this seasonality must be accounted for when preprocessing the data. A second notable difference between the dynamics of the two daily extreme temperatures may be observed in Fig. 4 where the autocorrelation function (ACF) of squared deseasonalized residuals [see Eqs. (2) and (3) for details on removing the seasonality in location and scale] are plotted for Los Angeles. The much slower decay in the ACF on the left is an indication of a nonlinear dependence in the residuals (i.e., there is serial dependence in the variance of the residuals). In section 3, we present a preprocessing model that accounts for these additional features as well as those previously identified in the literature for daily extreme temperatures.
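The two diagnostics in Figs. 3 and 4 are simple to reproduce on any station series; the sketch below (not the paper's code) assumes a hypothetical CSV file with columns date and tmin and uses a crude day-of-year deseasonalization in place of the full model of section 3c.

    # Sketch of the exploratory checks behind Figs. 3 and 4: lag-1 autocorrelation by calendar
    # day, and the ACF of squared (crudely) deseasonalized residuals. File and column names
    # are hypothetical.
    import pandas as pd
    from statsmodels.tsa.stattools import acf

    df = pd.read_csv("los_angeles_tmin.csv", parse_dates=["date"]).sort_values("date")
    df = df[~((df.date.dt.month == 2) & (df.date.dt.day == 29))]   # drop 29 February
    df["doy"] = df.date.dt.dayofyear                               # leap-day shift ignored here
    df["tmin_lag1"] = df["tmin"].shift(1)

    # Lag-1 correlation for each day of the year, evaluated over the full record (cf. Fig. 3).
    lag1_by_day = df.groupby("doy").apply(lambda g: g["tmin"].corr(g["tmin_lag1"]))

    # Remove a day-of-year mean and scale, then inspect the ACF of squared residuals (cf. Fig. 4).
    resid = (df["tmin"] - df.groupby("doy")["tmin"].transform("mean")) \
            / df.groupby("doy")["tmin"].transform("std")
    acf_squared = acf(resid.dropna() ** 2, nlags=30)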

Fig. 2.

Minimum, 5% quantile, median, 95% quantile, and maximum observed nighttime minimum temperatures (°F) for (left) Los Angeles and (right) Winslow, 1950–2009. The points plotted below and above the curves are the minimum and maximum observed nighttime minimum temperatures, respectively.

Fig. 3.

Lag-1 autocorrelation of daily extreme temperatures in (left) Los Angeles and (right) Winslow, 1950–2009, according to day of the year.

Fig. 4.

ACF for squared deseasonalized daily extreme temperature residuals in Los Angeles, 1950–2009: (left) nighttime and (right) daytime.

3. Extreme value modeling

a. Stationary processes

We are interested in making statistical inferences about the extreme values of a random process. One possible approach uses excesses over thresholds [see, e.g., chapter 4 of Coles (2001)]. The theoretical underpinnings are from Pickands (1975), and Davison and Smith (1990) present the first data analysis. For a stationary random process, the extreme behavior is modeled by the rate υu = Pr(Y > u) at which observations exceed some sufficiently high threshold u, and the size of these exceedances, which is assumed to follow a generalized Pareto distribution (GPD)

 
$$\Pr(Y - u \le y \mid Y > u) \;=\; 1 - \left(1 + \frac{\xi\, y}{\delta_u}\right)_{+}^{-1/\xi}, \qquad y > 0,$$

where δu > 0 is a scale parameter, ξ is a shape parameter, and a+ = max(0, a). In practice, parameters are estimated from the data. The maximum likelihood estimate (MLE) of the rate parameter υu is nu/n, where nu is the number of exceedances of the threshold u among the n observations of the process. The MLE of the GPD parameters is found by numerical optimization.
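As a concrete illustration of the stationary case (not the paper's code), the GPD can be fitted to the excesses by maximum likelihood with scipy; the data file name is hypothetical.

    # Sketch: MLE for the exceedance rate and the GPD parameters of excesses over a high threshold.
    import numpy as np
    from scipy.stats import genpareto

    y = np.loadtxt("stationary_series.txt")    # hypothetical (approximately) stationary series
    u = np.quantile(y, 0.98)                   # a sufficiently high threshold
    excesses = y[y > u] - u

    rate_hat = excesses.size / y.size          # MLE of the rate, nu_u = n_u / n
    xi_hat, _, scale_hat = genpareto.fit(excesses, floc=0)   # shape xi and scale delta_u (loc fixed at 0)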

b. Dealing with nonstationary processes

It is clear from Fig. 2 that our nighttime minimum temperatures form a nonstationary process in time t, and thus the threshold model from section 3a cannot be used directly. However, used in conjunction with careful statistical modeling, the threshold model can still provide useful inferences. There are essentially three approaches for dealing with nonstationarity: letting the parameters of the GPD be functions of t, specifying a time-varying threshold u(t), or preprocessing the nonstationary process to yield a stationary process of residuals to which the classical tail model can be fitted. The first approach, put forward by Smith (1989) and Davison and Smith (1990), continues to model the exceedances of a fixed high threshold u, but accounts for the nonstationarity in the exceedances by letting the parameters of the GPD be functions of t: the rate of exceedance becomes υu(t) and the excesses now follow a GPD[δu(t), ξ(t)]. Long-term trends can be captured by linear or quadratic terms in t, and seasonal effects can be captured by harmonic terms. This approach is not workable for nighttime minimum temperature, however: too many covariates must be included to capture the seasonal effects, and we quickly run into convergence problems with the MLE. There is also no way to model the autocorrelation in the temperature series. The second approach has often been used pragmatically: limit any one analysis to a given month or season, choose a threshold for these data, and hope that stationarity prevails. There are at least two major problems with this approach: the many ad hoc choices and the lack of a global model. The third approach, put forward by Eastoe and Tawn (2009), preprocesses the data before fitting a tail model to the residuals. This is the approach followed in Dupuis (2012) for the analysis of daily maximum temperature. For a nonstationary random process {Yt}, we specify a model Yt = μt + σtZt where the {Zt} are assumed to be approximately stationary. Below some threshold uz, where many (several thousand for our data) estimates of Zt are available, the empirical distribution provides a good estimate of the distribution of Zt. Above the threshold uz, where data are sparse, {Zt} are modeled according to the first approach for nonstationary processes outlined at the beginning of this section. We thus have υz,u(t) as the rate of exceedance of uz by Zt, and δz,u(t) and ξz(t) as the scale and shape parameters of the GPD, respectively. First we estimate the location and scale parameters μt and σt, and then we estimate υz,u(t), δz,u(t), and ξz(t). The model for μt and σt is detailed in section 3c, and the models for υz,u(t), δz,u(t), and ξz(t) are described in section 3e.
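Under the preprocessing approach, the fitted distribution of the residuals Zt is therefore semiparametric: empirical below the threshold and GPD above it. A minimal sketch of this two-part distribution function, assuming for simplicity a constant exceedance rate and constant GPD parameters, is given below; the time-varying versions of section 3e would replace the constants.

    # Sketch: semiparametric CDF for the residuals Z_t -- empirical below the threshold u_z,
    # GPD above it (constant rate and GPD parameters assumed for simplicity).
    import numpy as np
    from scipy.stats import genpareto

    def semiparametric_cdf(z, z_sample, u_z):
        z_sample = np.asarray(z_sample, dtype=float)
        rate = np.mean(z_sample > u_z)                                   # Pr(Z > u_z)
        xi, _, scale = genpareto.fit(z_sample[z_sample > u_z] - u_z, floc=0)
        z = np.asarray(z, dtype=float)
        empirical = np.mean(z_sample[:, None] <= z[None, :], axis=0)     # body of the distribution
        tail = 1.0 - rate * genpareto.sf(np.maximum(z - u_z, 0.0), xi, scale=scale)
        return np.where(z <= u_z, empirical, tail)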

c. Preprocessing model specification

The southwestern U.S. nighttime minimum temperature data explored in section 2 exhibited many features: seasonality in the mean, seasonality in scale, seasonality in the autoregressive behavior, and serial dependence in the variance of the deseasonalized temperature. Additionally, while not visible from the plots in section 2, there are time trends in location and scale over the 60-yr period. Nighttime minimum temperatures for Winslow are plotted by decade in Fig. 5. Not all locations show the same up/down trend over time, but the level is changing and imposing a linear time trend would be inappropriate. [Figure 12 shows estimated standard deviation by year of deseasonalized nighttime minimum temperatures (model for deseasonalization described later in this section).] While not all locations display the same trends, the standard deviations are not constant and our model should account for that. The slow decay in the ACF in Fig. 4 is an indication of serial dependence in the variance of the residuals. Further investigation reveals that there is an asymmetry in the structure of the volatility at certain locations: positive and negative shocks (i.e., residuals) do not have the same impact on the volatility. Figure 6 shows the estimated volatility (model described later in this section) over an 80-day period. We see how the large positive shock, which corresponds to a nighttime minimum temperature that is much colder than expected, has a larger impact on the volatility than an even much larger (in absolute value) negative shock. We can also see how the larger volatility persists following these large shocks. To accommodate all these features, we let −Yt represent the nighttime minimum temperature on day t and propose the following model:

 
[Eq. (2): preprocessing model for Yt]

where

[Eq. (3): seasonal mean S(t), with decadal dummy variables and Fourier terms]
[Eq. (4): autoregressive model for the scaled residuals]
[Eq. (5): seasonal (Fourier) autoregressive coefficients]
[Eq. (6): EGARCH(1,1) conditional volatility, with leverage term]

where I is an indicator function taking the value 1 if time t is in time block Bk+1 and 0 otherwise, d(t) is a repeating step function that cycles through 1, …, 365, r(t) indicates the year for day t, ζ²d(t) is the daily (calendar) variance component, Ω²r(t) is the yearly variance component, ℱt−1 is the information set available at time t − 1, and Zt is an error term. We model the negative of the nighttime minimum temperature since we want to use the better-known model for maxima (which will be described in section 3e) and wish to refer to excesses over thresholds for the tail of interest [e.g., see section 3.2 of Coles (2001)]. We opt for daily variance components, and not monthly or seasonal components, as the variability can change over very short periods (Fig. 2).
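The displayed equations (2)–(6) could not be recovered from the source file. As a hedged sketch only, and using the parameter names defined above (a; hk; ai, bi; H1; ζ; Ω), the decomposition (2) and the seasonal mean (3) are plausibly of the following form, with Xt carrying the AR–EGARCH structure of (4)–(6); the published parameterization may differ in its details.

    $$Y_t \;=\; S(t) + \zeta_{d(t)}\,\Omega_{r(t)}\,X_t,
    \qquad
    S(t) \;=\; a + \sum_{k=1}^{K-1} h_k\, I\{t \in B_{k+1}\}
             + \sum_{i=1}^{H_1}\left[a_i \sin\!\left(\frac{2\pi i\, d(t)}{365}\right)
             + b_i \cos\!\left(\frac{2\pi i\, d(t)}{365}\right)\right].$$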

Fig. 5.

Nighttime minimum temperatures (°F) by decade for Winslow, 1950–2009. Box-and-whisker plots show the minimum, first quartile, median, third quartile, and maximum observed nighttime minimum temperatures.

Fig. 6.

Estimated shock (left) Zt and (right) volatility σt in Winslow from 20 Jul to 7 Oct 2009.

A regression on decadal dummy variables and sine and cosine Fourier terms, with intercept, allows for a decadal time trend in the mean and removes the seasonality in the mean, respectively. Daily and yearly variance components are included to allow for seasonality and a time trend in variability, respectively. After normalizing seasonal shocks by daily and yearly volatility components, we then model the residual volatility as an exponential generalized autoregressive conditional heteroskedasticity [EGARCH(1, 1)] process (Nelson 1991) with autoregressive (AR) structure to account for the presence of nonlinear and linear dependence, respectively, in the residual series. Fourier terms are also included in the autocorrelation coefficients in (5) (see, e.g., Taylor 2006) to allow for the seasonal behavior seen in Fig. 3. The parameter β measures the persistence in the conditional volatility irrespective of any shocks (unexpected warm or cold nights). When β is relatively large, volatility takes a long time to die out following wildly unexpected nighttime minimum temperatures (warm or cold). The parameter α modulates the impact of a shock on the volatility at the next time step. We include a leverage parameter γ (Glosten et al. 1993) to accommodate the asymmetric behavior seen in Fig. 6.

One can observe seasonal volatility in both plots in Fig. 2. This is the case at all locations (plots not shown). One could alternatively model the seasonal volatility by including exogenous Fourier terms in (6) (see, e.g., Dupuis 2012). However, this approach proved ineffective with these nighttime minimum temperatures: the seasonal and yearly volatility had to be removed prior to fitting the AR-EGARCH(1,1) process described in (4)–(6) to get an adequate fit. In addition, breaking the model down into these separate components improves interpretation, as we will see in section 4.
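The paper fits the AR–EGARCH(1,1) model with the garch function of the S+Finmetrics module of S-Plus (section 3d). As an illustrative substitute only, a simpler AR–EGARCH(1,1) with a leverage term can be fitted in Python with the arch package; this sketch does not reproduce the seasonal Fourier terms in the autoregressive coefficients of (5), and synthetic data stand in for the residuals obtained after the step 1–3 adjustments described in section 3d.

    # Illustrative sketch only, not the model of Eqs. (4)-(6): an AR(3)-EGARCH(1,1) fit with an
    # asymmetry (leverage) term using the Python 'arch' package.
    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(0)
    scaled_resid = rng.standard_normal(5000)      # placeholder for the step 1-3 scaled residuals

    am = arch_model(
        scaled_resid,
        mean="AR", lags=3,                        # AR order would be chosen by BIC, as in the paper
        vol="EGARCH", p=1, o=1, q=1,              # EGARCH(1,1); o=1 adds the asymmetry term
        dist="normal",
    )
    res = am.fit(disp="off")
    print(res.summary())                          # reports alpha, gamma (asymmetry), and beta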

d. Estimation issues

The preprocessing model defined by (2)–(6) is estimated in four steps. In a first step, a multiple linear regression model (with constant variance) is fitted following (3) to detrend and deseasonalize the negative of the nighttime minimum temperatures. The value of K is set to 6 such that each Bk is a 10-yr period, and the value of H1 is chosen following Bayesian information criterion (BIC) considerations (Schwarz 1978). The first step thus yields estimates for a; hk, k = 1, …, 5; H1; and ai, bi for i = 1, …, H1, as well as regression residuals. The second step consists of using these regression residuals to obtain estimates of the daily variance ζ²k, k = 1, …, 365:

 
[Eq. (7): daily (calendar) variance estimator]

The above cannot be applied directly to our data, however, as the residuals used to estimate a given ζ²k need to be stationary (see appendix A), and daily residuals are not stationary over the 60-yr period examined in section 4. We take a pragmatic approach and compute a weekly-decadal variance; that is, we assume a constant variance over a 10-yr period, applicable to all days in a one-week period. We then use the corresponding 70 residuals to compute the latter variance. These calculations are straightforward. In a third step, regression residuals adjusted by the daily standard deviation computed in the second step are used to obtain estimates of the yearly variance Ω²l, l = min r(t), …, max r(t):

 
[Eq. (8): yearly variance estimator]

Again, calculations in this third step are straightforward. Finally, the AR(R) model [(4) and (5)] with the EGARCH(1,1) error model [(6)] allowing for leverage effects is fitted to the fully scaled residuals using the garch function in the S+Finmetrics module of S-Plus, and the values of R, H2,1, …, H2,R, and the necessity of nonzero α, γ, and β are determined following BIC. BIC values for all autoregressive models with R up to 10 and H2,r up to 3, with the EGARCH(1,1) model and all its submodels, are considered, and the model with the smallest BIC value is retained. A quasi-maximum likelihood approach is used for the estimation so that we can obtain the properties of the estimator for this fourth step; details are provided in appendix A. Use of a proven GARCH estimation routine is recommended for this fourth step. With such a routine, parameter estimates are easily produced. This second BIC-based selection considers the parameters estimated in the first step of the estimation as fixed.
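A minimal sketch of steps 2 and 3, under our own assumptions about the data layout (columns date and resid holding the step-1 regression residuals), is given below; it is not the paper's code, only an illustration of the weekly-decadal and yearly variance components.

    # Sketch of steps 2-3: weekly-decadal estimates of the daily (calendar) variance components
    # and yearly variance components, computed from the step-1 regression residuals.
    import numpy as np
    import pandas as pd

    def variance_components(df):
        """df has columns 'date' (datetime) and 'resid' (step-1 regression residuals)."""
        df = df.sort_values("date").copy()
        df["year"] = df.date.dt.year
        df["decade"] = (df["year"] - df["year"].min()) // 10
        df["doy"] = df.date.dt.dayofyear.clip(upper=365)     # 29 Feb removed; leap offset ignored
        df["week"] = (df["doy"] - 1) // 7                    # pool days within a calendar week

        # Step 2: one variance per (decade, week) cell, i.e., roughly 70 residuals per estimate.
        daily_var = df.groupby(["decade", "week"])["resid"].transform("var")
        df["scaled1"] = df["resid"] / np.sqrt(daily_var)

        # Step 3: yearly variance of the daily-scaled residuals.
        yearly_var = df.groupby("year")["scaled1"].transform("var")
        df["z"] = df["scaled1"] / np.sqrt(yearly_var)        # input to the step-4 AR-EGARCH fit
        return df[["date", "z"]], daily_var, yearly_var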

The four-step estimation procedure described above leads to point estimates for the parameters, but standard error estimates produced automatically by the software in the fourth step do not take into account the multistep approach. The properties of the four-step estimator, including the expressions for the proper standard errors for estimates at all steps, are worked out in  appendix A.

e. Extreme value models for residuals

As in Dupuis (2012), we find that the derived series {Zt} from preprocessing is not stationary and that some supplementary modeling is required before fitting the usual extreme value models. There are several issues: the crossing rate of residuals over a high threshold varies according to the time of year, and even over a period of the year where the crossing rate is constant, the distribution of the size of the excesses over the high threshold possibly changes over the years. The remaining seasonality in {Zt} is likely due to the fact that the underlying driving forces for temperature shocks (e.g., advection or light winds) do not act uniformly over the calendar year. Any changes in the size of the excesses over the years could be the result of climate change. We proceed as in Dupuis (2012) and partition the year into seasons using the changepoint detection method described therein, based on finding constant exceedance rates within a season. Unlike Dupuis (2012), we do not use 1 January as the start of the first season; we let a season wrap around the New Year, possibly reducing the number of seasons. We assume a constant threshold crossing rate within a season and fit a GPD. Call the latter model 0. Then, to accommodate changes in the distribution of the size of the excesses over time, we consider models with possibly time-varying parameters δ and ξ for data from each season. More specifically, consider a season delimited by days τj−1 and τj, and let uz,j be a high threshold. For {Zt} with τj−1 < d(t) ≤ τj we model (Zt − uz,j) | Zt > uz,j ~ GPD[δj(t), ξj(t)], where

  1. model 1 allows for one change in the value of ξ: ξj(t) = ξj0 + ξj1 × I(t > ηj) and δj(t) = δj;

  2. model 2 allows for one change in the value of δ: δj(t) = δj0 + δj1 × I(t > ηj) and ξj(t) = ξj;

  3. model 3 allows for one change in the values of both ξ and δ: δj(t) = δj0 + δj1 × I(t > ηj) and ξj(t) = ξj0 + ξj1 × I(t > ηj); and

  4. model 4 allows a log-linear trend for δ and one change in the value of ξ: δj(t) = exp (γjt) and ξj(t) = ξj0 + ξj1 × I(t > ηj),

and select the model based on likelihood considerations. The parameter ηj is referred to as the change-year. Model 1 allows for a change in the shape parameter after year ηj while keeping the scale parameter constant for all years; model 2 allows for a change in the scale parameter after year ηj while keeping the shape parameter constant for all years; model 3 allows for a change in both shape and scale after a common year ηj; and model 4 allows for a change in the shape parameter after year ηj while letting the scale parameter change smoothly over all years. The analysis of the extreme temperature data in section 4 shows that residuals from the more complex preprocessing in section 3c require less seasonal and time-dependent modeling than those from the preprocessed daily maximum temperatures in Dupuis (2012), most likely the result of a more complete preprocessing. Seasons and GPD parameters were estimated based on the top 2% of the residuals. Our conclusions are only mildly sensitive to threshold selection as long as a sufficiently high threshold is chosen.
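As an illustration only (not the paper's code), the sketch below fits a simplified version of model 2, in which the GPD scale changes after a candidate change-year η while the shape is held constant, and profiles the likelihood over the candidate change-years; the likelihood-based selection among models 0–4 would proceed along similar lines.

    # Sketch: fit "model 2" (GPD scale shifts after change-year eta, shape constant) by
    # maximizing the likelihood over (delta0, delta1, xi) for each candidate eta.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import genpareto

    def negloglik(params, excess, after):
        delta0, delta1, xi = params
        scale = np.where(after, delta0 + delta1, delta0)       # scale shifts after eta
        if np.any(scale <= 0):
            return np.inf
        ll = genpareto.logpdf(excess, xi, scale=scale)
        return np.inf if np.any(~np.isfinite(ll)) else -ll.sum()

    def fit_model2(excess, years, candidate_etas):
        best = None
        for eta in candidate_etas:
            after = years > eta
            res = minimize(negloglik, x0=[np.std(excess), 0.0, 0.1],
                           args=(excess, after), method="Nelder-Mead")
            if best is None or res.fun < best[1]:
                best = (eta, res.fun, res.x)
        return best   # (change-year, neg. log-likelihood, [delta0, delta1, xi])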

f. Multivariate issues

The models in sections 3c and 3e are fitted to data from each location. We have data at 12 different locations over a large geographical area, but some locations are less than 200 km apart. While nighttime minimum temperature Yt levels could exhibit some spatial correlation, our interest is not in the correlation among the Yt levels, which is somewhat assured by geographical similarities and proximity, but rather in the dependence among abnormal shocks (i.e., seasonally adjusted differences at the locations). While we do not attempt to build a multivariate model, examining the joint behavior of temperature shocks [i.e., the Zt in (2)–(6)] provides additional insight into our data.

Let (W1, W2) have joint distribution F and marginal distributions F1 and F2, respectively. Spearman's rank correlation is defined by ρS = ρ[F1(W1), F2(W2)], where ρ is the usual Pearson correlation. Spearman's ρS is invariant to strictly increasing transformations and is a good measure of overall dependence between temperature shocks from two locations.

When analyzing extreme values, we are more interested in the amount of tail dependence, that is, dependence in the upper quadrant tail or lower quadrant tail of the bivariate distribution. For example, when there is an abnormally high positive shock at one location, is it likely that there is also an abnormally high positive shock at a neighboring location? We look at

 
$$\chi \;=\; \lim_{u \to 1} \Pr\bigl\{F_2(W_2) > u \,\big|\, F_1(W_1) > u\bigr\}.$$

If χ = 0, then the variables W1 and W2 are said to be asymptotically independent. If χ > 0, then the variables are said to be asymptotically dependent and χ provides a measure of the strength of dependence. To get an estimate of χ, we can look at

 
$$\chi(u) \;=\; 2 - \frac{\log \Pr\bigl\{F_1(W_1) < u,\; F_2(W_2) < u\bigr\}}{\log u}$$

for 0 < u < 1, where χ = limu→1 χ(u). We obtain empirical estimates of χ(u) by replacing probabilities with observed proportions [see, e.g., section 8.4 of Coles (2001)]. We check for asymptotic dependence both in the upper quadrant tail, using the definition above, and in the lower quadrant tail, using an analogous definition. In the presence of asymptotic dependence, the latter would need to be accounted for in any multivariate model used to compute the probability that abnormally cold (or warm) nighttime minimum temperatures are observed simultaneously at several locations.
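A small sketch of these two summaries, on synthetic stand-in series rather than the paper's residuals, could look as follows; it is rank based, matching the definitions above.

    # Sketch only: Spearman's rho between two residual series and the empirical chi(u)
    # used to probe tail dependence [section 8.4 of Coles (2001)]; synthetic data stand in
    # for the estimated residuals Z_t from two locations.
    import numpy as np
    from scipy.stats import spearmanr, rankdata

    def chi_u(z1, z2, u):
        """Empirical chi(u): 2 - log Pr(U1 < u, U2 < u) / log u, with U the rank transforms."""
        u1 = rankdata(z1) / (len(z1) + 1.0)
        u2 = rankdata(z2) / (len(z2) + 1.0)
        return 2.0 - np.log(np.mean((u1 < u) & (u2 < u))) / np.log(u)

    rng = np.random.default_rng(1)
    z1 = rng.standard_normal(20000)
    z2 = 0.5 * z1 + np.sqrt(0.75) * rng.standard_normal(20000)      # moderately dependent stand-ins

    rho_s, _ = spearmanr(z1, z2)                                    # overall dependence
    chis = [chi_u(z1, z2, u) for u in np.linspace(0.80, 0.99, 20)]  # upper-tail dependence profile
    # For lower-tail dependence, apply chi_u to (-z1, -z2).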

g. Goodness of fit

Estimated marginal densities of nighttime minimum temperatures resulting from the models in sections 3c and 3e can only be obtained through Monte Carlo simulation; the simulation also yields confidence interval estimates for quantities such as return levels. We produce a quantile–quantile (QQ) plot for marginal return levels to assess the fit. The goodness of fit of the conditional distribution can be assessed qualitatively using a Kolmogorov–Smirnov (K-S) statistic; the details are given in appendix C.

4. Analysis of southwestern U.S. data

The models from sections 3c and 3e are fitted to the data from the 12 southwestern U.S. locations following the estimation procedures detailed in sections 3d and 3e. Some pertinent estimates for the regression model (3) are shown in Table 1. The model for the seasonal trend function S(t) captures 58%–88% of the variability in the nighttime minimum temperatures. This range represents a considerable spread over the limited geographical area covered by these data and shows the complexity and diversity of the series at these locations. The range of the estimated constant a, which represents a decadal mean for the 1950s, also shows the extent to which the mean level of nighttime minimum temperature varies across the 12 locations. For 7 locations, Fourier terms up to order 4 are sufficient to model the seasonal trend in the mean; others require up to order 6 or 7.

Table 1.

Linear regression model for S(t). Standard errors are given in parentheses.

After estimating daily and yearly variance components according to steps 2 and 3, the AR-EGARCH(1,1) models with seasonal autoregressive parameters from Eqs. (4)–(6) were fitted, and parameter estimates from the BIC-chosen models appear in Table 2. Very few estimated parameters are not significantly different from 0. The autoregressive behavior is somewhat similar across all regions (e.g., all locations require seasonal components for at least the first lag), but Fourier terms are present in the third lag only in Bakersfield, while Grand Junction and Winslow are the only locations that require third-order Fourier terms in both the first and second lags. The presence of significant autoregressive terms at many lags indicates that normalized daily shocks are serially dependent over many days. This is consistent with the duration of weather systems. The lag-1 coefficients are large, varying from 0.62 to 0.82 over the 12 locations, indicating a strong one-day carryover. The presence of significant Fourier terms (for the lag-1 and/or lag-2 autocorrelation) indicates seasonal autocorrelation; that is, the level of autocorrelation is not constant throughout the year, as was anticipated following Fig. 3. EGARCH(1,1) models are chosen for all locations except Phoenix. The estimated β parameter varies from 0.79 to 0.97, and Los Angeles shows the largest persistence on the West Coast with β = 0.96; that is, volatility takes longer to die out following very unexpected nighttime minimum temperatures in Los Angeles than at other West Coast locations. Models with a significant leverage parameter γ are chosen for 8 of the 12 locations. The model for Grand Junction has no leverage parameter and the largest estimated persistence. Only Fresno and Phoenix have γ < 0, and both stand out for other reasons: Fresno has the smallest persistence and Phoenix has the only EGARCH(1,0). The estimated leverage parameter for Winslow is 0.28, and the result, a larger increase in volatility following a large positive residual than following a large negative residual, can be seen in Fig. 6. The effect of the persistence parameter β can also be seen in this figure, as the volatility remained high for about three weeks in August after the two large cold shocks in late July. San Francisco and San Diego have the largest positive estimated leverage parameters, at 0.48 and 0.44, respectively. This means that, of all the locations, these are where cold shocks increase nighttime minimum temperature variability the most relative to equivalent warm shocks. This is likely due to the ocean to the west and the changes in landform to the east. The ocean has a high thermal inertia and tends to keep the temperature within a rather narrow range, while the very dry inland is subject to significant variations even within a single day. In general, westerly winds, or light winds, dominate. When the winds change direction, advection of more extreme inland temperatures can result. The topography of the region makes it such that this effect, in both magnitude and duration, is greater during the night. As a change of a few degrees in the wind direction can have a large impact, it is not surprising that the coastal locations do not all show exactly the same effect.

Table 2.

AR-EGARCH preprocessing: Parameter and standard error estimates (given in parentheses) for BIC-chosen models. Estimates that are not significantly different from 0 at the 5% level appear in italics.

Estimated Spearman correlations for the calculated residuals Zt at all pairs of locations are shown in Table 3. The largest values are found for locations that are separated by short distances: 0.64 for Bakersfield and Fresno, and 0.63 for Ely and Elko; however, short distances do not necessarily translate into large correlations, as we find only 0.33 for Fresno and San Francisco, and 0.34 for Bakersfield and Los Angeles. Empirical estimates of χ(u) for upper and lower tail dependence of residuals are shown in Figs. 7 and 8, respectively. We see that, while the Bakersfield–Fresno and Ely–Elko pairs had the largest overall estimated dependence in Table 3, there is no evidence of upper or lower tail dependence, as the confidence bands fail to include χ = 1 in the top two plots of Figs. 7 and 8, respectively. Upper tail dependence is a possibility for the Ely–Salt Lake City and Phoenix–Tucson pairs, although the estimated χ(u) remains relatively small at all levels. The largest evidence for tail dependence appears in the lower tail of the Ely–Salt Lake City and Phoenix–Tucson pairs. For the first pair, this indicates that when nighttime minimum temperatures are unseasonably warm in Ely, they are likely to be unseasonably warm in Salt Lake City as well. A similar tail dependence exists for Phoenix–Tucson. While we do not pursue multivariate modeling here, these results show that quite general and flexible multivariate models would be necessary to capture the range of behavior. Given the weak level of dependence, the ample data at each location, and the additional computational cost, we do not try to exploit any dependence, general or tail, by linking parameters from two locations and performing a joint estimation.

Table 3.

Spearman correlations of estimated residuals Zt from model (2)–(6).
Fig. 7.

Empirical estimates of χ(u) for upper-tail dependence of estimated residuals Zt from model (2)–(6) for the selected pairs listed above each panel.

Fig. 8.

Empirical estimates of χ(u) for lower-tail dependence of estimated residuals Zt from model (2)–(6) for the selected pairs in Fig. 7.

Estimated parameters for the extreme value models in section 3e fitted to residuals are shown in Table 4. All locations showed seasonal behavior in the residuals and two to four seasons were identified in each location. For half of these seasons, the distribution of residuals did not show significant changes in shape or scale over the 60 yr, indicating that any changes in the tail of the distribution of nighttime minimum temperatures over the period had been captured by the first- and second-order preprocessing. For the remaining 18 seasons, the tail of the residuals got longer in only four cases: winter months in Bakersfield and Los Angeles, early spring in San Diego, and summer in Los Angeles. The scaled residuals show no evidence of serial correlation. We also verify the level of dependence at extreme levels by looking for any evidence of clustering within the extreme values of Zt. Estimated values of the extremal index θ (Leadbetter 1983) following the intervals estimator of Ferro and Segers (2003) are listed in Table 4. In general, there is very weak dependence at the extreme levels (dependence at this extreme level is negligible when θ = 1) with the strongest dependence appearing over the very warm summer months in Phoenix. We nonetheless account for this dependence when simulating from the fitted models.
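For reference, the intervals estimator of Ferro and Segers (2003) can be computed directly from the interexceedance times; the sketch below follows the published two-case formula and is not the paper's code.

    # Sketch: Ferro and Segers (2003) intervals estimator of the extremal index theta.
    import numpy as np

    def intervals_estimator(z, u):
        """Estimate theta from the interexceedance times of threshold u in the series z."""
        times = np.flatnonzero(np.asarray(z) > u) + 1      # exceedance times (1-based)
        T = np.diff(times)                                 # interexceedance times
        if T.size == 0:
            return 1.0
        if T.max() <= 2:
            theta = 2.0 * T.sum() ** 2 / (T.size * np.sum(T ** 2))
        else:
            theta = 2.0 * np.sum(T - 1.0) ** 2 / (T.size * np.sum((T - 1.0) * (T - 2.0)))
        return min(1.0, theta)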

Table 4.

Extreme value models for residuals: estimated change-points, selected models, change-years, estimated shape parameters, and estimated extremal index. For example, for Ely two seasons were identified: season 1 runs from the 296th day of the year to the 46th, and season 2 runs from the 47th day of the year to the 295th. Model 0 was adequate to describe residuals in season 1, while model 3 with a change-year in 1985 was necessary to describe residuals in season 2. For the latter, the estimated shape parameter ξ was −0.04 before 1985 and −0.16 from 1985 onward. Estimated extremal index was 0.99 and 1 in seasons 1 and 2, respectively. Standard errors are given in parentheses.

Figure 9 shows QQ plots of observed and estimated nighttime minimum temperatures. The plots focus on the lower tail and also show 90% and 95% confidence bounds. The plots indicate an excellent fit for many locations and acceptable fits at every location. Figure 10 shows K-S test statistic values computed as described in appendix C. The K-S test statistics are generally low (white or pale gray) in the fall and winter months, so our model is correctly capturing the dynamics for these cold seasons when nighttime minimum temperatures are at their lowest values. The summer months in coastal San Francisco, Los Angeles, and San Diego, and to a certain extent the month of July in Ely and Elko, are more problematic (dark gray or black). The strongest (relative) indication of lack of fit of the conditional model for these periods/locations is where we have a prolonged pattern of dark gray or black. The lack of fit of the conditional model is not indicative of a lack of fit of the marginal models, as the QQ plots for May–August for San Francisco in Fig. 11 show. The lack of fit of the conditional model would be problematic if we were to use the model to assess runs of nighttime minimum temperatures (e.g., to compute heat waves defined in terms of consecutive nights with nighttime minimum temperature above a given level), but it does not invalidate the inferences that we make in the remainder of this section.

Fig. 9.

The QQ plots for nighttime minimum temperature using models in sections 3c and 3e and data from 1950 to 2009 for the station named above each panel. Bottom 22% of data are shown. The 90% and 95% confidence bounds, and the 45° line indicating perfect fit, also appear.

Fig. 10.

Kolmogorov–Smirnov goodness-of-fit statistic by month and year for the location named above each panel, for forecasts based on the models in sections 3c and 3e and all days in the 1970–2009 period. Forecasts are based on a 20-yr moving window.

Fig. 11.

The QQ plots for nighttime minimum temperature, May–August, for San Francisco, using the models in sections 3c and 3e, based on 1950–2009 data. Bottom 22% of data are shown. The 95% confidence bounds and the 45° line indicating perfect fit also appear.

Estimated yearly standard deviation parameters Ωl are shown in Fig. 12 along with a Friedman (1984) supersmoother. The latter is based on local-linear k-nearest-neighbor fits in a variable neighborhood of the year and reveals any trends. We can see that the estimated Ωl parameter decreases over time at many locations. This reduced variability has an impact on the marginal density of nighttime minimum temperature, as do any changes over time in the mean level or in the distribution of the shocks. To get a better idea of how any and all changes impact the marginal distribution of the nighttime minimum temperature, we examine a specific period at a given location. An interesting example is the month of December in Los Angeles. We can see in Table 4 that, for Los Angeles, the month of December falls completely within season 1 and that the tail of the residuals got longer as of 1971: the estimated shape parameter goes from −0.41 to −0.15. A lengthening of the residual tail favors a decrease in nighttime minimum temperature. Table 5 shows the estimated mean trend parameters from model (3), and we see that, for Los Angeles, every decade had an increase in mean nighttime minimum temperature with respect to the 1950–59 period. The plot for Los Angeles in Fig. 12 shows that the yearly standard deviation parameter decreases over time. These changes in mean and variability favor an increase in nighttime minimum temperature. The bottom-right panel of Fig. 13 shows the estimated marginal density for nighttime minimum temperatures for December in Los Angeles according to the models in sections 3c and 3e. The lengthening of the tail of the shocks does not compensate for the increase in the mean trend and the decrease in the standard deviation component. We can see that there is considerably less probability in the lower tail for 2009. Observed nighttime minimum temperatures are also shown in Fig. 13, and they corroborate the density estimates.

Fig. 12.

Estimated yearly standard deviation parameter Ωl for the stations named above each panel. The solid line is the Friedman (1984) supersmoother.

Table 5.

Estimated mean trend parameters in seasonal mean S(t) following Eq. (3). Standard errors are given in parentheses.
Fig. 13.

Los Angeles observed nighttime minimum temperatures for December (top left) 1950, (top right) 1980, and (bottom left) 2009. (bottom right) Estimated marginal densities for December 1950, 1980, and 2009 following the models in sections 3c and 3e.

The case of San Francisco is also very interesting. We can see in Fig. 12 and Table 5 that San Francisco has experienced decreases in variability and increases in the mean similar to those in Los Angeles. The increases in the mean are even more pronounced than in Los Angeles for the last two decades. Table 4 shows that the residual tail from day 121 to day 260 got shorter as of 1995: the estimated shape parameter goes from −0.13 to −0.49. Figure 14 shows the estimated marginal density for nighttime minimum temperatures for the months of May through August. Nighttime minimum temperatures for these months have suffered the same fate: considerable warming.

Fig. 14.

San Francisco estimated marginal densities for May, June, July, and August of 1950, 1980, and 2009 following the models in sections 3c and 3e.

Not all locations have experienced an increase in nighttime minimum temperatures over the 60-yr period, however, and we look more closely at sparsely populated Winslow (9409 people sharing 12.3 square miles; U.S. Census Bureau 2013). Estimates in Table 5 show decreases in mean levels in four of the five decades with regard to the 1950–59 period. Standard deviation estimates in Fig. 12 actually show an upward trend over the last two decades. Shape parameter estimates in Table 4 show only a mild shortening of the residual tail in the late winter–spring period and no change for the rest of the year over the 60-yr period. Figure 15 shows the estimated marginal density for nighttime minimum temperatures for the months of September through December. We can see that the distribution of nighttime minimum temperatures has changed very little for these four months in Winslow and that the increased peakedness for December 2009 actually favors cooler nighttime minimum temperatures.

Fig. 15.

As in Fig. 14, but for Winslow.

Table 6 shows the estimated mean, standard deviation, and skewness of the marginal density for the months of March, June, September, and December in 1950, 1980, and 2009 for all 12 locations using the models in sections 3c and 3e. Many interesting patterns may be observed, and we highlight but a few. First, the reductions in marginal standard deviation over the years in Los Angeles observed for December in Fig. 13 are also present for March, June, and September, although December has seen the greatest reductions. Second, the changes in March, September, and December in San Francisco are similar in nature to those observed in Fig. 14. Third, the lack of change in Grand Junction is similar to that seen for Winslow in Fig. 15, except for reductions in skewness for December that only occur in Grand Junction. Finally, the largest increases in the marginal means are seen in Phoenix, where there have also been considerable reductions in standard deviation for December, as in Los Angeles (and Salt Lake City). Fresno has also seen large increases in marginal means in all months, but with very little change in the standard deviations or skewness.

Table 6.

For each location, estimated mean (top), standard deviation (middle), and skewness (bottom) of the marginal density for the nighttime minimum temperature for 1950 (left), 1980 (center), and 2009 (right), respectively, for March, June, September, and December following models in sections 3c and 3e. Means and standard deviations are in degrees Fahrenheit.

5. Discussion

It is well known that, compared to nonurban areas, urban heat islands raise nighttime temperatures more than daytime temperatures (Houghton et al. 2001). The impact of urban heat islands on nighttime minimum temperatures in the southwestern United States is undeniable, and our model and analysis allow for a proper quantification of the effect.

Many authors (e.g., Alexander et al. 2006; Donat and Alexander 2012; Simolo et al. 2011) have attempted to show how changes to the temperature distribution have changed the probability of extreme temperatures. Donat and Alexander (2012) use a global dataset of daily maximum and minimum temperature anomalies to show whether and how the distribution of temperature has shifted over the past 60 yr. They seek to determine whether changes in the extremes are the result of a shifting mean or of changes to higher moments of the temperature distribution. Simolo et al. (2011) have similar objectives. Our goal is different: we focus on the distribution of the minimum itself and show how its moments have changed over the years. We also model the data, whereas previous authors have taken empirical approaches. Our model aptly summarizes the distribution of nighttime minimum temperature as a function of time, reaffirming past empirical analyses and providing true insight into the nature of any changes.

Convergence to asymptotic normality for the MLE of ξ is very slow, and profile likelihood intervals should be computed to reflect the true uncertainty in the estimates of ξ that appear in Table 4; nonetheless, even very liberal ±2 standard error calculations leave no heavy tails (ξ > 0), and many estimates of ξ are not significantly different from 0. The scaled residuals Zt of a GARCH process can have substantially thinner tails than the original Yt; however, as GARCH processes are known to be Pareto-like and heavy tailed (Basrak et al. 2002), the conditional nature of the variance is able to induce heavy tails in the marginal distribution even if the shocks are light tailed.

Finding a better model for the dynamics of summertime nighttime minimum temperatures in San Francisco, Los Angeles, and San Diego is an important avenue for future research. As nighttime minimum temperatures have seen greater increases than daytime maximum temperatures, and global climate models call for more of the same, it will be important to include nighttime minimum temperatures in heat wave definitions, as is done, for example, by the National Weather Service to issue an excessive heat watch. The day-to-day dynamics of observed data must be properly modeled, and global climate models must be able to reproduce them, if the latter are to be used to make predictions about future increases in the length and intensity of heat waves, and not just about increases in the number of warm nights.

We did not set out to find a multivariate space–time model for the southwestern U.S. data, but the marginal and multivariate analyses herein show that these data have many complex features. They represent a challenge that current space–time models are far from meeting. See Huser and Davison (2014) for a good review of space–time modeling of extreme events.

Acknowledgments

The author acknowledges the support of the Natural Sciences and Engineering Research Council of Canada. The author also wishes to thank Gerard Croteau from the Canadian Meteorological Centre for helpful insights and three anonymous referees for very constructive comments that helped improve a previous version of the manuscript.

APPENDIX A

Properties of the Estimator

To simplify the notation, assume that for all k, for all l, and that daily variances are computed following (7) even though the modifications described in section 3d had to be implemented for stationarity purposes. The vector ψ = (λ, ζ, Ω, θ)T contains K + 2H1 parameters in λ, 365 parameters in ζ, N parameters in Ω, and the remaining parameters in θ. We have as many moment conditions as we have unknown parameters. Let there be K + 2H1 moment conditions g1(λ), 365 moment conditions g2(λ, ζ), N moment conditions g3(λ, ζ, Ω), and a corresponding set of moment conditions g4(λ, ζ, Ω, θ), comprising the vector

 
$$g(\psi) \;=\; \bigl[\,g_1(\lambda)^{\prime},\; g_2(\lambda,\zeta)^{\prime},\; g_3(\lambda,\zeta,\Omega)^{\prime},\; g_4(\lambda,\zeta,\Omega,\theta)^{\prime}\,\bigr]^{\prime}.$$

We denote the corresponding sample sums by g1M, g2M, g3M, and g4M, which together give gM. We consider the generalized method of moments (GMM) estimator of the parameter vector

 
$$\hat{\psi} \;=\; \arg\min_{\psi}\; g_M(\psi)^{\prime}\,\mathbf{W}\,g_M(\psi),$$

where W = I since the system is just identified and not overidentified (e.g., dependent variables from a previous step do not appear as explanatory variables in later steps). To solve this system, λ must solve the first set of equations, ζ must solve the second set conditional on the estimated value of λ, Ω must solve the third set conditional on the estimated values of λ and ζ, and finally θ must solve the fourth set conditional on the estimated values of λ, ζ, and Ω. Newey and McFadden (1994; theorem 6.1, p. 2178) show that if λ̂ and ζ̂ are consistent estimators of the true λ0 and ζ0, respectively, and the subset of gM that corresponds to these first two steps satisfies a number of standard regularity conditions, the resulting GMM estimator is consistent and asymptotically normal:

 
[asymptotic normality result for the step-1 and step-2 estimators]

where Λ = E[g1:2(ψ1:2)g1:2(ψ1:2)′], the 1:2 notation is used to indicate the (λ, ζ) part of g and ψ, and M is used to indicate the relevant sample size (which is not the same for all estimated parameters; see details in section 3d). Applying the two-step estimator results from Newey and McFadden (1994) two more times, we find

 
[asymptotic normality result for the full four-step estimator ψ̂]

where G = E[∂g(ψ)/∂ψ′] and ϒ = E[g(ψ)g(ψ)′]. The matrices G and ϒ can be consistently estimated by replacing expectations with sample averages and parameters with their estimates (see Hansen 1982).
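As an illustration of this plug-in calculation, the generic sketch below (not the paper's code; the moment function is a toy placeholder) replaces the expectations in G and ϒ with sample averages of the per-observation moment contributions and their numerical Jacobians.

```python
# A generic sketch of the sandwich covariance for a just-identified GMM
# estimator: G and Upsilon are estimated by sample averages of g_t(psi_hat)
# and its finite-difference Jacobian.  The moment function below is a toy
# placeholder, not the moment conditions of this paper.
import numpy as np

def sandwich_cov(g_t, psi_hat, data, eps=1e-6):
    """Estimate G^{-1} Upsilon (G^{-1})' / M for a just-identified system."""
    M = len(data)
    p = len(psi_hat)
    G = np.zeros((p, p))
    Upsilon = np.zeros((p, p))
    for x in data:
        g0 = g_t(psi_hat, x)
        Upsilon += np.outer(g0, g0) / M
        # forward-difference Jacobian of the moment contribution
        for j in range(p):
            psi_j = psi_hat.copy()
            psi_j[j] += eps
            G[:, j] += (g_t(psi_j, x) - g0) / (eps * M)
    Ginv = np.linalg.inv(G)
    return Ginv @ Upsilon @ Ginv.T / M

# Toy example: mean and variance of a normal sample via two moment conditions.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 3.0, size=5000)
psi_hat = np.array([data.mean(), data.var()])
g_t = lambda psi, x: np.array([x - psi[0], (x - psi[0]) ** 2 - psi[1]])
print(np.sqrt(np.diag(sandwich_cov(g_t, psi_hat, data))))  # standard errors
```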

Here, the GMM moment conditions are straightforward in steps 2 and 3, while they are taken to be the score functions in steps 1 and 4 where estimates are obtained by maximum likelihood and quasi-maximum likelihood, respectively. More precisely, the sample sums in steps 1 through 4 are

 
(A1) [sample sum g1M: least squares score for λ]
(A2) [sample sum g2M: moment conditions for the daily variance parameters ζ]
(A3) [sample sum g3M: moment conditions for the yearly variance parameters Ω]
(A4) [sample sum g4M: EGARCH quasi-maximum likelihood score for θ]

For the Newey and McFadden (1994) result to apply, the estimators of λ, ζ, Ω, and θ must be consistent for the true parameter values at each stage. In step 1, least squares estimates are consistent even under heterogeneous errors as long as their variance is finite (see section 9.2 of Greene 2012). Consistency in steps 2 and 3 follows from the well-known result that, when sampling from a stationary ergodic distribution, the sample mean is a consistent estimate of the expected value. As mentioned in section 3d, the residuals are not stationary over the N years for our temperature data. To ensure that samples are drawn from a stationary ergodic distribution, the sum in (A2) is actually taken over a 10-yr period and decadal-specific estimates are found. Variance estimates based on 10 points are highly variable, so we pool data by assuming that variances of days within the same week are equal, which is quite reasonable from a physical–climatological point of view. Estimates of ζ1, …, ζ365 are thus found and used in g3M. It is also quite reasonable to assume that the daily-scaled residuals in (A3) are stationary within a year, so that consistent estimates of Ω1, …, ΩN are obtained. Consistency of the quasi-maximum likelihood estimator of θ in step 4 follows from Meitz and Saikkonen (2011). The presence of an EGARCH term complicates the computation of derivatives considerably, and the computation of the matrix G is not a trivial exercise. Some details are provided in appendix B.
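The decadal pooling of weekly variances described above can be sketched as follows; the data layout (a years × 365 array of residuals) and the function name are assumptions made for illustration, not the code used in this study.

```python
# A minimal sketch of the pooling step: residuals indexed by (year, day of
# year 1..365) are grouped by decade, and within each decade the daily
# variances zeta_1..zeta_365 are estimated by pooling the 7 days of the same
# week (about 70 points per estimate; the final partial week pools fewer).
import numpy as np

def daily_variances(resid):
    """resid: array of shape (n_years, 365). Returns (n_decades, 365) variances."""
    n_years, _ = resid.shape
    n_dec = n_years // 10
    zeta = np.empty((n_dec, 365))
    for d in range(n_dec):
        block = resid[10 * d:10 * (d + 1)]           # one decade of residuals
        for w in range(53):                          # weeks of the year
            days = np.arange(7 * w, min(7 * (w + 1), 365))
            if days.size == 0:
                continue
            pooled = block[:, days].ravel()          # pooled residuals for the week
            zeta[d, days] = pooled.var(ddof=1)       # shared weekly variance
    return zeta

# Example with synthetic residuals for 60 years.
rng = np.random.default_rng(2)
zeta_hat = daily_variances(rng.standard_normal((60, 365)))
print(zeta_hat.shape)   # (6, 365): decadal-specific estimates of zeta_1..zeta_365
```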

APPENDIX B

Computing Standard Errors

We need to compute the matrices G = E[∂g(ψ)/∂ψ′] and ϒ = E[g(ψ)g(ψ)′]. These matrices are consistently estimated using (A1)–(A4) and the parameter estimates. Partial derivatives and quadratic terms involving (A1)–(A3) are straightforward. The use of the EGARCH model (6) for the conditional variance term in (A4) leads to more tedious calculations for the corresponding derivative terms. Some details are provided here.

We have where and St, ϕr,t, and are as in (3), (5), and (6), respectively. We thus have and letting we write

 
formula

Let . Consider and use the notation . We have

 
formula

where is calculated recursively following

 
formula

For ψlψ we then have

 
formula

where is calculated recursively following

 
formula
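Because recursions of this kind are easy to implement incorrectly, one practical safeguard (not part of the original derivation) is to check the analytic scores against finite-difference gradients of the Gaussian quasi-log-likelihood. The sketch below uses a generic EGARCH(1,1) parameterization in the spirit of Nelson (1991); it is not claimed to match (6) exactly.

```python
# A minimal sketch, assuming a generic EGARCH(1,1): finite-difference gradients
# of the Gaussian quasi-log-likelihood provide a numerical check on analytic
# derivative recursions of the log conditional variance.
import numpy as np

def egarch_quasi_loglik(theta, y):
    """Gaussian quasi-log-likelihood of a generic EGARCH(1,1)."""
    omega, alpha, gamma, beta = theta
    n = len(y)
    logh = np.empty(n)
    logh[0] = np.log(y.var())                       # crude initialization
    for t in range(1, n):
        z = y[t - 1] / np.exp(0.5 * logh[t - 1])    # standardized shock
        logh[t] = (omega + beta * logh[t - 1]
                   + alpha * (abs(z) - np.sqrt(2 / np.pi)) + gamma * z)
    return -0.5 * np.sum(logh + y ** 2 / np.exp(logh) + np.log(2 * np.pi))

def numerical_score(theta, y, eps=1e-5):
    """Central finite-difference gradient, to compare with analytic recursions."""
    grad = np.zeros_like(theta)
    for j in range(len(theta)):
        up, dn = theta.copy(), theta.copy()
        up[j] += eps
        dn[j] -= eps
        grad[j] = (egarch_quasi_loglik(up, y) - egarch_quasi_loglik(dn, y)) / (2 * eps)
    return grad

rng = np.random.default_rng(3)
y = rng.standard_normal(1000)                       # stand-in for residuals
print(numerical_score(np.array([0.0, 0.1, -0.05, 0.9]), y))
```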

APPENDIX C

Goodness of Fit Issues

The goodness of fit of the conditional distribution can be assessed through an out-of-sample approach. Let Ft and yt denote the forecast distribution and the outcome, respectively. Using 20 yr of data up to day t, we estimate the forecasting distribution for day t + 1. Following Dawid (1984) and Diebold et al. (1998), we compute the value of the probability integral transform (PIT), pt = Ft(yt), and use it to assess the adequacy of the fit of the conditional distribution. If the outcome is drawn from a true distribution Gt, then pt is uniformly distributed whenever Ft = Gt, since our Ft is continuous. Computing the PIT for each day over 40 yr and 12 locations takes several weeks on a single processor, but the task distributes well over several processors. Aggregating the PIT values over different periods allows us to check the adequacy of the fit. Space limitations do not allow us to present histograms for different periods at all 12 locations, so we opt to compute a Kolmogorov–Smirnov (K-S) test statistic for every month and every year at every location. We cannot carry out a formal K-S test, as the use of estimated parameters to form the forecasting distribution invalidates the usual sampling distribution of the K-S statistic. In principle, one could carry out Monte Carlo simulations to obtain the correct sampling distribution when estimated parameters are used, but the computational effort required for our many-parameter multipart model is prohibitive. We can, however, interpret the K-S values relative to one another, with the largest values indicating the poorest fit. This approach helps us isolate any problematic periods.
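The sketch below illustrates the PIT and K-S machinery described above with a placeholder rolling-window normal forecast standing in for the full model's forecasting distribution Ft; it is meant only to show the mechanics, not to reproduce this study's computation.

```python
# A minimal sketch of the out-of-sample PIT check, with a placeholder normal
# forecast distribution; the paper's F_t comes from the full preprocessing +
# EGARCH + exceedance model re-estimated on the 20 yr of data up to day t.
import numpy as np
from scipy.stats import kstest, norm

def pit_series(y, window=20 * 365):
    """One-step-ahead PIT values p_t = F_t(y_t) using a rolling-window forecast."""
    pits = []
    for t in range(window, len(y)):
        hist = y[t - window:t]
        pits.append(norm.cdf(y[t], loc=hist.mean(), scale=hist.std()))
    return np.array(pits)

rng = np.random.default_rng(4)
y = rng.normal(size=25 * 365)              # stand-in for one location's series
pits = pit_series(y)

# K-S statistic against the uniform, grouped into crude 30-day "months";
# because estimated parameters enter F_t, the statistics are compared
# relatively (largest = poorest fit) rather than read off as formal tests.
months = (np.arange(len(pits)) // 30) % 12
ks_by_month = [kstest(pits[months == m], "uniform").statistic for m in range(12)]
print(np.round(ks_by_month, 3))
```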

REFERENCES

Alexander, L. V., and Coauthors, 2006: Global observed changes in daily climate extremes of temperature and precipitation. J. Geophys. Res., 111, D05109.
Alward, R. D., J. K. Detling, and D. G. Milchunas, 1999: Grassland vegetation changes and nocturnal global warming. Science, 283, 229–231.
Basrak, B., R. A. Davis, and T. Mikosch, 2002: Regular variation of GARCH processes. Stochastic Process. Appl., 99, 95–115.
Coles, S., 2001: An Introduction to Statistical Modeling of Extreme Values. Springer-Verlag, 208 pp.
Davison, A. C., and R. L. Smith, 1990: Models for exceedances over high thresholds. J. Roy. Stat. Soc., 52B, 393–442.
Dawid, A. P., 1984: Statistical theory: The prequential approach. J. Roy. Stat. Soc., 147A, 278–292.
Diebold, F. X., T. A. Gunther, and A. S. Tay, 1998: Evaluating density forecasts with applications to financial risk management. Int. Econ. Rev., 39, 863–883.
Donat, M. G., and L. V. Alexander, 2012: The shifting probability distribution of global daytime and night-time temperatures. Geophys. Res. Lett., 39, L14707.
Dupuis, D. J., 2012: Modeling waves of extreme temperature: The changing tails of four cities. J. Amer. Stat. Assoc., 107, 24–39.
Eastoe, E. F., and J. Tawn, 2009: Modelling non-stationary extremes with application to surface level ozone. J. Roy. Stat. Soc., 58 (1), 25–45.
Ferro, C. A. T., and J. Segers, 2003: Inference for clusters of extreme values. J. Roy. Stat. Soc., 65B, 545–556.
Friedman, J. H., 1984: A variable span smoother. Tech. Rep. 5, Laboratory for Computational Statistics, Dept. of Statistics, Stanford University, 30 pp.
Glosten, L. R., R. Jagannathan, and D. E. Runkle, 1993: On the relation between the expected value and the volatility of the nominal excess returns on stocks. J. Finance, 48, 1779–1801.
Greene, W. H., 2012: Econometric Analysis. 7th ed. Prentice Hall, 1232 pp.
Hansen, L. P., 1982: Large sample properties of generalized method of moments estimators. Econometrica, 50, 1029–1054.
Houghton, J. T., Y. Ding, D. J. Griggs, M. Noguer, P. J. van der Linden, X. Dai, K. Maskell, and C. A. Johnson, Eds., 2001: Climate Change 2001: The Scientific Basis. Cambridge University Press, 881 pp.
Huser, R., and A. C. Davison, 2014: Space–time modelling of extreme events. J. Roy. Stat. Soc., 76B, 439–461.
IRRI, cited 2013: Rice feels the heat. International Rice Research Institute. [Available online at http://irri.org/rice-today/rice-feels-the-heat.]
Jones, P. D., M. New, D. E. Parker, S. Martin, and I. G. Rigor, 1999: Surface air temperature and its changes over the past 150 years. Rev. Geophys., 37, 173–199.
Leadbetter, M. R., 1983: Extremes and local dependence in stationary sequences. Z. Wahrsch. Verw. Gebiete, 65, 291–306.
Meitz, M., and P. Saikkonen, 2011: Parameter estimation in nonlinear AR-GARCH models. Econ. Theory, 27, 1236–1278.
Morak, S., G. C. Hegerl, and N. Christidis, 2013: Detectable changes in the frequency of temperature extremes. J. Climate, 26, 1561–1574.
Nelson, D. B., 1991: Conditional heteroskedasticity in asset returns: A new approach. Econometrica, 59, 347–370.
Newey, W. K., and D. McFadden, 1994: Large sample estimation and hypothesis testing. Handbook of Econometrics, Vol. 4, D. McFadden and R. Engle, Eds., Elsevier, 2112–2245.
NOAA, 2013a: Comparative climatic data for the United States through 2012. U.S. Dept. of Commerce, 145 pp. [Available online at http://www1.ncdc.noaa.gov/pub/data/ccd-data/CCD-2012.pdf.]
NOAA, 2013b: Regional climate trends and scenarios for the U.S. National Climate Assessment: Part 5. Climate of the Southwest U.S. U.S. Dept. of Commerce, 87 pp. [Available online at http://www.nesdis.noaa.gov/technical_reports/NOAA_NESDIS_Tech_Report_142-5-Climate_of_the_Southwest_U.S.pdf.]
Orlowsky, B., and S. I. Seneviratne, 2012: Global changes in extreme events: Regional and seasonal dimension. Climatic Change, 110, 669–696.
Oryza, cited 2013: Rice yield must grow 2% annually to meet rising global demand. Oryza. [Available online at http://oryza.com/content/rice-yield-must-grow-2-annually-meet-rising-global-demand-says-irri-chief.]
Pachauri, R. K., and A. Reisinger, Eds., 2007: Climate Change 2007: Synthesis Report. Cambridge University Press, 104 pp.
Peng, S. B., and Coauthors, 2004: Rice yields decline with higher night temperature from global warming. Proc. Natl. Acad. Sci. USA, 101, 9971–9975.
Peng, S. S., and Coauthors, 2013: Asymmetric effects of daytime and night-time warming on Northern Hemisphere vegetation. Nature, 501, 88–92.
Piao, S., and Coauthors, 2008: Net carbon dioxide losses of northern ecosystems in response to autumn warming. Nature, 451, 49–52.
Pickands, J., III, 1975: Statistical inference using extreme order statistics. Ann. Stat., 3, 119–131.
Rowe, C. M., and L. E. Derry, 2012: Trends in record-breaking temperatures for the conterminous United States. Geophys. Res. Lett., 39, L16703.
Schwarz, G., 1978: Estimating the dimension of a model. Ann. Stat., 6, 461–464.
Simolo, C., M. Brunetti, M. Maugeri, and T. Nanni, 2011: Evolution of extreme temperatures in a warming climate. Geophys. Res. Lett., 38, L16701.
Smith, R. L., 1989: Extreme value analysis of environmental time series: An application to trend detection in ground-level ozone. Stat. Sci., 4, 367–377.
Taylor, J. W., 2006: Density forecasting for the efficient balancing of generation and consumption of electricity. Int. J. Forecast., 22, 707–724.
Turnbull, M. H., R. Murthy, and K. L. Griffin, 2002: The relative impacts of daytime and night-time warming on photosynthetic capacity in Populus deltoides. Plant Cell Environ., 25, 1729–1737.
U.S. Census Bureau, cited 2013: World POPClock Projection. [Available online at http://www.census.gov/population/popwnotes.html.]
Vose, R. S., D. R. Easterling, and B. Gleason, 2005: Maximum and minimum temperature trends for the globe: An update through 2004. Geophys. Res. Lett., 32, L23822.
Xia, J. Y., S. P. Chen, and S. G. Wan, 2010: Impacts of day versus night warming on soil microclimate: Results from a semiarid temperate steppe. Sci. Total Environ., 408, 2807–2816.

Footnotes

1. U.S. Census Bureau (2013) estimates that the world population reached 7 billion in 2012 and projects that the 8 billion mark will be reached in 2025.
2. As defined in NOAA (2013b).
3. Ely, Nevada, has an average of 218 days per year with a minimum temperature of 32°F (0°C) or less (NOAA 2013a).
4. All submodels (e.g., with and without leverage and with and without persistence) of the EGARCH(1,1) were considered.
5. Recall that the nighttime minimum temperature is −Yt, so an unseasonably cold observation yields a positive residual.
6. Recall that we are modeling the negative nighttime minimum temperature, so a longer tail for large positive residuals ultimately means colder nighttime minimum temperatures.