Abstract

The issues of downscaling the outputs of a global climate model (GCM) to a scale that is appropriate to hydrological impact studies are investigated using a temporal neural network approach. The time-lagged feed-forward neural network (TLFN) is proposed for downscaling daily total precipitation and daily maximum and minimum temperature series for the Serpent River watershed in northern Quebec (Canada). The downscaling models are developed and validated using large-scale predictor variables derived from the National Centers for Environmental Prediction–National Center for Atmospheric Research (NCEP–NCAR) reanalysis dataset. Atmospheric predictors such as specific humidity, wind velocity, and geopotential height are identified as the most relevant inputs to the downscaling models. The performance of the TLFN downscaling model is also compared to a statistical downscaling model (SDSM). The downscaling results suggest that the TLFN is an efficient method for downscaling both daily precipitation and temperature series. The best downscaling models were then applied to the outputs of the Canadian Global Climate Model (CGCM1), forced with the Intergovernmental Panel on Climate Change (IPCC) IS92a scenario. Changes in average precipitation between the current and the future scenarios predicted by the TLFN are generally found to be smaller than those predicted by the SDSM model. Furthermore, application of the downscaled data for hydrologic impact analysis in the Serpent River resulted in an overall increasing trend in mean annual flow as well as earlier spring peak flow. The results also underscore the importance of identifying appropriate downscaling tools for impact studies by showing how the same future climate scenario, downscaled with different methods, can lead to significantly different hydrologic impact simulation results for the same watershed.

1. Introduction

The mathematical models used to simulate the present climate and project future climate with forcing by greenhouse gases and aerosols are generally referred to as general circulation models or global climate models (GCMs). However, the spatial resolution of GCMs remains quite coarse, on the order of 300 km × 300 km, and, at that scale, the regional and local details of the climate that are influenced by spatial heterogeneities in the regional physiography are lost. GCMs are, therefore, inherently unable to represent local subgrid-scale features and dynamics, such as local topographical features and convective cloud processes (Wigley et al. 1990; Carter et al. 1994). As a result, GCM simulations of local climate at individual grid points are often poor, especially when the area has complex topography (Schubert 1998). There is no established theoretical level of spatial aggregation at which GCM output can be considered skillful, although there is evidence of skill at scales of several grid lengths (Widmann and Bretherton 2000). However, in most climate change impact studies, such as studies of the hydrological impacts of climate change, impact models are usually required to simulate subgrid-scale phenomena and, therefore, require input data (such as precipitation and temperature) at a similar subgrid scale. There is, therefore, a need to convert GCM outputs into, at the very least, reliable daily rainfall and temperature time series at the scale of the watershed for which the hydrological impact is to be investigated. The methods used to convert GCM outputs into local meteorological variables required for reliable hydrological modeling are usually referred to as “downscaling” techniques.

There are various downscaling techniques available that convert GCM outputs into daily meteorological variables appropriate for hydrologic impact studies. The most widely used statistical downscaling models usually implement linear methods, such as local scaling, multiple linear regression, canonical correlation analysis, or singular value decomposition (Salathe 2003; Schubert and Henderson-Sellers 1997; Conway et al. 1996). However, it is not yet clear which method provides the most reliable estimates of daily rainfall and temperature for the future horizon (e.g., Xu 1999; Schoof and Pryor 2001). Nevertheless, interest in nonlinear regression methods, namely artificial neural networks (ANNs), is increasing because of their high potential for complex, nonlinear, and time-varying input–output mapping. Although the weights of an ANN are similar to nonlinear regression coefficients, the unique structure of the network and the nonlinear transfer function associated with each hidden and output node allow ANNs to approximate highly nonlinear relationships. Moreover, while other regression techniques assume a functional form, ANNs allow the data to define the functional form. Therefore, ANNs are generally believed to be more powerful than other regression-based downscaling techniques (von Storch et al. 2000). The simplest form of ANN (i.e., the multilayer perceptron) is reported to give results similar to those of multiple regression downscaling methods (Schoof and Pryor 2001). Weichert and Burger (1998) reported that an ANN model can account for some heavy rainfall events that were not identified by a linear regression downscaling technique. Cannon and Whitfield (2002) also found that an ensemble ANN downscaling model was capable of predicting changes in streamflows using only large-scale atmospheric conditions as model input. Nevertheless, some studies have also shown that the standard ANN method commonly used for modeling hydrologic variables is not well suited to processing temporal sequences and often yields suboptimal solutions (Coulibaly et al. 2001a). There are, however, other categories of neural networks that have a memory structure to account for temporal relationships in the input–output mappings, and they appear more suitable for complex nonlinear system modeling (Gautam and Holz 2000; Coulibaly et al. 2001b). More recently, Tatli et al. (2004) proposed a Jordan-type recurrent neural network that uses not only large-scale predictors, but also the previous states of the relevant local-scale variables.

The purpose of this study is to identify optimal temporal neural networks that can capture the complex relationship between selected large-scale predictors and locally observed meteorological variables (or predictands). Therefore, the paper aims to highlight the applicability of temporal neural networks as downscaling methods for improving daily precipitation and temperature estimates at a particular location. The downscaling models are developed and validated using large-scale predictor variables derived from the National Centers for Environmental Prediction–National Center for Atmospheric Research (NCEP–NCAR) reanalysis dataset. The paper specifically focuses on the time-lagged feed-forward neural network (TLFN), which has temporal processing capabilities without resorting to complex and costly training methods. A major assumption in using the TLFN is that the local weather is conditioned not only by the present large-scale atmospheric state, but also by the past states. In addition, emphasis is given to evaluating and comparing the optimal TLFN method with the most commonly used multiple regression–based downscaling method, and the best models are applied to downscale the outputs of the Canadian Global Climate Model (CGCM1) forced with the Intergovernmental Panel on Climate Change (IPCC) IS92a scenario. The downscaling results are then used to evaluate the hydrologic impact of climate change on the Serpent River flows in northern Quebec, Canada. The remainder of the paper is organized as follows. Section 2 provides an overview of downscaling methods. Section 3 gives a brief description of the study area and the data used in this study. Section 4 introduces temporal neural networks, and comparative results from the downscaling experiments are reported and discussed in section 5. Section 6 presents the hydrological impact analysis results, and some concluding remarks are made in section 7.

2. Downscaling methods: An overview

Spatial downscaling is the means of relating the large-scale atmospheric predictor variables to local- or station-scale meteorological records that could be used as input to hydrological models. There are a variety of downscaling techniques in the literature, but two major approaches can be identified at the moment, namely, dynamic downscaling and empirical (statistical) downscaling. The dynamic downscaling approach is a method of extracting local-scale information by developing and using limited-area models (LAMs) or regional climate models (RCMs), with the coarse GCM data used as boundary conditions. The basic steps are then to use a GCM to simulate the response of the global circulation to large-scale forcings and a RCM to account for sub-GCM grid-scale forcings, such as complex topographical features and land cover inhomogeneity in a physically based way. RCMs have recently been developed that can attain horizontal resolution on the order of tens of kilometers or less over selected areas of interest (Xu 1999). Compared with GCMs, the resolution of these RCMs is much closer to that of distributed-parameter hydrological models, which even makes coupling of such models possible. However, while RCMs are the most informative downscaling approach, they also have several limitations. RCMs still require considerable computing resources and they are as expensive to run as GCMs (Xu 1999).

Empirical downscaling, on the other hand, starts with the premise that the regional climate is the result of the interplay of the overall atmospheric and oceanic circulation as well as of regional topography, land–sea distribution, and land use (von Storch et al. 2000). As such, empirical downscaling seeks to derive the local-scale information from the larger scale through inference from the cross-scale relationship, using some random and/or deterministic functions. In most cases, the regional climate is seen as a random process conditioned upon a driving large-scale climate regime. Therefore, the confidence that may be placed in downscaled climate change information depends foremost on the validity of the large-scale fields from the GCM. Formally, the concept of regional climate being conditioned by the large-scale state may be written as

 
$$R = F(L),$$

where R represents the predictand (a regional or local climate variable), L is the predictor (a set of large-scale climate variables), and F is a deterministic/stochastic function conditioned by L and has to be found empirically from observation or modeled datasets. The predictor value L may be taken at the same time as that of the predictand R or at some other time, based on some sort of correlation analysis.

When using downscaling for assessing regional climate change, three implicit assumptions are made (von Storch et al. 2000): 1) the predictors are variables of relevance and are realistically modeled by the GCM; 2) the predictors employed fully represent the climate change signal; and 3) the relationship remains valid under altered climate conditions (which may not be provable). A diverse range of empirical downscaling techniques has been developed over the past few years, and each method generally falls into one of three major categories, namely, regression (transfer function) methods (Wilby et al. 2002), stochastic weather generators (Semenov and Barrow 1997), and weather-typing schemes. Individual downscaling schemes differ according to the choice of the mathematical transfer function, predictor variables, or statistical fitting procedure. To date, linear and nonlinear regression, artificial neural networks, canonical correlation, and principal component analysis have all been used to derive predictor–predictand relationships (Conway et al. 1996; Xu 1999).

One of the well-recognized statistical downscaling tools that implements a regression-based method is the Statistical Downscaling Model (SDSM; Wilby et al. 2002). SDSM is used in this study as a benchmark model because it appears to be one of the most widely used models for precipitation and temperature downscaling. SDSM calculates statistical relationships, based on multiple linear regression techniques, between large-scale variables (the predictors) and local climate variables (the predictand). These relationships are developed using observed weather data and, assuming that they remain valid in the future, they can be used to obtain downscaled local information for some future time period by driving them with GCM-derived predictors. Moreover, different types of data transformations (e.g., logarithms, squares, cubes, fourth powers) can be applied to the standard predictor variables prior to downscaling model calibration to produce nonlinear regression models. Data series can also be shifted forward or backward by any number of time steps to produce lagged predictor variables. SDSM also allows the regression models to be built on a monthly or annual basis. While the main appeal of regression-based downscaling is the relative ease of application, these models often explain only a fraction of the observed climate variability (especially when the predictand is precipitation). To achieve the best possible downscaling performance, SDSM implements bias correction and variance inflation techniques that reduce the standard error of the estimate and increase the amount of variance explained by the model(s). However, there is always a risk associated with downscaling future extreme events using regression or neural network–based models that do not explicitly represent the underlying physical mechanisms, because these phenomena usually tend to lie at the margins of, or beyond, the range of the calibration dataset (Wilby et al. 2002).
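As a concrete illustration of this regression-based approach, the following is a minimal Python sketch of a monthly multiple-linear-regression downscaling model. It is not the SDSM software itself: the function names are invented for illustration, and the inflation argument shown here is a simple deterministic rescaling of anomalies that only loosely mimics SDSM's variance inflation.

import numpy as np

def fit_monthly_models(predictors, predictand, months):
    """Fit one least-squares regression per calendar month, in the spirit of
    SDSM's monthly model structure (illustrative sketch only).
    predictors: (n_days, n_vars); predictand, months: (n_days,) arrays."""
    models = {}
    for m in range(1, 13):
        idx = months == m
        X = np.column_stack([np.ones(idx.sum()), predictors[idx]])
        coef, *_ = np.linalg.lstsq(X, predictand[idx], rcond=None)
        models[m] = coef
    return models

def apply_monthly_models(models, predictors, months, inflation=1.0):
    """Downscale with the fitted monthly models; 'inflation' rescales the
    anomalies about each month's predicted mean, a crude stand-in for
    SDSM's variance inflation."""
    out = np.empty(len(months), dtype=float)
    for m, coef in models.items():
        idx = months == m
        X = np.column_stack([np.ones(idx.sum()), predictors[idx]])
        predicted = X @ coef
        out[idx] = predicted.mean() + inflation * (predicted - predicted.mean())
    return out

In practice, SDSM's variance inflation is stochastic and its parameters are set through its own calibration interface; the sketch only shows the deterministic regression core.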

3. Study area and data

The study area selected in this research for the application of the downscaling methods and the evaluation of the hydrologic impact of climate change is the Serpent River basin, located in the Saguenay–Lac Saint-Jean watershed (Fig. 1). The basin has an area of 1760 km2 and lies in the eastern part of the Saguenay watershed. The meteorological station at Chute-des-Passes (located at 49.9°N, 71.25°W) is the closest to the Serpent River basin; therefore, the meteorological data observed at this station are used for the downscaling experiments. Forty years of daily total precipitation (liquid and solid) as well as daily maximum and minimum temperature records representing the current climate (i.e., 1961–2000) were prepared for the downscaling experiments. Eleven years of historical flow data (1992–2002) for the Serpent River were also collected from the hydrometric station located at 49.7°N, 71.4°W. At the same time, observed daily data of the large-scale predictor variables representing the current climate condition of the region are derived from the NCEP–NCAR reanalysis dataset (Kistler et al. 2001).

Fig. 1. Location of the study area in northern Quebec (Canada).

Climate variables corresponding to the future climate change scenario for the study area are extracted from the CGCM1. The atmospheric component of the CGCM1 has a surface grid resolution of roughly 3.7° × 3.7° (400 km) and 10 vertical levels, while the ocean component has a resolution of roughly 1.8° × 1.8° (200 km) and 29 vertical levels (Hengeveld 2000). The future climate change considered in this study corresponds to the so-called business-as-usual scenario. Thus, the CGCM1 output used for this study is the result of the IPCC “IS92a” forcing scenario, in which the change in greenhouse gas forcing corresponds to that observed from 1900 to 1990 and increases at a rate of 1% yr−1 thereafter, until the year 2100. The direct effect of sulfate aerosols is also included. CGCM1 outputs at the closest grid point to the study area (50°N, 71°W) are used as inputs for the downscaling models. The data are divided into four distinct periods, namely, the current period (covering the 40-yr period between 1961 and 2000), the 2020s (2010–39), the 2050s (2040–69), and the 2080s (2070–99), to facilitate trend analysis. The possibility of using additional sets of CGCM1 outputs from adjacent grid points was also considered. However, this possibility was not pursued further because preliminary experiments showed no further improvement in the downscaling results, most probably because the Serpent River basin lies entirely under the single CGCM1 grid cell described earlier. The NCEP–NCAR derived predictor data have also been interpolated onto the same grid as that of the CGCM1. All predictors in these datasets (presented in Table 1), with the exception of wind direction, have been normalized with respect to the 1961–90 mean and standard deviation, and were made available by the Canadian Climate Impacts Scenarios (CCIS) project.
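As a small illustration of the predictor preprocessing described above, the following sketch standardizes each predictor series with respect to a baseline period's mean and standard deviation (here 1961–90); the function and variable names are assumptions made for illustration only.

import numpy as np

def standardize_predictors(series, baseline_mask):
    """Normalize each predictor column with respect to the baseline period
    (e.g., 1961-90) mean and standard deviation, as done for the CCIS
    predictor datasets. series: (n_days, n_vars); baseline_mask: (n_days,) bool."""
    mu = series[baseline_mask].mean(axis=0)
    sigma = series[baseline_mask].std(axis=0)
    return (series - mu) / sigma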

Table 1. Large-scale predictor variables obtained from the CGCM1 outputs.

4. Temporal neural network method

A neural network can, in general, be characterized by its architecture, which is represented by the network topology and the pattern of connections between the nodes, its method of determining the connection weights, and the activation functions that it employs. Multilayer perceptrons (MLPs), which constitute probably the most widely used network architecture, are composed of a hierarchy of processing units organized in a series of two or more mutually exclusive sets of neurons or layers. The information flow in the network is restricted to a flow, layer by layer, from the input to the output; hence, it is also called a feed-forward network. However, in temporal problems, measurements from physical systems are no longer an independent set of input samples, but are functions of time. To exploit the time structure in the inputs, the neural network must have access to this time dimension. While feed-forward neural networks are popular in many application areas, they are not well suited to processing such temporal sequences because they lack the time delay and/or feedback connections necessary to provide a dynamic model. Within the last decade, the study of ANNs has experienced a huge resurgence due to the development of more sophisticated algorithms and the emergence of powerful computational tools (ASCE Task Committee on Application of Artificial Neural Networks in Hydrology 2000). There are now various types of neural networks with internal memory structures that can store the past values of input variables through time. There are different ways of introducing “memory” into a neural network in order to develop a temporal neural network. TLFNs and recurrent neural networks (RNNs) are the two major groups of candidate dynamic neural networks most commonly used in time series analysis (Tatli et al. 2004; Coulibaly et al. 2001a, b; Dibike et al. 1999). However, the latter require complex training algorithms and, hence, are computationally costly. The analysis in this paper concerns temporal neural networks that can be easily trained for practical application.

a. Time-lagged feed-forward networks

A temporal neural network can be formulated by replacing the neurons in the input layer of an MLP with a memory structure, sometimes called a tap-delay line. This type of neural network is called a TLFN. The size of the memory layer (the tap delay) depends on the number of past samples needed to describe the input characteristics in time, and it has to be determined on a case-by-case basis. This facility appears particularly suitable for including lagged predictor variables in the downscaling procedure. A major assumption in the use of the TLFN is that the local weather is conditioned not only by the present large-scale atmospheric state, but also by the past states. The TLFN uses delay-line processing elements (PEs), which implement memory by delay, that is, by simply holding past samples of the input signal, as shown in Fig. 2. The output (y) of such a network with one hidden layer is given by

 
$$y(n) = \phi_1\!\left( \sum_{j=1}^{m} w_j\, \phi_2\!\left( \sum_{l=0}^{k} w_{jl}\, x(n-l) + b_j \right) + b_0 \right),$$

where m is the size of the hidden layer, n is the time step, $w_j$ is the weight vector for the connection between the hidden and output layers, $w_{jl}$ is the weight matrix for the connection between the input and hidden layers, $\phi_1$ and $\phi_2$ are the transfer functions at the output and hidden layers, respectively, and $b_j$ and $b_0$ are additional network parameters (often called biases) to be determined during training. For the case of multiple inputs (of size p), the delay line with a memory depth k can be represented by

 
$$\mathbf{X}(n) = \left[\, \mathbf{x}(n),\ \mathbf{x}(n-1),\ \ldots,\ \mathbf{x}(n-k) \,\right], \qquad \mathbf{x}(n) = \left[ x_1(n), x_2(n), \ldots, x_p(n) \right]^{\mathrm{T}},$$

where $\mathbf{x}(n)$ represents the input pattern at time step n, $x_j(n)$ is an individual input at the nth time step, and $\mathbf{X}(n)$ is the combined input to the processing elements at time step n. Such a delay line only “remembers” k samples in the past.
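A minimal numerical sketch of the forward pass defined by the equations above is given below, assuming hyperbolic tangent transfer functions at both layers (as used later in section 4b); the weight shapes and names are illustrative, and network training is not shown.

import numpy as np

def tlfn_forward(x, W_hidden, b_hidden, w_out, b_out, k):
    """Forward pass of a time-lagged feed-forward network.
    x: (n_steps, p) input series. The tap-delay line stacks the current and
    k past input vectors into one (k+1)*p vector fed to the hidden layer.
    W_hidden: (m, (k+1)*p); b_hidden: (m,); w_out: (m,); b_out: scalar."""
    n_steps, p = x.shape
    y = np.full(n_steps, np.nan)
    for n in range(k, n_steps):
        # X(n) = [x(n), x(n-1), ..., x(n-k)] flattened into a single vector
        X_n = x[n - k:n + 1][::-1].reshape(-1)
        hidden = np.tanh(W_hidden @ X_n + b_hidden)   # hidden-layer transfer function
        y[n] = np.tanh(w_out @ hidden + b_out)        # output-layer transfer function
    return y

The tanh output here implies the predictand has been scaled to the network's output range during preprocessing, which is common practice but an assumption on our part rather than a detail reported in the text.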

Fig. 2. TLFN with one input, one hidden layer, and a tap-delay line with k + 1 taps (z−1 is an operator that delays the input by one sample).

An interesting feature of the TLFN is that the tap-delay line at the input does not have any free parameters; therefore, the network can still be trained with the classical back-propagation algorithm. The TLFN topology has been successfully used in nonlinear system identification, time series prediction (e.g., Coulibaly et al. 2001b), temporal pattern recognition (e.g., Principe et al. 2000), and parallel hybrid modeling (Anctil et al. 2003). A major advantage of the TLFN is that it is less complex than the conventional time-delay and recurrent networks while having similar temporal pattern processing capabilities (Dibike et al. 1999; Coulibaly et al. 2001b).

b. Network design for the downscaling experiment

The neural network models in this study are developed using NeuroSolutions v4 (NeuroDimension, Inc., Gainesville, Florida). First, TLFNs with a different number of time lags (or time delays) and a variable number of hidden nodes are trained with all 22 predictor variables as input to the networks, and the best-performing network is selected. Then, the most relevant input variables (predictors) are identified by performing a sensitivity analysis on the selected TLFN. Sensitivity analysis provides a measure of the relative importance among the predictors (inputs of the neural network) by calculating how the model output varies in response to variation of an input. The training mechanism of the selected network is disabled during the analysis, so that the weights of the network are not affected. The basic idea of the sensitivity analysis is that the inputs to the neural network are shifted slightly and the corresponding change in the output is calculated. Each input is varied between its mean ± n times its standard deviation (n is usually between 1 and 3), while all other inputs are fixed at their respective means. The network output is then computed for a specified number of steps above and below the mean. This process is repeated for each input. The relative sensitivity of the model to each input is calculated by dividing the standard deviation of the output by the standard deviation of the input that was varied to create the output. The results provide a measure of the relative importance of each input (predictor) in the particular input–output transformation. The results of such a sensitivity analysis for the downscaling of precipitation at the Chute-des-Passes station with the TLFN are presented in Fig. 3. The network is then retrained with the few selected (most relevant) predictor variables. Several training experiments are conducted again with different combinations of time lags and numbers of neurons in the hidden layer until the optimal network, corresponding to the best validation performance, is identified. For the downscaling of precipitation with the TLFN, a time lag of 6 days and 20 neurons in the hidden layer gave the best-performing network. In the case of temperature downscaling, a TLFN with a time lag of 3 days and 12 neurons in the hidden layer performed best; this suggests that the predictand–predictor relationship is less complex in the case of temperature downscaling. A hyperbolic tangent activation function is used at both the hidden and output layers of the TLFNs, and the networks are trained using a variation of the back-propagation algorithm (Principe et al. 2000).
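The sensitivity measure described above can be sketched as follows; this is a simplified illustration rather than the software's exact implementation, and predict_fn stands in for the trained network's forward pass.

import numpy as np

def input_sensitivity(predict_fn, X, n_sigma=2.0, n_points=50):
    """For each input, vary it between mean +/- n_sigma*std while holding the
    other inputs at their means, and report std(output) / std(varied input),
    as in the sensitivity analysis of section 4b."""
    means, stds = X.mean(axis=0), X.std(axis=0)
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        grid = np.tile(means, (n_points, 1))
        grid[:, j] = np.linspace(means[j] - n_sigma * stds[j],
                                 means[j] + n_sigma * stds[j], n_points)
        outputs = np.asarray([predict_fn(row) for row in grid])
        scores[j] = outputs.std() / grid[:, j].std()
    return scores

Predictors with the largest scores are retained for retraining, which mirrors how Fig. 3 is used to shortlist the inputs.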

Fig. 3. Sensitivity of the TLFN to each of the predictor variables set as input to the network (measured in terms of the std dev of the output divided by the std dev of the input). Definition of variables is the same as in Table 1.

5. Downscaling results

From the 40 yr of observed data representing the current climate, the first 30 yr (1961–90) are considered for calibrating the downscaling models, while the remaining 10 yr of data (1991–2000) are used to validate those models. The different parameters of each model are adjusted during calibration to get the best statistical agreement between the observed and simulated meteorological variables. During the calibration of precipitation downscaling models, in addition to the mean daily precipitation and daily precipitation variability for each month, monthly average dry- and wet-spell lengths constituted the performance criteria. For the cases of Tmax and Tmin, the mean and variances of these variables corresponding to each month were considered as performance criteria.
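For reference, the monthly average wet- and dry-spell lengths used as a calibration criterion for the precipitation models can be computed along the following lines; the wet-day threshold is an assumption, and spells are attributed to the month in which they start.

import numpy as np

def mean_spell_lengths(precip, months, wet_threshold=0.1):
    """Average wet- and dry-spell length per calendar month (spells assigned
    to their starting month; the threshold in mm/day is illustrative)."""
    wet = precip > wet_threshold
    breaks = np.flatnonzero(np.diff(wet.astype(int))) + 1
    starts = np.concatenate(([0], breaks))
    ends = np.concatenate((breaks, [len(wet)]))
    wet_spells = {m: [] for m in range(1, 13)}
    dry_spells = {m: [] for m in range(1, 13)}
    for s, e in zip(starts, ends):
        (wet_spells if wet[s] else dry_spells)[months[s]].append(e - s)
    mean_or_nan = lambda v: float(np.mean(v)) if v else float("nan")
    return ({m: mean_or_nan(v) for m, v in wet_spells.items()},
            {m: mean_or_nan(v) for m, v in dry_spells.items()})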

For both SDSM and the TLFN, selecting the most relevant predictor variables is the first and an important task in the downscaling process. In the case of SDSM, the screening is achieved with linear correlation analysis and scatterplots (between the predictor and predictand variables). Observed daily data of the large-scale predictor variables (NCEP–NCAR data) are used to investigate the percentage of variance explained by each predictand–predictor pair. The influence of individual predictors varies on a month-by-month basis; therefore, the most appropriate combination of predictors has to be chosen by looking at the analysis output for all 12 months. However, only one set of selected predictors is used as input to the regression models for all of the months. An attempt to use lagged predictors with SDSM did not bring any noticeable improvement in the downscaling results, suggesting that the relationship between the lagged predictors and the predictand is likely nonlinear. For the TLFN, the predictor variables are selected using the sensitivity analysis described previously (in section 4b). Table 2 presents the predictor variables used in the SDSM and TLFN models, respectively. Even though the set of variables selected for each of the downscaling methods is not identical, some variables, such as s500 (specific humidity at 500-hPa height), p__v (near-surface meridional wind velocity component), p850 (850-hPa geopotential height), and temp (mean temperature), are identified as relevant by most of the downscaling methods.
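The SDSM-style screening step can be illustrated with a short routine that reports the fraction of predictand variance explained by each candidate predictor (the squared linear correlation). This is only a simplification of the scatterplot and correlation analysis described above; the function name is an assumption.

import numpy as np

def explained_variance(predictors, predictand):
    """Squared Pearson correlation between each predictor column and the
    predictand, i.e., the variance fraction explained by a one-predictor
    linear regression (a screening aid, not the full SDSM procedure)."""
    r = np.array([np.corrcoef(predictors[:, j], predictand)[0, 1]
                  for j in range(predictors.shape[1])])
    return r ** 2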

Table 2. Large-scale predictor variables selected for predicting meteorological variables with different downscaling methods. Definition of variables is the same as in Table 1.

a. Validation results in downscaling NCEP data

To assess the accuracy of the downscaling models, the biases associated with the TLFN and SDSM in computing daily precipitation and temperature values for the validation period (1991–2000) are presented, along with the daily means of the observed values, in Table 3. Specifically, the biases shown in Table 3 are the differences between observed and simulated mean daily values for each month of the validation period. These results show that while SDSM underestimates the wet-spell length throughout the year, the TLFN overestimates it most of the time. In general, both models performed well in downscaling the temperature data. The downscaling model validation statistics are presented in Table 4 in terms of seasonal model biases, which show the performance of the downscaling models on a seasonal basis. These validation results indicate that, except for winter, the TLFN performed better than the SDSM model in downscaling daily precipitation. More interestingly, during autumn, which is the main rainfall season in the region, the TLFN appears particularly more suitable than the SDSM model for downscaling daily precipitation (see Table 4). Similarly, for spring, when floods in the study area are commonly driven by rain falling on melting snow, the TLFN again appears more appropriate than SDSM for generating daily precipitation series. This may indicate a good potential of TLFN downscaling for hydrologic impact studies. However, both methods (SDSM and TLFN) demonstrate good and comparable performance in downscaling daily maximum and minimum temperature values.
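The seasonal biases reported in Table 4 amount to the following calculation; the sign convention (observed minus simulated) follows the description of Table 3, and the season definitions follow Table 4.

import numpy as np

SEASONS = {"winter": (12, 1, 2), "spring": (3, 4, 5),
           "summer": (6, 7, 8), "autumn": (9, 10, 11)}

def seasonal_biases(observed, simulated, months):
    """Seasonal bias = mean(observed) - mean(simulated) over each season."""
    return {name: float(observed[np.isin(months, season)].mean()
                        - simulated[np.isin(months, season)].mean())
            for name, season in SEASONS.items()}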

Table 3. Mean values of observed predictands and simulation biases associated with the two downscaling models for the validation period (1991–2000).
Table 4. Comparison of TLFN and SDSM downscaling model validation results in terms of seasonal biases calculated from downscaled and observed predictands [winter (Dec–Feb), spring (Mar–May), summer (Jun–Aug), autumn (Sep–Nov)].

b. Downscaling GCM outputs corresponding to a future climate scenario

Once the downscaling models have been calibrated and validated, the next step is to use them to downscale the future climate change scenario simulated by the GCM. In this case, instead of using the NCEP–NCAR reanalysis data as the input to each of the downscaling models as before, the large-scale predictor variables are taken from the CGCM1 simulation output covering the four distinct periods corresponding to the business-as-usual scenario explained in section 3. The monthly statistics of the actual observed values and of the current and future CGCM1 simulations downscaled with the TLFN and SDSM are summarized and plotted in Figs. 4–6. Figure 4a shows that the monthly mean values of observed precipitation are quite close to those of the TLFN-downscaled data for the current time period (1961–2000), while Fig. 4b reveals that the standard deviations of the downscaled data are slightly lower than those observed. This indicates that the CGCM1 data downscaled with the TLFN slightly underestimate the variability of the local precipitation. Figures 4a and 4b also show an increase in both the mean daily precipitation and the precipitation variability between the current and the future time periods for almost all months of the year. Figures 5a and 5b show similar results for the precipitation scenario data downscaled with SDSM. The plots in these figures show that both the monthly mean and standard deviation values of the SDSM-downscaled precipitation data for the current (1961–2000) period are comparable to those of the observed values except for 2 months (April and November). Similarly, Figs. 6a and 6b show that while the monthly means of the TLFN-downscaled temperature data for the current (1961–2000) time period are comparable to those of the observed data, they also show a consistently increasing trend in the downscaled values of both Tmax and Tmin. No significant trend is observed in the variability of the monthly Tmax and Tmin values. Table 5 summarizes the downscaling results by presenting the simulated increase or decrease in seasonal values of average precipitation and daily maximum and minimum temperatures between the current (1961–2000) and the 2080s (2070–2100) time periods for each of the downscaling methods. The results show that both SDSM and the TLFN predicted a significant increase in precipitation. However, while the TLFN predicted a wide seasonal variation in the precipitation increase (from around 16% in summer to around 54% in winter), SDSM resulted in a narrower seasonal variation, between 34% in winter and 49% in spring.

Fig. 4. Observed precipitation and precipitation downscaled with TLFN from the CGCM1 climate change scenario.

Fig. 6. Observed temperatures and temperatures downscaled with TLFN from the CGCM1 climate change scenario.

Fig. 5. Observed precipitation and precipitation downscaled with SDSM from the CGCM1 climate change scenario.

Table 5. Average increase/decrease in seasonal values of meteorological variables between the current (1961–2000) and the 2080s (2070–2100) simulation periods (seasons as defined in Table 4).

In general, while the SDSM-downscaled data resulted in an increase in annual precipitation of about 44% by the 2080s, the TLFN-downscaled data resulted in an increase of about 27.6%. At the same time, the downscaling results for daily Tmax and Tmin values corresponding to both downscaling models show a comparable and consistently increasing trend. For both Tmax and Tmin, the highest increase is predicted for the winter season, ranging between 5.5° and 7.1°C, while the lowest increase is predicted for the autumn season, ranging between 2.3° and 3.8°C. Overall, downscaling with the TLFN resulted in a slightly higher increase in temperature than with SDSM. In general, the results suggest an average increase of 4°–5°C in the mean annual temperature over the next 100 yr. This typically implies major changes in the hydrologic regime, particularly for a cold and snowy region like the Serpent River basin considered here.

6. Hydrologic impact of climate change

Changes in global climate are believed to have significant impacts on local hydrological regimes, such as changes in streamflows, which support aquatic ecosystems, navigation, hydropower, irrigation systems, etc. There may also be a significant change in the frequency and severity of floods and droughts. Such hydrologic impacts of climate change on a watershed can be estimated by developing hydrological models of the watershed and simulating streamflows resulting from the downscaled precipitation and temperature data, corresponding to the climate change scenario considered. In general, the following steps are used in this study to highlight the hydrological impact of climate change on the Serpent River:

  • A hydrologic model of the Serpent River watershed is set up and calibrated (and validated) with observed precipitation, temperature, and streamflow data representing the current climate (1961–2000);

  • Based on the downscaled precipitation and temperature data of the future climate, the calibrated hydrologic model is used to simulate the flow in the Serpent River, corresponding to the climate change scenario considered;

  • The outputs of the hydrologic model, corresponding to the different future time periods (2020s, 2050s, 2080s), are then analyzed to see if there is any indication of significant change in the mean annual discharge and seasonal variability of the Serpent River flow.

To accomplish the above objectives, an integrated hydrological modeling system known as Hydrologiska Byråns Vattenbalansmodell (HBV)-96 is used for the simulation of flow in the Serpent River. HBV-96 was developed at the Swedish Meteorological and Hydrological Institute and has been applied to a wide range of problems, such as the study of the effects of land use and climate changes and the analysis of extreme floods (Brandt 1990; Harlin and Kung 1992; Liden and Harlin 2000). The model can best be described as a semidistributed conceptual model, and it appears to be particularly suitable for hydrologic impact studies in cold and snowy regions. It has a routine for snow accumulation and snowmelt based on a degree-day relation with an altitude correction of temperature. The soil moisture–accounting routine accounts for the soil field capacity and the change in soil moisture storage due to rainfall/snowmelt and evapotranspiration, while the runoff generation routine transforms water from the soil moisture zone into runoff. Daily evapotranspiration values are calculated as a function of the daily temperature and monthly seasonal factors distributed over the year. There are more than 30 parameters corresponding to the different processes represented in the model. The most important parameters include the snowmelt factor, the threshold temperature, the soil moisture storage capacity, and the runoff recession coefficient. These parameters need to be adjusted until a satisfactory agreement is achieved between the simulated and observed runoff. The model parameters are determined through a calibration process that seeks a compromise between the traditional efficiency criterion R2 of Nash and Sutcliffe (1970) and the relative volume error between the observed and simulated flows. This process leads to results with R2 values as high as possible and practically no volume error. For a more detailed description of the hydrologic model (HBV-96), the reader is referred to Liden and Harlin (2000).
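Two of the calibration criteria mentioned above, the Nash–Sutcliffe efficiency and the relative volume error, together with a degree-day snowmelt relation of the kind used in HBV's snow routine, can be sketched as follows; the snowmelt parameter values shown are placeholders, not the calibrated Serpent River values.

import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency (the R2 criterion of Nash and Sutcliffe 1970)."""
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def relative_volume_error(observed, simulated):
    """Relative difference between total simulated and observed flow volumes."""
    return (simulated.sum() - observed.sum()) / observed.sum()

def degree_day_melt(temperature, melt_factor=3.0, threshold_temp=0.0):
    """Snowmelt rate (mm/day) from a degree-day relation; melt_factor and
    threshold_temp are illustrative placeholders, not HBV-96 calibrated values."""
    return np.maximum(temperature - threshold_temp, 0.0) * melt_factor

During calibration the two criteria pull in different directions, and the compromise described above corresponds to keeping nash_sutcliffe as high as possible while holding relative_volume_error close to zero.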

a. Hydrologic model validation results

A conceptual hydrological model of the Serpent River basin is set up with the HBV-96 modeling system. Observed precipitation, temperature, and flow data for the period between 1991 and 1998 are used for calibrating the model, while those during the period of 1999 through 2002 are used for validating the model. The calibration achieved a model efficiency (R2) of 0.85 on the calibration dataset, while it shows an efficiency (R2) of 0.82 on the validation dataset. As shown in Fig. 7, the model simulation has matched most of the observed hydrograph, except that it underestimates the winter low flows for some of the years. Moreover, the total flow volume in the validation period is reproduced very well. Therefore, this model is used to simulate the hydrological impact of climate change in the Serpent River as described in the next section.

Fig. 7. Observed and HBV-96-simulated hydrographs of the Serpent River flow.

b. Hydrologic impact analysis

To investigate the impact of climate change on the hydrology of the Serpent River, the downscaled precipitation and temperature data for the Chute-des-Passes station are used as input to HBV-96. These datasets correspond to the CGCM1 business-as-usual scenario downscaled with the two methods explained in section 5. The calibrated HBV-96 hydrological model is used to simulate the daily river flow rates corresponding to each of the downscaling scenario time periods (current, 2020s, 2050s, and 2080s). Then, the annual mean, the annual minimum, and the annual maximum flow rates in the Serpent River are calculated and averaged over the number of years in each scenario period. The increases in these average flow rates between the current and the three future time periods are summarized in Table 6. These results show that the data downscaled with the TLFN and SDSM resulted in increases in the mean annual flow rate between the current and the 2080s time periods of about 20.5% and 39.1%, respectively. The results also show similar increases in the average low flows as well as in the peak flows. Once again, these results are consistent with the downscaling results discussed in section 5, whereby precipitation downscaled with the TLFN and SDSM shows an increase in annual precipitation during the same time period. To better understand the variation of the simulated changes over the year, monthly mean flow rates are calculated for each month of each scenario period. The changes in monthly mean flow rates between the current and the 2080s time periods are then presented in Fig. 8. The figure shows that the CGCM1 data downscaled with both the TLFN and SDSM resulted in the highest increase in river flow in May and the highest decrease in June. This is consistent with the predicted increase in temperature, particularly the winter temperature, and the associated earlier snowmelt, which may shift the peak flow season by about a month. This may have important implications for water resources management in the Serpent River watershed. The TLFN-downscaled precipitation and temperature data resulted in a higher increase in the mean streamflow rate in May and a higher decrease in June than those of SDSM (see Fig. 8). This may be attributed to the relatively higher increase in the average winter temperature simulated by the TLFN than by SDSM (see Table 5). In general, the anticipated decrease in mean streamflow in June, followed by a small increase in autumn, suggests that current hydropower reservoir management practices in the study area will likely need to be revised and updated to cope with the anticipated seasonal variability of river flows.
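The changes in monthly mean flow plotted in Fig. 8 amount to the simple calculation sketched below, assuming daily flow series and their calendar months are available for the two simulation periods; the function name is illustrative.

import numpy as np

def monthly_flow_change(flow_current, months_current, flow_future, months_future):
    """Percent change in monthly mean flow between two simulation periods
    (e.g., the current period and the 2080s), as plotted in Fig. 8."""
    change = {}
    for m in range(1, 13):
        current_mean = flow_current[months_current == m].mean()
        future_mean = flow_future[months_future == m].mean()
        change[m] = 100.0 * (future_mean - current_mean) / current_mean
    return change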

Table 6. HBV-simulated changes in the Serpent River flows corresponding to the CGCM1 business-as-usual climate change scenario, downscaled with the SDSM and TLFN methods.
Fig. 8. Comparison of simulated changes in monthly mean flows between the current and the 2080s time periods.

It is beyond the scope of this work to assess the impact of such hydrological changes on the morphology and the aquatic environment of the river, but the level of hydrological change depicted here cannot occur without consequences for the environment and the organisms (including humans) that interact with the river.

7. Conclusions

This study investigates the applicability of temporal neural networks as a downscaling method for generating daily precipitation and temperature series at the Chute-des-Passes station, located in the Serpent River watershed in northeastern Canada, and compares the results with those of the most widely used multiple linear regression (SDSM) method. The downscaled temperature and precipitation data are also used to investigate the possible impact of climate change on the hydrology of the Serpent River.

The study results show that the time-lagged feed-forward network (TLFN) can be an effective method for downscaling daily precipitation and temperature data compared with the commonly used statistical method. The main advantages of this downscaling method are its ability to incorporate not only the concurrent, but also several antecedent (or lagged), predictor values as inputs, and its temporal processing capability without significant additional computational cost. The downscaling results corresponding to the business-as-usual climate change scenario show that while the TLFN model estimated an increase in average annual precipitation of about 27.6% by the 2080s, SDSM estimated an increase of about 44% over the same period. At the same time, the downscaling results for daily temperature corresponding to both models show a comparable and consistently increasing trend, with the mean annual temperature increase ranging between 4° and 5°C for the next 100 yr. The results also show seasonal variation in the changes, with the largest temperature increase in winter and the smallest in autumn.

The HBV-96 hydrologic simulation results indicated that the CGCM1 data downscaled with both the TLFN and SDSM models resulted in the highest increase in the Serpent River flow rate in May and the highest decrease in June, indicating an earlier spring snowmelt and the associated earlier peak flow. Moreover, both downscaling methods resulted in an increase in low flow rates during the winter months, consistent with the overall increase in winter temperature and its effect in reducing freezing. The results also show that the data downscaled with the TLFN and SDSM resulted in increases in the mean annual flow rate of about 20.5% and 39.1%, respectively. This is a clear indication of how the outcome of a hydrologic impact study (or any other impact study based on downscaled data) can be affected by the choice of one particular downscaling technique over another. However, one should also remember that all of the downscaling experiments in this study use the outputs from only one general circulation model (CGCM1). Previous studies have shown that data taken from different GCMs could produce significantly different hydrological impacts in a given region. Therefore, caution should be exercised in interpreting the outcome of such an impact analysis for practical applications.

Acknowledgments

This work was made possible through a grant from the Canadian Climate Change Action Fund (CCAF), Environment Canada, and a grant from the Natural Sciences and Engineering Research Council of Canada. The authors thank the Aluminum Company of Canada (Alcan) for providing the experiment data. HBV-96 has kindly been made available by the Swedish Meteorological and Hydrological Institute. The authors also appreciated very much the valuable comments and suggestions received from three anonymous reviewers.

REFERENCES

Anctil, F., C. Perrin, and V. Andréassian, 2003: ANN output updating of lumped conceptual rainfall/runoff forecasting models. J. Amer. Water Resour. Assoc., 39, 1269–1279.

ASCE Task Committee on Application of Artificial Neural Networks in Hydrology, 2000: Artificial neural networks in hydrology I: Preliminary concepts. ASCE J. Hydrol. Eng., 5, 115–123.

Brandt, M., 1990: Simulation of runoff and nitrate transport from mixed basins in Sweden. Nord. Hydrol., 21, 13–34.

Cannon, A. J., and P. H. Whitfield, 2002: Downscaling recent streamflow conditions in British Columbia, Canada using ensemble neural network models. J. Hydrol., 259, 136–151.

Carter, T. R., M. L. Parry, H. Harasawa, and S. Nishioka, 1994: IPCC technical guidelines for assessing climate change impacts and adaptations. University College and Centre for Global Environmental Research Rep. CGER-1015-94, 59 pp.

Conway, D., R. L. Wilby, and P. D. Jones, 1996: Precipitation and air flow indices over the British Isles. Climate Res., 7, 169–183.

Coulibaly, P., F. Anctil, R. Aravena, and B. Bobée, 2001a: ANN modeling of water table depth fluctuations. Water Resour. Res., 37, 885–896.

Coulibaly, P., F. Anctil, and B. Bobée, 2001b: Multivariate reservoir inflow forecasting using temporal neural networks. J. Hydrol. Eng. ASCE, 6, 367–376.

Dibike, Y. B., D. Solomatine, and M. B. Abbott, 1999: On the encapsulation of numerical-hydraulic models in artificial neural network. J. Hydraul. Res., 37, 147–161.

Gautam, D. K., and K-P. Holz, 2000: Neural network based system identification approach for the modelling of water resources and environmental systems. Artificial Intelligence Methods in Civil Engineering Applications, Proceedings of the Second Joint Workshop on Artificial Intelligence Methods in Civil Engineering Applications, O. Schleider and A. Zijderveld, Eds., 87–100.

Harlin, J., and C-S. Kung, 1992: Parameter uncertainty and simulation of design floods in Sweden. J. Hydrol., 137, 209–230.

Hengeveld, H. G., 2000: Projections for Canada's climate future: A discussion of recent simulations with the Canadian global climate model. Climate Change Digest, Vol. CCD00-01, Special Edition, Meteorological Service of Canada, Environment Canada, 32 pp.

Kistler, R., and Coauthors, 2001: The NCEP–NCAR 50-Year Reanalysis. Bull. Amer. Meteor. Soc., 82, 247–267.

Liden, R., and J. Harlin, 2000: Analysis of conceptual rainfall–runoff modelling performance in different climates. J. Hydrol., 238, 231–247.

Nash, J. E., and J. V. Sutcliffe, 1970: River flow forecasting through conceptual models—Part I: A discussion of principles. J. Hydrol., 10, 282–290.

Principe, J. C., N. R. Euliano, and W. C. Lefebvre, 2000: Neural and Adaptive Systems: Fundamentals through Simulations. John Wiley, 672 pp.

Salathe, E. P., 2003: Comparison of various precipitation downscaling methods for the simulation of streamflow in a rainshadow river basin. Int. J. Climatol., 23, 887–901.

Schoof, J. T., and S. C. Pryor, 2001: Downscaling temperature and precipitation: A comparison of regression-based methods and artificial neural networks. Int. J. Climatol., 21, 773–790.

Schubert, S., 1998: Downscaling local extreme temperature changes in south-eastern Australia from the CSIRO Mark2 GCM. Int. J. Climatol., 18, 1419–1438.

Schubert, S., and A. Henderson-Sellers, 1997: A statistical model to downscale local daily temperature extremes from synoptic-scale atmospheric circulation patterns in the Australian region. Climate Dyn., 13, 223–234.

Semenov, M. A., and E. M. Barrow, 1997: Use of a stochastic weather generator in the development of climate change scenarios. Climatic Change, 35, 397–414.

Tatli, H., H. Dalfes, and S. Mente, 2004: A statistical downscaling method for monthly total precipitation over Turkey. Int. J. Climatol., 24, 161–180.

von Storch, H., B. Hewitson, and L. Mearns, 2000: Review of empirical downscaling techniques. Regional Climate Development under Global Warming, General Tech. Rep. 4, Torbjørnrud, Norway, 29–46.

Weichert, A., and G. Burger, 1998: Linear versus nonlinear techniques in downscaling. Climate Res., 10, 83–93.

Widmann, M., and C. S. Bretherton, 2000: Validation of mesoscale precipitation in the NCEP reanalysis using a new grid-cell dataset for the northwestern United States. J. Climate, 13, 1936–1950.

Wigley, T. M. L., P. D. Jones, K. R. Briffa, and G. Smith, 1990: Obtaining subgrid scale information from coarse-resolution general circulation model output. J. Geophys. Res., 95, 1943–1953.

Wilby, R. L., C. W. Dawson, and E. M. Barrow, 2002: SDSM—A decision support tool for the assessment of regional climate change impacts. Environ. Modell. Software, 17, 147–159.

Xu, C. Y., 1999: From GCM to river flow: A review of downscaling methods and hydrologic modeling approaches. Prog. Phys. Geogr., 23, 229–249.

Footnotes

Corresponding author address: Dr. Paulin Coulibaly, Department of Civil Engineering, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4L7, Canada. Email: couliba@mcmaster.ca