Bias Correction of Climate Modeled Temperature and Precipitation Using Artificial Neural Networks

Sanaz Moghim Department of Civil Engineering, Sharif University of Technology, Tehran, Iran

and
Rafael L. Bras School of Civil and Environmental Engineering, Georgia Institute of Technology, Atlanta, Georgia


Abstract

Climate studies and effective environmental management require unbiased climate datasets. This study develops a new bias correction approach using a three-layer feedforward neural network to reduce the biases of climate variables (temperature and precipitation) over northern South America. Air and skin temperature, specific humidity, and net longwave and shortwave radiation are used as inputs to the network for bias correction of 6-hourly temperature. Inputs to the network for bias correction of monthly precipitation are precipitation at lag 0, 1, 2, and 3 months, and also the standard deviation of precipitation from 3 × 3 neighbors around the pixel of interest. The climate model data are provided by the Community Climate System Model, version 3 (CCSM3). Results show that the trained artificial neural network (ANN) can improve the estimation error and correlation of the variables for both calibration and validation periods even when there is a low temporal consistency between the time series of the model data and targets. The developed model is also able to modify the probabilistic structure of the variables although the quantile-based information is not directly considered in the network. The ANN model outperforms linear regression, which is used for comparison purposes. The new method can be used to produce bias-corrected climate variables that can be used as forcing to hydrological and ecological models.

© 2017 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Sanaz Moghim, moghim@sharif.edu


1. Introduction

General circulation models (GCMs) simulate the physical processes controlling the mass and energy exchanges among the atmosphere, land, and ocean. They are key tools for studying short- and long-term effects of climate change on ecosystems. The highly nonlinear and complex interactions within the land–ocean–atmosphere system are simplified in GCMs, which involve many assumptions, approximations, and parameterizations. These simplifications can produce biases in model outputs. Although the models have improved over time, their biased outputs are still a matter of concern in climate studies (Dai 2001a,b, 2006; IPCC 2007). Chang et al. (2007) illustrated a pattern in the Community Climate System Model, version 3 (CCSM3), with cold and warm biases in the northern and southeastern tropics, respectively. Bonan and Levis (2006) used the Dynamic Global Vegetation Model (DGVM) to evaluate the biases in CCSM3. They reported a severe dry bias in the CCSM3 precipitation over Amazonia, which produced biases in the simulation of vegetation. Thus, bias removal is an essential step before using outputs of climate models with land surface hydrologic models for climate studies.

Biases of climate variables are commonly reduced using dynamical or statistical approaches. Dynamical bias correction methods can correct intrinsic biases within the dynamics of a model. In these approaches, the models can be embedded in a data assimilation algorithm to reduce the model forecast biases (Alexander et al. 1998, 1999). For example, remotely sensed data assimilation techniques using visible (VIS), infrared (IR), and microwave images can be used to detect displacement errors of precipitation and storms, one of the components of forecast errors (Hoffman et al. 1995; Nehrkorn et al. 2003). Alexander et al. (1998) improved marine cyclone forecasts using digital warping of microwave integrated water vapor (IWV). They used the nonhydrostatic Fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model, version 1 (MM5v1), to simulate and forecast cyclones over the North Atlantic Ocean. The results indicated spatial shifts in precipitation and in the moving cyclone, which were improved by image warping of IWV. Levy et al. (2014a,b) used the warping (medical image registration) technique to correct biases of precipitation for 21 GCMs. The results showed 44% and 55% improvements of RMSE in the intensity and integrated precipitation, respectively. Dynamical bias correction methods are capable of producing physically consistent climate variables, but the parameterizations and other approximations used in these methods can add further uncertainties and errors to the model outputs. The difficulties of the dynamical approaches, in particular their computational costs, generally hamper their applicability to climate-scale bias correction problems. Statistical approaches are an alternative for large-scale bias correction because of their effectiveness and computational tractability.

The main idea of statistical bias correction methods is to develop a statistical relationship between modeled and observed variables over the same historical period and then use the constructed relationship for the model projections. Diagnostic methods commonly use the statistics of the observations (mean, variance, and distribution) to detect and remove biases from the model predictions. One of the most well-known statistical approaches is the delta change method (Ines and Hansen 2006). This method adjusts only the mean of the model variables by shifting (for temperature) or rescaling (for precipitation) the mean of the modeled data based on the mean of the observations during a historically selected baseline period (Sheffield et al. 2006; Horton et al. 2011). Another commonly used statistical method is the quantile-based mapping method. This method can improve not only the mean but also the entire distribution of the variables by mapping the cumulative distribution function (CDF) of the model outputs onto the distribution of the observations (Panofsky and Brier 1968; Cayan et al. 2008; Hayhoe et al. 2004; Maurer and Hidalgo 2008). This mapping remains valid for another period only if the distributions do not change significantly, an assumption that may not hold. An extended version of the CDF method is the equidistant CDF (EDCDF), which can partially account for the distribution of the model projections in the CDF matching process. The basic idea of the EDCDF method is to find an incremental adjustment based on the difference between the CDF of the observations and model outputs of historical simulations at each percentile (Li et al. 2010). Moghim et al. (2017) used the EDCDF method to develop bias-corrected datasets of climate model outputs over Amazonia. The datasets can be used for driving terrestrial biosphere and hydrological models to study climate change and impact assessment (Zhang et al. 2015).
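The quantile-mapping idea described above can be sketched in a few lines of Python. This is not code from the paper: the function name is illustrative and the data are synthetic. Each model value is replaced by the observed value at the same empirical quantile.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: find each future model value's quantile
    in the historical model CDF, then return the observed value at that
    same quantile."""
    # Rank of each future value within the sorted historical model data
    quantiles = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    # Observed value at the same quantile
    return np.quantile(obs_hist, quantiles)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 50.0, 600)     # synthetic "observed" monthly precip (mm)
model = 0.5 * obs + 10.0            # model with multiplicative and additive bias
corrected = quantile_map(model, obs, model)
```

After correction, the distribution (and hence the mean) of `corrected` closely matches the observations, while the raw model remains strongly biased.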

Chen et al. (2013) compared the sensitivity of different bias correction methods, including mean-based (delta change) and distribution-based (quantile mapping) approaches, on hydrological simulations. They used precipitation from four regional climate models (RCMs) provided by NCEP over 10 river basins in North America to drive a lumped conceptual rainfall–runoff model, Service Hydrométéorologique Apports Modules Intermédiaires (HSAMI), developed by Hydro-Québec. To evaluate the performance of the bias correction methods, they compared the streamflow simulated using raw and bias-corrected precipitation. They concluded that the distribution-based bias correction method outperformed the mean-based approach. The quantile mapping approach was not able to correct the biases of precipitation over five of the watersheds because temporal consistency between the time series of the modeled and observed precipitation was low over those watersheds. The probability matching methods (e.g., CDF, EDCDF) adjust different quantiles of the modeled distribution according to the corresponding quantiles of the observed distribution, while the modeled value at a certain quantile may not coincide with the observed value at that quantile. Thus, effective implementation of the probability matching methods requires high correlations between the modeled and the observed climate variables.

Another statistical method uses regression models to establish a linear or nonlinear relationship (transfer function) between predictors and a predictand, an approach mostly used in downscaling methods (Crane and Hewitson 1998; von Storch 1999; Burger 1996). A nonlinear function outperforms a linear one when the system is complex and the relationship between variables is complicated. A sophisticated class of nonlinear regression models is artificial neural networks (ANNs), known as universal approximators, which are capable of approximating almost any continuous input–output mapping (Hornik 1989, 1991). Li and Zheng (2003) showed that the practical use of the ANN model is better than other traditional modeling methods because of ANNs’ processing speed and capability of handling complex nonlinear systems. The robust, fault-tolerant, and parallel structure of the ANN makes it a powerful and resilient tool for studies of global climate change and ecology (Liu et al. 2010).

Maier et al. (2010) reviewed 210 journal papers from 1999 to 2007 that developed ANN models for the purpose of flow prediction (quantity and quality) in river systems. Most ANNs used a feedforward network and the gradient-based method for the architecture and training algorithm of the model, respectively. Hsu et al. (1999) estimated rainfall from remotely sensed data using an ANN model over the islands of Japan. The inputs to the model were VIS and IR brightness temperatures at each individual pixel and the mean and standard deviation of the 3 × 3 and 5 × 5 VIS and IR brightness temperatures of the surrounding pixels. They used a modified counterpropagation neural network (MCPN) as the architecture of the model. The counterpropagation neural network (CPN) consists of two parts: the first categorizes the inputs into clusters in the hidden layer, and the second establishes a function between the clusters and the outputs (Hecht-Nielsen 1990). The MCPN incorporates all inputs in the clusters to estimate different outputs. Hsu et al. (1999) concluded that the MCPN can provide a reasonable spatial and temporal pattern of the rainfall over a small region, where the model was trained. Hsu et al. (1996, 1999) used the MCPN model to develop the Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks (PERSIANN) algorithm to estimate precipitation. Inputs to the model are satellite infrared and microwave images and radar, ground-based, and rain gauge data.

ANNs can be used to calibrate and improve forecast systems of temperature (Marzban 2003), winds (Kretzschmar et al. 2004), and snow (Roebber et al. 2003). Yuan et al. (2007) used a feedforward neural network to calibrate a probabilistic quantitative precipitation forecast (PQPF) of the NCEP Regional Spectral Model (RSM) ensemble forecast system over the southwestern United States. The inputs to the network were 11 precipitation values from ensemble members and 7 precipitation probabilities calculated at the specific threshold. Although their method improved error, conditional bias, and forecast skill, it predicted more low nonzero probabilities.

Biased model outputs compromise climate analyses and impact assessments. Thus, the biases of the outputs need to be corrected. This study develops a data-driven approach using ANNs that produces bias-corrected estimates of climate variables (air temperature and precipitation) from CCSM3. The study domain is described in section 2. Section 3 describes the datasets used in this paper. The methodology is explained in section 4, followed by the experimental setup of the ANN and calibration of the ANN model in sections 5 and 6, respectively. Results are presented in section 7, and discussion and conclusions are given in section 8.

2. Study domain

Our study domain is in northern South America, extending from 23°S to 12°N latitude and from 80° to 35°W longitude (Fig. 1). The region exhibits different land uses such as forest, pasture, agriculture, and bare land. The Amazon basin, with the largest tropical rain forest in the world (covering about 40% of South America), and the Andes Mountains along the west coast are the two main landforms in the study domain. The region has been experiencing climate change, including a lengthening of the dry season and a decrease in rainfall (Christensen et al. 2007; Malhi et al. 2009; Costa and Pires 2010). Deforestation, drought, and fire have increased over the domain during the past decades (Butler 2006; see online at https://www.nasa.gov/vision/earth/environment/amazon_deforest.html and http://www.obt.inpe.br/deter/). Past studies using satellite data over Amazonia indicate that the intensity and frequency of droughts in this region are increasing in a way that can change the structure of the entire Amazonian ecosystem (e.g., Bush et al. 2008; Aragao et al. 2008). Changes in land use, vegetation, and climate in the region, which holds the world’s largest tropical forest and its largest reservoirs of freshwater and carbon, have an important impact on the hydrological cycle and on ecosystems worldwide (Cochrane and Barber 2009). As a result, this domain is of primary interest to climate and ecosystem scientists.

Fig. 1.

The study domain extending from 23°S to 12°N (latitude) and from 80° to 35°W (longitude). The red circle illustrates the geographical location P.

Citation: Journal of Hydrometeorology 18, 7; 10.1175/JHM-D-16-0247.1

3. Datasets

This paper develops a new approach to reduce biases of climate model variables (temperature and precipitation). To illustrate the performance of the proposed method, we use the model outputs from CCSM3, one of the widely used GCMs in climate studies (Meehl et al. 2006; Shin et al. 2003). CCSM3 was developed and maintained by the National Center for Atmospheric Research (NCAR; publicly available at www.cgd.ucar.edu/ccr/strandwg/ccsm_6hr_data.html). The model includes dynamic vegetation growth, death, and succession (Bonan and Levis 2006) and can simulate the effect of land-use change on climate (Collins et al. 2006). The 6-hourly and monthly air temperature T, skin temperature (TS), specific humidity Q, longwave radiation (LW), shortwave radiation (SW), net radiation Rn, surface pressure (PS), zonal u and meridional υ components of horizontal wind, and precipitation P with spatial resolution of about 1.4° (T85) are used in this study. The observations used in this paper are the meteorological forcing dataset (MFD) and the Climatic Research Unit (CRU) dataset. The MFD is a bias-corrected NCEP–NCAR reanalysis product (Sheffield et al. 2006). Sheffield et al. (2006) downscaled the 2° NCEP–NCAR reanalysis product to 1° resolution and then combined it with observation-based datasets such as those from the Global Precipitation Climatology Project (GPCP) and CRU to remove the potential biases in the NCEP data. They created a 1° bias-corrected MFD product for the purpose of improving the results of land surface models (publicly available at hydrology.princeton.edu/data.php). This product can be used as a ground truth benchmark in the absence of long-term observations. Since we want to correct the biases of temperature at the 6-hourly temporal scale, and it is almost impossible to find fine-resolution observations for all grids of the study domain, MFD temperature is used as a reference in this paper.
The gridded CRU dataset is recognized as one of the most reliable records of climate observations. The dataset is produced by the Climatic Research Unit in the School of Environmental Sciences and the Tyndall Centre at the University of East Anglia, United Kingdom (Harris et al. 2014). The data are reported monthly at a spatial resolution of 0.5° for all landmasses, excluding Antarctica, for the time period from 1901 to 2013 (publicly available at http://www.cru.uea.ac.uk/cru/data/hrg/). Since we want to correct the biases of precipitation at the monthly temporal scale, CRU precipitation is used as the true observation. MFD and CRU data are regridded to the CCSM3 grid (1.4° × 1.4°) using bilinear interpolation.
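The bilinear regridding step can be sketched as follows. This is a hypothetical example with synthetic grids and a synthetic field, not the actual MFD/CRU processing; `scipy.interpolate.RegularGridInterpolator` with `method="linear"` performs bilinear interpolation on a regular latitude–longitude grid.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical fine grid (0.5-degree, CRU-like) over the study domain
lat_fine = np.arange(-23.0, 12.1, 0.5)
lon_fine = np.arange(-80.0, -34.9, 0.5)
# Synthetic smooth field standing in for precipitation or temperature
field_fine = np.add.outer(np.sin(np.deg2rad(lat_fine)), np.cos(np.deg2rad(lon_fine)))

interp = RegularGridInterpolator((lat_fine, lon_fine), field_fine, method="linear")

# Hypothetical coarse target grid (1.4-degree, CCSM3-like), inside the fine grid
lat_coarse = np.arange(-22.5, 11.5, 1.4)
lon_coarse = np.arange(-79.5, -35.5, 1.4)
pts = np.array([[la, lo] for la in lat_coarse for lo in lon_coarse])
field_coarse = interp(pts).reshape(len(lat_coarse), len(lon_coarse))
```

Because bilinear interpolation is a convex combination of the four surrounding grid values, the regridded field stays within the range of the original field.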

4. Methodology

The basic idea for bias correction is to find a sufficiently flexible and adaptive approach that is able to learn from available information to develop a predictive function, which performs well for the projection period. We use an ANN approach to learn the bias structure from the historical model outputs and corresponding observations, then the trained network can be used to reproduce bias-corrected predictions.

An ANN imitates abilities of the human brain, including storing information, learning, and training, to produce a correct response to new or unseen situations (generalization ability). The generalization ability of an ANN increases when it is trained with varied types of information and when the network is sized so that it does not overfit the stored information and experiences. The structure of the ANN is inspired by biological neural systems and is composed of artificial neurons (nodes) in multiple layers: the input layer consists of the input nodes (units) connected to the input variables; one or more hidden layers follow; and the output layer consists of the output nodes that deliver the output variables (see Fig. 2). Nodes in the layers are connected by weights (synapses) in different configurations (feedforward and recurrent neural networks). The nodes of one layer in a feedforward network are connected in only one forward direction to the nodes of the next layers, while the nodes of different layers in a recurrent network are connected in a directed cycle. The weights determine the effective magnitude of information passed between nodes. Weights are the main parameters of ANNs and can be determined by a training function. Training, as in biological systems, is the process of learning through experiences, examples, and adaptation (Schalkoff 1990). The neural system can establish an implicit computational function and structure through the training process.

Fig. 2.

The proposed three-layer feedforward network for bias correction of temperature and precipitation with the (a) hyperbolic tangent and (b) linear transfer functions for the hidden and output layers, respectively. The variables x are the inputs of the network and O is the output; w_ki and w_jk denote the weights connecting the input layer i to the hidden layer k and the hidden layer k to the output layer j, respectively.


Information progresses through the hidden layers of the network. Hidden layers consisting of the hidden nodes (units) provide the parallel and nonlinear structure of the ANN. Activation functions connect input, hidden, and output layers in the network and transfer the summation of the weighted inputs to the nodes in the hidden layers. The details of the process are as follows. The inputs are multiplied by synaptic weights and then fed to the layers through a transfer function (activation function). The transfer function f converts the summation of the synaptic weights vector and inputs vector to a corresponding output vector y as
y = f( Σ_i w_i x_i )    (1)

The sigmoid-type functions (log-sigmoid and tan-sigmoid functions) are the most common activation functions (also called transfer or threshold functions) used to transfer the summation of the weighted inputs into the next hidden layer (Funahashi 1989; White 1990; Hecht-Nielsen 1990; Blum and Li 1991; Takahashi 1993; Hsu et al. 1995). The same process occurs in the other layers until the results of the last hidden layer reach the output layer. ANNs have been used extensively over the years to solve problems in many domains. Further details for the unfamiliar reader can be found in comprehensive texts and surveys (e.g., Rumelhart 1995; Schalkoff 1997; Hecht-Nielsen 1990; Hassoun 1995; ASCE Task Committee 2000).
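The forward pass described above, a tan-sigmoid (tanh) hidden layer followed by a linear output layer as in Fig. 2, can be sketched as follows. This is illustrative only: the layer sizes are arbitrary and the weights are random rather than trained.

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One forward pass of a three-layer feedforward network."""
    h = np.tanh(W1 @ x + b1)   # hidden layer: tan-sigmoid of summed weighted inputs, Eq. (1)
    return W2 @ h + b2         # output layer: linear transfer function

rng = np.random.default_rng(1)
n_in, n_hidden = 5, 7          # e.g., 5 predictors, as for the temperature network
W1 = 0.1 * rng.standard_normal((n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((1, n_hidden))
b2 = np.zeros(1)
out = forward(rng.standard_normal(n_in), W1, b1, W2, b2)
```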

A three-layer feedforward network (sometimes called a two-layer network) including one input, one hidden, and one output layer (see Fig. 2) is a widely used structure for many applications (e.g., Rumelhart 1995; Hsu et al. 1995; Maier and Dandy 1998; Joorabchi et al. 2007; May and Sivakumar 2009). Although ANNs do not represent the physical processes involved, they are able to identify complex and nonlinear relationships between predictors and response variables (nonlinear input–output mapping). An ANN is characterized by a stimuli–response (SR) process to learn the right response for each input through training (Schalkoff 1997). The overall characteristics of the network can be represented as
O = g(S, W, a_c)    (2)
where O is the response (output); S is the stimulus (input); W denotes the interconnection weights between the nodes; and a_c is a combination of unknown characteristics of the network, including the structure of the network, number of hidden layers and nodes, transfer function, training function, and learning rate. In this work, the response is the bias-corrected climate variables (temperature and precipitation) and the stimuli include a set of proper meteorological variables, which needs to be determined.

5. Experimental setup of the ANN

The architecture of ANNs is important to their performance. A smaller network with few neurons and hidden layers severely restricts the network’s learning ability, while a larger network with too many neurons and hidden layers typically leads to overfitting and poor generalization of the network. To deliver a valid output, a training algorithm including transfer and training functions processes the inputs in the network and adjusts the weights continually. Although the mapping power of the ANN is not limited to a specific transfer function (Hornik 1991), some transfer functions can be more appropriate for certain types of applications. A trial-and-error procedure determined that a feedforward neural network with one hidden layer, using the hyperbolic tangent (tan-sigmoid) transfer function for the hidden layer and a linear transfer function for the output layer, is a good choice for constructing a model for bias correction of temperature and precipitation (see Fig. 2). This network is the most common network in many water resources applications and forecasting of climate variables (Khotanzad et al. 1997; Maier and Dandy 2000; Rumelhart 1995; Durbin and Rumelhart 1989). The detailed description of the process to determine the proper number of hidden nodes for temperature and precipitation networks is provided in section 6.

A transformation of the effective weighted information between the nodes in the input, hidden, and output layers is essential to achieve a desired input–output relationship. To find the best set of weights (the main parameters of the network), the network is trained against the target (observation) by minimizing the quadratic error function E (the objective function) as
E = (1/2) Σ_j ( O_j − Tar_j )²    (3)
where O is the output from the output layer, Tar is the corresponding target, and x is the input vector. This work uses the backpropagation generalized delta rule (BPGDR) algorithm as a supervised learning (training) method to adjust the weights of the network in response to the set of input and target values. The BPGDR algorithm is the most common algorithm for training ANNs (Maier and Dandy 2000). The training phase is a teaching process that iteratively corrects or adjusts the parameters of the system based on past steps to achieve the desired performance in the next iteration. Furthermore, after training, the internal structure of the network is able to self-organize to react properly to unseen inputs. This feature is called generalization capability. The BPGDR algorithm, a gradient descent optimization technique, for the network with three layers can be expressed as
Δw_jk = −η ∂E/∂w_jk,   Δw_ki = −η ∂E/∂w_ki    (4)
where ∂/∂w is the gradient (differential) operator and i, k, and j refer to the nodes in the input, hidden, and output layers, respectively. The learning rate η is used to control the rate of learning (the change of the error relative to the weight change) in each iteration. This parameter determines the step size in the steepest descent optimization algorithm for obtaining the optimal weights of the network. For more comprehensive descriptions of the BPGDR method, the reader is referred to Rumelhart et al. (1986), Rocha et al. (2005), and Moghim (2015).
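A minimal batch gradient-descent (generalized delta rule) training loop for such a network is sketched below. This is not the authors' implementation: the network size, the learning rate used in the demonstration call, and the toy linear target are illustrative assumptions.

```python
import numpy as np

def train_bpgdr(X, targets, n_hidden=5, eta=0.01, epochs=2000, seed=0):
    """Gradient-descent (delta rule) training of a three-layer network:
    tanh hidden layer, linear output, minimizing E = 0.5 * sum((O - Tar)^2)."""
    rng = np.random.default_rng(seed)
    n, n_in = X.shape
    W1 = 0.5 * rng.standard_normal((n_hidden, n_in))
    b1 = np.zeros(n_hidden)
    W2 = 0.5 * rng.standard_normal(n_hidden)
    b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1.T + b1)                    # hidden activations
        O = H @ W2 + b2                               # linear output
        err = O - targets                             # dE/dO
        gW2 = H.T @ err / n                           # output-layer gradients
        gb2 = err.mean()
        delta_h = np.outer(err, W2) * (1.0 - H**2)    # backprop through tanh
        gW1 = delta_h.T @ X / n                       # hidden-layer gradients
        gb1 = delta_h.mean(axis=0)
        W2 -= eta * gW2; b2 -= eta * gb2              # delta-rule weight updates
        W1 -= eta * gW1; b1 -= eta * gb1
    return W1, b1, W2, b2

# Toy demonstration: learn a simple additive/multiplicative bias
rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, (200, 1))
y = 0.8 * X[:, 0] + 2.0
W1, b1, W2, b2 = train_bpgdr(X, y, eta=0.1, epochs=3000)
pred = np.tanh(X @ W1.T + b1) @ W2 + b2
mse = float(np.mean((pred - y) ** 2))
```

After training, the network reproduces the toy target closely, illustrating how the weight updates drive the quadratic error of Eq. (3) toward a minimum.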

Appropriate selection of the predictors (inputs) related to the response variables (outputs) is very important for an effective ANN. Highly correlated and unnecessary inputs influence the performance of the network and generally degrade the generalization capability of the model. Unnecessary inputs also increase the model’s complexity and uncertainty, the computational time, and memory usage (Maier et al. 2010; Lachtermacher and Fuller 1994; Taormina et al. 2012). The choice of a suitable set of inputs and outputs is aided by a priori knowledge of existing relationships between them.

The primary goal of the temperature network is to reduce the biases of the 6-hourly surface air temperature. Thus, a bias-corrected time series of T is set as the output of the ANN model (O in Fig. 2). The network is trained using the target temperature (observations), here the MFD. The input vectors include the raw temperature and physically relevant climate variables (predictors) that could affect the bias of temperature. A stepwise approach is used to determine the best set of inputs. In this approach, the network starts with the minimum number of inputs (one). In each step, one variable is added to the input vector and the network is retrained. The process of adding variables continues until the performance of the network [error function, Eq. (3)] stops improving (Master 1993; Maier and Dandy 1998). In the first step, we started with only one input (T, TS, Q, LWn, SWn, Rn, P, or PS; where n indicates net) and trained the network separately for each. The best performance among the one-variable inputs is obtained with T as the input. In the next steps, the other climate variables are added to the input vector one at a time and the same process is repeated. For instance, in step 2 (two-variable inputs), the best performance of the network is obtained using T and TS as inputs. The variable Q is the next one that improves the performance of the network in the third step (three-variable inputs). Finally, the best performance of the network for the bias correction of temperature is obtained by using T, TS, Q, LWn, and SWn as inputs (x in Fig. 2).
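The stepwise (greedy forward) selection procedure can be sketched generically. The evaluator below is a toy stand-in for retraining the network and measuring the error of Eq. (3), and the variable names and error values are only illustrative.

```python
import numpy as np

def stepwise_select(candidates, evaluate):
    """Greedy forward selection: at each step add the candidate input that
    most reduces the error; stop when no addition improves it."""
    selected, best_err = [], np.inf
    remaining = list(candidates)
    while remaining:
        trials = {name: evaluate(selected + [name]) for name in remaining}
        name, err = min(trials.items(), key=lambda kv: kv[1])
        if err >= best_err:          # no further improvement: stop
            break
        selected.append(name)
        best_err = err
        remaining.remove(name)
    return selected, best_err

# Toy evaluator: error drops only when informative inputs are included
def toy_eval(inputs):
    informative = {"T": 4.0, "TS": 2.0, "Q": 1.0}
    return 10.0 - sum(informative.get(v, 0.0) for v in inputs)

chosen, err = stepwise_select(["T", "TS", "Q", "PS", "Rn"], toy_eval)
```

With this toy evaluator the procedure picks T, then TS, then Q, and stops once the remaining candidates no longer reduce the error, mirroring the order of selection described above.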

The objective of the precipitation network is to provide bias-corrected monthly precipitation, so a bias-corrected time series of P is selected as the output of the network (O in Fig. 2). We used the CRU precipitation as the target to train the network. The stepwise approach indicates that adding variables (e.g., T, TS, Q, LWn, SWn, Rn, PS, u, and υ) to the raw precipitation (original model precipitation) does not improve the performance of the precipitation network. Thus, the best performance of the network is achieved when precipitation is selected as a predictor for itself; in other words, adding other variables to precipitation as inputs does not reduce the error function [see Eq. (3)]. This conclusion is consistent with the results obtained by Hidalgo et al. (2008). Since the best predictor of precipitation is itself, the effect of adding time-lagged precipitation to the input is examined. The performance of the network improves when 0-, 1-, 2-, and 3-month lags of precipitation are used as inputs. Since the variability of precipitation can be large over space, we also checked the performance of the network when information from surrounding pixels is added to the existing inputs. To that end, the mean and standard deviation of precipitation from the surrounding n × n pixels (n = 3, 5, or 7) are added to the input set as
P̄ = (1/N) Σ_{i=1}^{N} P_i    (5)
and
σ_P = [ (1/N) Σ_{i=1}^{N} (P_i − P̄)² ]^{1/2}    (6)
where N is the total number of surrounding pixels. The network showed the best improvement when the standard deviation of precipitation from the 3 × 3 neighbors around the pixel of interest is added to the inputs. Using the variability of precipitation from neighboring pixels can adjust for a possible displacement error of precipitation. As a result, precipitation at lags of 0, 1, 2, and 3 months together with the 3 × 3 neighborhood standard deviation σ_P are selected as the proper set of inputs to the network for bias correction of precipitation (x in Fig. 2).
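Eqs. (5) and (6) amount to computing the mean and (population) standard deviation over an n × n window. A short sketch with a synthetic precipitation field follows; the grid size and pixel indices are illustrative only.

```python
import numpy as np

def neighborhood_stats(precip_grid, i, j, n=3):
    """Mean and standard deviation of precipitation over the n x n block
    of pixels centred on (i, j), per Eqs. (5) and (6)."""
    r = n // 2
    window = precip_grid[i - r:i + r + 1, j - r:j + r + 1]
    # NumPy's default std uses 1/N inside the square root, matching Eq. (6)
    return float(window.mean()), float(window.std())

rng = np.random.default_rng(3)
grid = rng.gamma(2.0, 50.0, (10, 10))   # synthetic monthly precipitation field (mm)
mean_33, std_33 = neighborhood_stats(grid, 5, 5, n=3)
```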

6. Calibration of the ANN

Once an appropriate set of inputs, structure, training algorithm, and number of hidden layers for the temperature and precipitation networks are set, the number of hidden nodes (hn) and the learning rate need to be determined. A trial-and-error procedure is followed to find the unknown parameters. The detailed description of the process is explained for an arbitrary geographical location “P” (latitude 14.71°S, longitude 45°W) in the study domain (see red circle in Fig. 1) for temperature and precipitation. Note that the calibration results for location P are consistent with those at the many other locations that were tested.

Previous experience with bias correction of climate model outputs has been largely focused on monthly temporal scales (e.g., Wood et al. 2004; Li et al. 2010; Zhang and Georgakakos 2012; Chen et al. 2013; Teutschbein and Seibert 2012; Berg et al. 2003). We intend to develop an effective methodology to reduce biases in the modeled temperature at a much finer temporal resolution (6 h). To train and test the network, the 6-hourly historical CCSM3 data (ANN input) and MFD temperature (ANN target) from 1970 to 2008 are divided into two separate periods: the years 1970–88 are used as a training dataset to adjust the ANN weights (calibration), and the years 1989–2008 are used as a testing dataset to study the performance of the trained network (validation). Bias correction of precipitation at a fine scale (e.g., 6 h) is not feasible because the many zero values (dry hours) make it difficult for the ANN to construct a proper relationship between the input and output. The seasonal historical CCSM3 monthly precipitation (ANN input) and CRU precipitation (ANN target) from 1901 to 2013 are divided into two independent periods: 1901–56 as a calibration period (training dataset) and 1957–2013 as a validation period. Note that the validation data are only used for comparison purposes to check the capability of the network to perform well on a new set of inputs (generalization ability).

The effect of the number of hidden nodes and the learning rate on the performance of the proposed ANN is presented for location P in March and the season March–May (MAM) for temperature and precipitation, respectively (results were similar for all other months and seasons). As a first guess for the learning rate, η = 0.01 is selected, and the constructive algorithm is used to find the optimal number of hidden nodes (Kwok and Yeung 1997). It is worth noting that a similar learning rate is often reported in ANN applications in water resources studies (e.g., Tamura and Tateishi 1997; Kuligowski and Barros 1998a,b). In the first step, five hidden nodes are used and, given the prescribed characteristics of the network (i.e., ac), the network is trained for the calibration period to find a set of weights that produces a desired response relative to the target. In subsequent steps, hidden nodes are added and the network is retrained. The number of hidden nodes was varied from 5 to 35 for temperature and from 5 to 11 for precipitation. To evaluate the performance of the network with different numbers of hidden nodes, the mean-square error (MSE) and correlation ρ between the estimated ANN outputs and corresponding observations are calculated for both calibration and validation periods (see Figs. 3, 4).
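The constructive search can be sketched as follows: grow the hidden layer one size at a time, retrain, and keep the size with the best calibration statistic. The one-hidden-layer tanh network, batch gradient descent, synthetic data, learning rate, and epoch count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_mlp(X, y, hn, eta=0.05, epochs=2000):
    """Train a one-hidden-layer network (tanh hidden, linear output)
    by batch gradient descent; returns a prediction function."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hn)); b1 = np.zeros(hn)
    W2 = rng.normal(0.0, 0.5, hn);      b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden activations, (n, hn)
        err = H @ W2 + b2 - y               # output error, (n,)
        gW2 = H.T @ err / n                 # gradients of mean-square loss
        gb2 = err.mean()
        dH = np.outer(err, W2) * (1.0 - H ** 2)
        gW1 = X.T @ dH / n
        gb1 = dH.mean(axis=0)
        W2 -= eta * gW2; b2 -= eta * gb2
        W1 -= eta * gW1; b1 -= eta * gb1
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

# Synthetic stand-in for the calibration inputs and target.
X = rng.uniform(-1.0, 1.0, (200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# Constructive search over the hidden-layer size (5-11, as for precipitation):
best_hn, best_mse = None, np.inf
for hn in range(5, 12):
    predict = train_mlp(X, y, hn)
    mse = float(np.mean((predict(X) - y) ** 2))
    if mse < best_mse:
        best_hn, best_mse = hn, mse
```

A real application would score each candidate on both MSE and ρ for calibration and validation, as in Figs. 3 and 4, rather than calibration MSE alone.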

Fig. 3.

Performance of the ANN for different numbers of hidden nodes. (a) MSE and (b) correlation of the ANN outputs with the targets. The blue bars show the statistics for the calibration (March 1970–88) and red bars denote the results for the validation (March 1989–2008) at location P. The blue solid and red dashed lines denote the statistics between the input (original temperature) and target values for the calibration (Cal) and validation (Val), respectively.

Citation: Journal of Hydrometeorology 18, 7; 10.1175/JHM-D-16-0247.1

Fig. 4.

Performance of the ANN for different numbers of hidden nodes. (a) MSE and (b) correlation of the ANN outputs with the targets. The blue bars show the statistics for the calibration (MAM 1901–56) and the red bars denote the results for the validation (MAM 1957–2013) at location P. The blue solid and red dashed lines denote the statistics between the input (original precipitation) and target values for the calibration (Cal) and validation (Val), respectively.


The horizontal lines in Figs. 3 and 4 correspond to the MSE and ρ between the original input CCSM3 variables (temperature and precipitation) and the target values (blue solid lines for the calibration and red dashed lines for the validation). The bars below the lines in Figs. 3a and 4a and above the lines in Figs. 3b and 4b show the ranges of hidden nodes that improve the results in the MSE and ρ sense, respectively. The results indicate that ANNs with 5 and 10 hidden nodes perform best in terms of the MSE and ρ for the temperature network. For the precipitation network, all numbers of hidden nodes are able to reduce the original MSE and increase the correlation, but the network with eight hidden nodes yields the smallest MSE and the highest correlation in both calibration and validation. Close scrutiny of Fig. 4 reveals that the performance of the network in the calibration and validation periods is similar when fewer hidden nodes are used. In addition, a larger number of hidden nodes increases the complexity of the model, computational time, and memory usage in the process of weight optimization. Thus, a better network is one that performs well with fewer hidden nodes. The results indicate that eight is the minimum number of hidden nodes the precipitation network needs to attain the smallest MSE and the highest correlation for both calibration and validation periods.

One concern is that ANN outputs may be overly smooth, with less variability than the underlying process. To assure that the ANN outputs preserve the variability of the variable time series and the results are not overly smooth, we use the signal-to-noise ratio (SNR) metric

SNR = σ_o / σ_t, (7)

where σ_o and σ_t are the standard deviations of the ANN outputs and targets, respectively. A larger number of hidden nodes increases the standard deviation of the network outputs. The SNRs for different numbers of hidden nodes are shown in Figs. 5 and 6 for temperature and precipitation, respectively.
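The SNR check is a one-liner in practice. The sketch below computes Eq. (7) as reconstructed here (ratio of standard deviations); the sample series are hypothetical.

```python
import numpy as np

def snr(outputs, targets):
    # SNR as the ratio of standard deviations: values near 1 mean the
    # bias-corrected series retains the target's variability.
    return float(np.std(outputs) / np.std(targets))

target = np.array([10.0, 14.0, 9.0, 15.0, 11.0, 13.0])
smooth = np.full_like(target, target.mean())  # an overly smooth estimate

snr(target, target)  # 1.0: variability fully preserved
snr(smooth, target)  # 0.0: variability entirely lost
```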
Fig. 5.

SNR for different hidden nodes in the temperature network. The blue circles denote the calibration period (Cal) and the red plus signs denote the validation period (Val) at location P.


Fig. 6.

As in Fig. 5, but for the precipitation network.


Figure 5 shows that hn = 10 yields the best SNR for the temperature network; thus, we select hn = 10, for which the MSE is sufficiently small while ρ and SNR are large. Figure 6 illustrates that eight hidden nodes lead to a higher SNR than five, six, or seven nodes for the precipitation network. Moreover, the temporal variability of the outputs is not very sensitive to the number of hidden nodes once the number exceeds seven, particularly in the validation period.

With hn set to 10 for the temperature network, the performance of the network is then evaluated for different learning rates (see Fig. 7). Figure 7 shows that the network can improve the results over a range of learning rates in terms of both the MSE (bars below the reference lines in Fig. 7a) and the correlation (bars above the reference lines in Fig. 7b), and that the performance of the network is not very sensitive to learning rates smaller than 0.01. Note that a small learning rate slows down the training process, while a large learning rate may cause the network to oscillate around the optimal solution. Thus, η = 0.01, which yields the smallest MSE, is a reasonable choice to train the network. As with the temperature network, different learning rates have a small impact on the performance of the precipitation network (figure not shown). Therefore, we use η = 0.01 and hn = 8 for the precipitation network, for which the MSE and correlation are at their minimum and maximum, respectively (results are similar for other months and seasons).

Fig. 7.

As in Fig. 3, but for different learning rates.


The performance of the proposed ANN is illustrated in the scatterplots of Figs. 8 and 9 at location P for temperature and precipitation, respectively.

Fig. 8.

(top) Target vs input temperature before bias correction (red plus signs) and target vs ANN output temperature (blue circles) at location P for the (a) calibration and (b) validation. (bottom) CDFs of the temperature values for inputs (blue dotted line), targets (red solid line), and ANN outputs (green dashed line) for the (c) calibration and (d) validation.


Fig. 9.

As in Fig. 8, but for precipitation.


Figures 8a, 8b, 9a, and 9b show that the ANN outputs (bias-corrected temperature and precipitation) are in close agreement with the targets. The correlations between the MFD and CCSM3 temperatures before bias correction (target vs. input) are 0.52 and 0.50, improved to 0.76 and 0.75 by the ANN model in the calibration and validation periods, respectively. The correlations between the CRU and CCSM3 precipitation before bias correction (target vs. input) are 0.56 and 0.46, improved to 0.76 and 0.66 by the ANN model in the calibration and validation periods, respectively. The results indicate that the trained network noticeably increases the original correlations and improves estimates of air temperature and precipitation. To shed more light on the performance of the ANN approach in a distribution sense, the CDFs of the input (In), MFD (Target), and ANN values (Output) are illustrated in Fig. 8 (bottom) and Fig. 9 (bottom) for temperature and precipitation, respectively.

It is evident from Fig. 8 (bottom) and Fig. 9 (bottom) that the CDF of the ANN outputs follows the CDF of the target closely: although the ANN does not include any probabilistic distribution mapping technique, it preserves the probabilistic structure of the target.

For a complete assessment of the ANN model, a linear regression model and the most widely used distribution-based approaches, the CDF and EDCDF methods, are used to correct the biases of the CCSM3 temperature and precipitation at location P. The linear regression (LR) model is equivalent to the ANN model with only input–output layers (no hidden layer) and a linear transfer function. The predictor variables of the ANN model were also the optimal inputs for the LR. The calibration and validation datasets are used to construct the CDFs of the historical model and model projection outputs, respectively. A full statistical comparison among these methods is presented in Tables 1 and 2.
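The CDF-method baseline can be sketched as empirical quantile mapping: each new model value is replaced by the observed value at the same quantile of the historical model distribution. This is a minimal illustration of the technique, not the authors' exact implementation; the data below are synthetic.

```python
import numpy as np

def cdf_quantile_map(model_hist, obs_hist, model_new):
    """Empirical quantile mapping: map each new model value to the observed
    value at its quantile within the historical model distribution."""
    model_sorted = np.sort(np.asarray(model_hist))
    # Empirical quantile of each new value in the historical model CDF
    q = np.searchsorted(model_sorted, model_new, side="right") / model_sorted.size
    return np.quantile(np.asarray(obs_hist), np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(1)
obs_hist = rng.normal(0.0, 1.0, 5000)   # hypothetical observations
model_hist = obs_hist + 2.0             # model with a constant +2 bias
corrected = cdf_quantile_map(model_hist, obs_hist, np.array([2.0, 3.0]))
```

As the text notes, this mapping matches distributions by construction but does not guarantee improved MSE or correlation, because values are reassigned by quantile rather than by time step.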

Table 1.

Performance of the CDF, EDCDF, LR, and ANN to correct the biases of temperature in terms of the MSE (K2), bias (K), ρ, and KS for the calibration (March 1970–88) and validation (March 1989–2008) at location P. The first row (In) refers to the original statistics before bias correction.

Table 2.

Performance of the CDF, EDCDF, LR, and ANN to correct the biases of precipitation in terms of the MSE (mm2), bias (mm), ρ, and KS for the calibration (MAM 1901–56) and validation (MAM 1957–2013) at location P. The first row (In) refers to the original statistics before bias correction.


The results in Tables 1 and 2 indicate that the regression methods, in particular ANN, can decrease the MSE/bias and increase correlation considerably in both calibration and validation periods. For a complete assessment of the regression models’ performance, the percent improvement (Imp) of each statistic is calculated as
Imp = (|A_In − A_Out| / A_In) × 100, (8)

where A refers to the statistic (MSE, bias, ρ, or KS); A_In denotes the statistic of the original CCSM3 variables (temperature or precipitation) relative to the observations, and A_Out denotes the statistic of the regression model output (LR or ANN) relative to the observations.
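A small helper makes the bookkeeping explicit. The signed form below is our interpretation of Eq. (8): for error-type statistics (MSE, bias, KS) a decrease counts as improvement, while for correlation an increase does, so positive values always mean improvement.

```python
def percent_improvement(a_in, a_out, higher_is_better=False):
    # Percent improvement of a statistic after bias correction, signed so
    # that positive values mean improvement.
    if higher_is_better:                   # e.g., correlation
        return 100.0 * (a_out - a_in) / a_in
    return 100.0 * (a_in - a_out) / a_in   # e.g., MSE, bias, KS
```

For example, an MSE reduced from 10 to 3 is a 70% improvement, and a correlation raised from 0.50 to 0.75 is a 50% improvement.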

The temperature network improves the MSE by 70% and 71% and increases the original correlation by 46% and 50% for the calibration and validation periods, respectively. In comparison, the improvements in the MSE from the LR are 57% and 59%, and the improvements in the correlation are 35% and 40% in the calibration and validation, respectively. The precipitation network improves the MSE by 52% and 51% and increases the original correlation by 36% and 43% for the calibration and validation periods, respectively. The improvements in the MSE by the LR are 37% and 36%, and the improvements in the correlations are 20% and 17%. The results show that the distribution-based methods do not perform well in improving the MSE/biases of precipitation or the correlation of the temperature and precipitation time series. The distribution-based approach maps the historical modeled variables onto the observed ones at each percentile. Because the modeled value at a given quantile may not coincide with the observed value at that quantile, this mapping does not guarantee a decrease in the MSE or an increase in the correlation of the results.

To quantify the difference between the estimates in a distribution sense, the Kolmogorov–Smirnov statistic (KS) is provided in the last column of the tables. KS measures the difference between CDFs and thus the capability of an estimated distribution to approximate a target distribution; the smaller the KS, the closer the CDFs. The smaller KS values obtained from the CDF and EDCDF methods indicate that the distributions estimated by those methods are closer to the distribution of the target values. This improvement in KS follows from the fact that the CDF and EDCDF methods explicitly map the distribution of the modeled variables onto the observed one. However, while the quantile-based mapping methods somewhat outperform the regression models in the KS sense for the calibration period (as expected), all methods show comparable performance in a distributional sense for the validation period (the KS values for all methods are close in the validation).
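The two-sample KS statistic is the maximum vertical distance between two empirical CDFs. `scipy.stats.ks_2samp` computes it directly; the minimal numpy sketch below shows what is being measured.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical distance
    between the empirical CDFs of a and b (0 = identical, 1 = disjoint)."""
    a, b = np.sort(np.asarray(a)), np.sort(np.asarray(b))
    grid = np.concatenate([a, b])                       # evaluate at all jump points
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(cdf_a - cdf_b)))

ks_statistic([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # 0.0: identical samples
ks_statistic([0.0, 1.0], [10.0, 11.0])          # 1.0: non-overlapping samples
```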

The closeness of the improvements in the calibration and validation periods by the regression models indicates that the trained model can perform well with new and unseen data (generalization ability of the ANN). This generalization capability is vital, since the main objective of the proposed method is to construct a trained network that performs well on a new set of inputs. Note that the generalization capability of the ANN in this work presumes that the fundamental atmospheric physics captured by CCSM3 remains the same during the calibration and validation periods, as well as at any other time. Bias correction under a future climate will be only as good as the robustness of the physical representation of atmospheric processes in CCSM3. The CCSM3 outputs are fed to the ANN model, in a postprocessing step, to reduce the biases of temperature and precipitation. The ANN is constructed under the assumption that CCSM3 has similar systematic errors in simulating temperature and precipitation over different time periods; thus, the derived relationship between CCSM3 outputs and bias-corrected temperature and precipitation remains time invariant. The performance of the network depends on the richness of the training set in spanning the biases resulting from various meteorological conditions.

The best hn and η were corroborated for many locations in the study domain. The results confirmed that η = 0.01 and hn = 10 (for temperature) or 8 (for precipitation) are reasonable choices for our networks to reduce the biases of temperature and precipitation values. Therefore, we use those values for all locations in the domain.

7. Results

The capability of the constructed ANN to reduce the biases of the CCSM3 temperature and precipitation is evaluated pixel by pixel (1.4° × 1.4°) over the entire study domain (23°S–12°N, 80°–35°W; Fig. 1) for the calibration and validation periods. The CCSM3 outputs (inputs of the regression models) and the MFD temperature and CRU precipitation (targets) are divided into separate calibration and validation periods (section 6). The performance of the regression models in improving the statistics (MSE, bias, ρ, and KS) of temperature and precipitation [see Eq. (8)] during the calibration and validation periods is illustrated in Figs. 10 and 11 for March and MAM, respectively.

Fig. 10.

(from top to bottom) Improvements of the MSE (ImpMSE), bias (ImpBias), ρ (Impρ), and KS (ImpKS) by the linear (LR) and nonlinear (ANN) methods for T during March using the calibration (first and second columns) and the validation (third and fourth columns) datasets.


Fig. 11.

As in Fig. 10, but for P during MAM.


Figures 10 and 11 indicate that although both the LR and ANN models are able to improve the results for all pixels in the study domain, the ANN is consistently better than the LR in improving the MSE, bias, ρ, and KS for most pixels in both the calibration (first and second columns of Figs. 10 and 11) and validation (third and fourth columns of Figs. 10 and 11) periods. Because the trained network performs well on the validation dataset, which is independent of the calibration, we can conclude that the developed ANN has generalization capability. The most substantial improvements in MSE by both the LR and ANN models [shown in red in Fig. 10 (first row) and Fig. 11 (first row)] occur over areas of the domain where CCSM3 fails to simulate temperature and precipitation well. Indeed, the biases in the CCSM3 simulations over those areas are high and can be reduced by the regression models remarkably well. The correlation of the estimated field with the targets is improved by the regression models, in particular by the ANN. The blue color over the northern and western parts of the domain in Fig. 11 (third row) marks areas with high original correlation for precipitation; even there, the correlation can be increased by the regression models, particularly the ANN. The regression models can also remarkably increase very low original correlations [see the red color in Fig. 11 (third row)]. The KS is likewise improved considerably by the ANN. Smaller improvements in the error or KS over some regions (shown in blue) reflect the fact that the original error or KS between the input and the target is already small there. While the LR shows performance comparable to the ANN, the ANN yields better results in terms of the MSE, bias, ρ, and particularly KS for temperature and precipitation in both periods. Similar improvements of the statistics in the calibration and validation periods reveal that the trained model can perform well when observations are not available.

The regression models, in particular the ANN, reduce the biases of the CCSM3 temperature and precipitation and improve their time series in the correlation and distributional senses for all months and seasons (figures not shown). Tables 3 and 4 summarize the overall domain-average percent improvements of the statistics by the LR and ANN models for both calibration and validation periods in all months and seasons. The results confirm that the ANN is able to construct a reliable (robust) relationship between the input and output to reproduce bias-corrected temperature and precipitation. While all statistics of the bias-corrected variables are improved significantly, the larger improvement of the correlation for precipitation indicates that the original correlation of CCSM3 precipitation is low and the developed model is able to increase it substantially.

Table 3.

Domain-average percent improvements of the MSE, bias, ρ, and KS of temperature for the calibration (1970–88) and validation (1989–2008).

Table 4.

Domain-average percent improvements of the MSE, bias, ρ, and KS of precipitation for the calibration (1901–56) and validation (1957–2013).


8. Discussion and conclusions

This study develops a new method to correct the biases of climate variables (temperature and precipitation) using an artificial neural network. A three-layer feedforward neural network is able to construct a valid relationship between a set of inputs and outputs (bias-corrected temperature or precipitation). The developed model trains the network during the calibration period and then uses the constructed network for bias correction over the entire period (calibration and validation). The trained network has the temporal generalization ability to perform well with new and unseen data in the validation period. The model uses a baseline period to derive a proper input–output relationship and then applies it to the validation period. The ANN model assumes that the CCSM3 systematic errors in simulating temperature and precipitation vary in space and throughout the year but do not change over the different time periods tested: calibration and validation (Moghim et al. 2017). Thus, the derived relationship between CCSM3 outputs and bias-corrected temperature and precipitation remains invariant. The good performance of the model in the two time periods indicates that this assumption is reasonable and that the nonlinear structure of the bias correction model is robust. This assumption would not hold if, in the future, the physics of the climate model were inadequate to represent a new climate regime. Hopefully that is not the case; if it were, the ANN bias correction would need to be recalibrated.

The skill of CCSM3 in simulating climate variables depends on the validity of the schemes and parameterizations used in each month and location. The Andes Mountains along the west coast of the study domain are one of the regions where CCSM3 shows large errors in temperature and precipitation, and these large errors extend to other regions in different months. Although the ANN improved the results over the entire study domain, the larger improvements in MSE, KS, or ρ occurred over regions with larger original error or KS or smaller original ρ. The reduction of the error is most noticeable over regions with high original correlation between the modeled data and the target. Since the ANN model is a data-driven approach, a high correlation can be an important factor in constructing a proper relationship between the input and output.

The ANN method identifies indicator or explanatory variables for the climate model systematic errors that make physical sense. Variables like air temperature, skin temperature, specific humidity, and net longwave and shortwave radiation as inputs to the ANN best reduced the temperature biases. Zero-, one-, two-, and three-month lags of precipitation and the standard deviation from 3 × 3 neighbors around the pixel of interest were the best predictor inputs to reduce the precipitation biases. The use of the standard deviation of precipitation from the neighboring pixels improves the precipitation error. It can also adjust the variance of precipitation at each pixel relative to the neighbors. Since the change of temperature is smooth in time and space, adding lag-time temperature or variance of temperature from the neighbors does not improve the results.
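The precipitation predictor set described above (lags 0–3 months plus the 3 × 3 neighborhood standard deviation) can be assembled as follows. This is a sketch for an interior pixel; the function name, array layout, and synthetic data are our own assumptions, not the paper's code.

```python
import numpy as np

def precip_predictors(field, i, j):
    """Build the five precipitation predictors for interior pixel (i, j):
    precipitation at lags 0-3 months plus the standard deviation over the
    3x3 neighborhood at lag 0. field has shape (time, lat, lon)."""
    t = field.shape[0]
    # Columns 0..3: precipitation at lags 0, 1, 2, 3 (rows aligned in time)
    lags = np.stack([field[3 - k: t - k, i, j] for k in range(4)], axis=1)
    # Column 4: std dev over the 3x3 box centered on (i, j), aligned with lag 0
    neigh = field[3:, i - 1: i + 2, j - 1: j + 2]
    sd = neigh.reshape(t - 3, 9).std(axis=1, keepdims=True)
    return np.hstack([lags, sd])    # shape (t - 3, 5)

rng = np.random.default_rng(0)
field = rng.gamma(2.0, 50.0, size=(24, 5, 5))  # 24 months of synthetic precipitation
X = precip_predictors(field, 2, 2)
```

The first three months are dropped because lag-3 values are unavailable for them; boundary pixels would need edge handling for the 3 × 3 box.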

The regression models (LR and ANN) improve all statistics, including the MSE, bias, ρ, and KS, in both calibration and validation periods. The main feature of the ANN method is that it can improve all metrics even when the original correlation between the modeled data and the observations is low. Although the regression models do not directly take into account any specific information about the probabilistic distribution of the underlying variables, they reproduce the distribution of the observed values during calibration and validation. The ANN outperforms the LR model in all statistics significantly (p value much smaller than 0.01). The average improvements of the MSE, bias, ρ, and KS over all months and the entire domain for temperature are 76.33%, 52.5%, 29.75%, and 73% by the ANN and 68.92%, 45.08%, 20%, and 60.92% by the LR, respectively. The average improvements for precipitation are 70.75%, 55%, and 56.5% by the ANN and 63.25%, 46.5%, and 40.75% by the LR, respectively.

Accurate and high-resolution climate datasets are essential for effective and efficient long-term environmental management and climate change assessment. This study offers a methodology (using an ANN) to correct the biases of modeled climate variables (temperature and precipitation). Although the proposed method is computationally advantageous over dynamical downscaling methods, the constructed statistical relationships between predictors and predictands do not capture the underlying physical relationships between variables. The flexible and powerful predictive capacity of the ANN allows us to employ the trained model effectively for bias correction of climate variables in the projection period. The parallel structure of the neural network is fault tolerant and self-organized, and, after training, the internal structure of the network is able to respond properly to unseen inputs. Thus, the neural network is able to train, adapt, and self-organize information. The good performance of the proposed model over northern South America, spanning different topographies, climates, and land uses (e.g., forest, pasture, agriculture, bare land, and mountains), points to the potential of this method as a bias correction approach over many other regions of the world. The performance of the ANN could be improved using other training (learning) approaches that determine the structure of the ANN and the weights simultaneously (e.g., stochastic approaches such as genetic algorithms and simulated annealing). Future research should be devoted to extending the ANN model to correct the biases of climate variables like temperature and precipitation simultaneously.

Acknowledgments

This research was sponsored by the NASA Precipitation Measurement Mission (PMM) science program through Grants NNX10AG84G, NNX11AQ33G, and NNX13AH35G. The support by the K. Harrison Brown Family Chair is also gratefully acknowledged. We recognize Drs. Sheffield, Goteti, and Wood for providing the bias-corrected NCEP–NCAR reanalyses product and also modeling groups including Community Climate System Model (CCSM3), National Center for Atmospheric Research (NCAR), and Climatic Research Unit (CRU) for making datasets publicly available. We are thankful to Drs. Aris P. Georgakakos, Kuo-lin Hsu, Mohammad Ebtehaj, and Shawna L. McKnight for their help and suggestions in this work.

REFERENCES

  • Alexander, G. D., J. A. Weinman, and J. L. Schols, 1998: The use of digital warping of microwave integrated water vapor imagery to improve forecasts of marine extratropical cyclones. Mon. Wea. Rev., 126, 14691496, doi:10.1175/1520-0493(1998)126<1469:TUODWO>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Alexander, G. D., J. A. Weinman, V. M. Karyampudi, W. S. Olson, and A. C. L. Lee, 1999: The effect of assimilating rain rates derived from satellites and lightning on forecasts of the 1993 superstorm. Mon. Wea. Rev., 127, 14331457, doi:10.1175/1520-0493(1999)127<1433:TEOARR>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Aragao, L., Y. Malhi, N. Barbier, L. Anderson, S. Saatchi, and E. Shimabukuro, 2008: Interactions between rainfall, deforestation and fires during recent years in the Brazilian Amazonia. Philos. Trans. Roy. Soc. London, B363, 17791785, doi:10.1098/rstb.2007.0026.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • ASCE Task Committee, 2000: Artificial neural networks in hydrology. I: Preliminary concepts. J. Hydrol. Eng., 5, 115137, doi:10.1061/(ASCE)1084-0699(2000)5:2(115).

    • Search Google Scholar
    • Export Citation
  • Berg, A. A., J. S. Famiglietti, J. P. Walker, and P. R. Houser, 2003: Impact of bias correction to reanalysis products on simulations of North American soil moisture and hydrological fluxes. Geophys. Res. Lett., 108, 4490, doi:10.1029/2002JD003334.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Blum, E. K., and L. K. Li, 1991: Approximation theory and feedforward networks. Neural Networks, 4, 511515, doi:10.1016/0893-6080(91)90047-9.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Bonan, G. B., and S. Levis, 2006: Evaluating aspects of the Community Land and Atmosphere Models (CLM3 and CAM3) using a Dynamic Global Vegetation Model. J. Climate, 19, 22902301, doi:10.1175/JCLI3741.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Burger, G., 1996: Expanded downscaling for generating local weather scenarios. Climate Res., 7, 111128, doi:10.3354/cr007111.

  • Bush, M. B., M. R. Silman, C. McMichael, A. Restrepo-Correa, D. H. Urrego, A. Correa, and S. Saatchi, 2008: Fire, climate change and biodiversity in Amazonia: A late-Holocene perspective. Philos. Trans. Roy. Soc. London, B363, 17951802, doi:10.1098/rstb.2007.0014.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Butler, R., 2006: Amazon destruction. Accessed 22 May 2017. [Available online at http://rainforests.mongabay.com/amazon/amazon_destruction.html.]

  • Cayan, D. R., E. P. Maurer, M. D. Dettinger, M. Tyree, and K. Hayhoe, 2008: Climate change scenarios for the California region. Climatic Change, 87 (Suppl.), 2142, doi:10.1007/s10584-007-9377-6.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Chang, C.-Y., J. A. Carton, S. A. Grodsky, and S. Nigam, 2007: Seasonal climate of the tropical Atlantic sector in the NCAR Community Climate System model 3: Error structure and probable causes of errors. J. Climate, 20, 10531070, doi:10.1175/JCLI4047.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Chen, J., F. P. Brissette, D. Chaumont, and M. Braun, 2013: Finding appropriate bias correction methods in downscaling precipitation for hydrologic impact studies over North America. Water Resour. Res., 49, 41874205, doi:10.1002/wrcr.20331.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Christensen, J. H., and Coauthors, 2007: Regional climate projections. Climate Change 2007: The Physical Science Basis, S. Solomon et al., Eds., Cambridge University Press, 847–940.

  • Cochrane, M. A., and C. P. Barber, 2009: Climate change, human land use and future fires in the Amazon. Global Change Biol., 15, 601612, doi:10.1111/j.1365-2486.2008.01786.x.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Collins, W. D., and Coauthors, 2006: The Community Climate System Model version 3 (CCSM3). J. Climate, 19, 21222143, doi:10.1175/JCLI3761.1.

  • Costa, M. H., and G. F. Pires, 2010: Effects of Amazon and central Brazil deforestation scenarios on the duration of the dry season in the arc of deforestation. Int. J. Climatol., 30, 19701979, doi:10.1002/joc.2048.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Crane, R. G., and B. C. Hewitson, 1998: Doubled CO2 precipitation changes for the Susquehanna basin: Down-scaling from the GENESIS general circulation model. Int. J. Climatol., 18, 6576, doi:10.1002/(SICI)1097-0088(199801)18:1<65::AID-JOC222>3.0.CO;2-9.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Dai, A., 2001a: Global precipitation and thunderstorm frequencies. Part I: Seasonal and interannual variations. J. Climate, 14, 10921111, doi:10.1175/1520-0442(2001)014<1092:GPATFP>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Dai, A., 2001b: Global precipitation and thunderstorm frequencies. Part II: Diurnal variations. J. Climate, 14, 11121128, doi:10.1175/1520-0442(2001)014<1112:GPATFP>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Dai, A., 2006: Precipitation characteristics in eighteen coupled climate models. J. Climate, 19, 46054630, doi:10.1175/JCLI3884.1.

  • Durbin, R., and D. E. Rumelhart, 1989: Product units: A computationally powerful and biologically plausible extension to backpropagation networks. Neural Comput., 1, 133–142, doi:10.1162/neco.1989.1.1.133.

  • Funahashi, K. I., 1989: On the approximate realization of continuous mappings by neural networks. Neural Networks, 2, 183–192, doi:10.1016/0893-6080(89)90003-8.

  • Harris, I., P. D. Jones, T. J. Osborn, and D. H. Lister, 2014: Updated high-resolution grids of monthly climatic observations – the CRU TS3.10 dataset. Int. J. Climatol., 34, 623–642, doi:10.1002/joc.3711.

  • Hassoun, M. H., 1995: Fundamentals of Artificial Neural Networks. MIT Press, 511 pp.

  • Hayhoe, K., and Coauthors, 2004: Emissions pathways, climate change, and impacts on California. Proc. Natl. Acad. Sci. USA, 101, 12 422–12 427, doi:10.1073/pnas.0404500101.

  • Hecht-Nielsen, R., 1990: Neurocomputing. Addison-Wesley, 433 pp.

  • Hidalgo, H. G., M. D. Dettinger, and D. R. Cayan, 2008: Downscaling with constructed analogues: Daily precipitation and temperature fields over the United States. California Energy Commission PIER Final Project Rep. CEC-500-2007-123, 48 pp. [Available online at http://www.energy.ca.gov/2007publications/CEC-500-2007-123/CEC-500-2007-123.PDF.]

  • Hoffman, R., Z. Liu, J. Louis, and C. Grassotti, 1995: Distortion representation of forecast errors. Mon. Wea. Rev., 123, 2758–2770, doi:10.1175/1520-0493(1995)123<2758:DROFE>2.0.CO;2.

  • Hornik, K., 1989: Multilayer feedforward networks are universal approximators. Neural Networks, 2, 359–366, doi:10.1016/0893-6080(89)90020-8.

  • Hornik, K., 1991: Approximation capabilities of multilayer feedforward networks. Neural Networks, 4, 251–257, doi:10.1016/0893-6080(91)90009-T.

  • Horton, R. M., V. Gornitz, D. A. Bader, A. C. Ruane, R. Goldberg, and C. Rosenzweig, 2011: Climate hazard assessment for stakeholder adaptation planning in New York City. J. Appl. Meteor. Climatol., 50, 2247–2266, doi:10.1175/2011JAMC2521.1.

  • Hsu, K. L., H. V. Gupta, and S. Sorooshian, 1995: Artificial neural network modeling of the rainfall–runoff process. Water Resour. Res., 31, 2517–2530, doi:10.1029/95WR01955.

  • Hsu, K. L., X. Gao, S. Sorooshian, and H. V. Gupta, 1996: Precipitation estimation from remotely sensed information using artificial neural networks. J. Appl. Meteor., 36, 1176–1190, doi:10.1175/1520-0450(1997)036<1176:PEFRSI>2.0.CO;2.

  • Hsu, K. L., H. V. Gupta, X. Gao, and S. Sorooshian, 1999: Estimation of physical variables from multichannel remotely sensed imagery using a neural network: Application to rainfall estimation. Water Resour. Res., 35, 1605–1618, doi:10.1029/1999WR900032.

  • Ines, A. V. M., and J. W. Hansen, 2006: Bias correction of daily GCM rainfall for crop simulation studies. Agric. For. Meteor., 138, 44–53, doi:10.1016/j.agrformet.2006.03.009.

  • IPCC, 2007: Climate Change 2007: The Physical Science Basis. Cambridge University Press, 996 pp.

  • Joorabchi, A., H. Zhang, and M. Blumenstein, 2007: Application of artificial neural networks in flow discharge prediction for the Fitzroy River, Australia. J. Coastal Res., SI50, 287–291.

  • Khotanzad, A., R. Afkhami-Rohani, T.-L. Lu, A. Abaye, M. Davis, and D. J. Maratukulam, 1997: ANNSTLF—A neural-network-based electric load forecasting system. IEEE Trans. Neural Networks, 8, 835–846, doi:10.1109/72.595881.

  • Kretzschmar, R., P. Eckert, D. Cattani, and F. Eggimann, 2004: Neural network classifiers for local wind prediction. J. Appl. Meteor., 43, 727–738, doi:10.1175/2057.1.

  • Kuligowski, R. J., and A. P. Barros, 1998a: Experiments in short-term precipitation forecasting using artificial neural networks. Mon. Wea. Rev., 126, 470–482, doi:10.1175/1520-0493(1998)126<0470:EISTPF>2.0.CO;2.

  • Kuligowski, R. J., and A. P. Barros, 1998b: Localized precipitation forecasts from a numerical weather prediction model using artificial neural networks. Wea. Forecasting, 13, 1194–1204, doi:10.1175/1520-0434(1998)013<1194:LPFFAN>2.0.CO;2.

  • Kwok, T.-Y., and D. Y. Yeung, 1997: Objective functions for training new hidden units in constructive neural networks. IEEE Trans. Neural Networks, 8, 1131–1148, doi:10.1109/72.623214.

  • Lachtermacher, G., and J. D. Fuller, 1994: Backpropagation in hydrological time series forecasting. Stochastic and Statistical Methods in Hydrology and Environmental Engineering, Water Science and Technology Library, Vol. 10/3, Springer, 229–242, doi:10.1007/978-94-017-3083-9_18.

  • Levy, A. A. L., M. Jenkinson, W. Ingram, F. H. Lambert, C. Huntingford, and M. Allen, 2014a: Correcting precipitation feature location in general circulation models. J. Geophys. Res. Atmos., 119, 13 350–13 369, doi:10.1002/2014JD022357.

  • Levy, A. A. L., M. Jenkinson, W. Ingram, F. H. Lambert, C. Huntingford, and M. Allen, 2014b: Increasing the detectability of external influence on precipitation by correcting feature location in GCMs. J. Geophys. Res. Atmos., 119, 12 466–12 478, doi:10.1002/2014JD02235.

  • Li, H., J. Sheffield, and E. F. Wood, 2010: Bias correction of monthly precipitation and temperature fields from Intergovernmental Panel on Climate Change AR4 models using equidistant quantile matching. J. Geophys. Res., 115, D10101, doi:10.1029/2009JD012882.

  • Li, S. C., and D. Zheng, 2003: Applications of artificial neural networks to geosciences: Review and prospect (in Chinese). Adv. Earth Sci., 18, 68–76.

  • Liu, Z., C. Peng, W. Xiang, D. Tian, X. Deng, and M. Zhao, 2010: Application of artificial neural networks in global climate change and ecological research: An overview. Chin. Sci. Bull., 55, 3853–3863, doi:10.1007/s11434-010-4183-3.

  • Maier, H. R., and G. C. Dandy, 1998: The effect of internal parameters and geometry on the performance of back-propagation neural networks: An empirical study. Environ. Modell. Software, 13, 193–209, doi:10.1016/S1364-8152(98)00020-6.

  • Maier, H. R., and G. C. Dandy, 2000: Neural networks for the prediction and forecasting of water resources variables: A review of modelling issues and applications. Environ. Modell. Software, 15, 101–124, doi:10.1016/S1364-8152(99)00007-9.

  • Maier, H. R., A. Jain, G. C. Dandy, and K. P. Sudheer, 2010: Methods used for the development of neural networks for the prediction of water resource variables in river systems: Current status and future directions. Environ. Modell. Software, 25, 891–909, doi:10.1016/j.envsoft.2010.02.003.

  • Malhi, Y., and Coauthors, 2009: Exploring the likelihood and mechanism of a climate change–induced dieback of the Amazon rainforest. Proc. Natl. Acad. Sci. USA, 106, 20 610–20 615, doi:10.1073/pnas.0804619106.

  • Marzban, C., 2003: Neural networks for postprocessing model output: ARPS. Mon. Wea. Rev., 131, 1103–1111, doi:10.1175/1520-0493(2003)131<1103:NNFPMO>2.0.CO;2.

  • Masters, T., 1993: Practical Neural Network Recipes in C++. Academic Press, 493 pp.

  • Maurer, E. P., and H. G. Hidalgo, 2008: Utility of daily vs. monthly large-scale climate data: An intercomparison of two statistical downscaling methods. Hydrol. Earth Syst. Sci., 12, 551–563, doi:10.5194/hess-12-551-2008.

  • May, D. B., and M. Sivakumar, 2009: Prediction of urban stormwater quality using artificial neural networks. Environ. Modell. Software, 24, 296–302, doi:10.1016/j.envsoft.2008.07.004.

  • Meehl, G. A., H. Teng, and G. Branstator, 2006: Future changes of El Niño in two global coupled climate models. Climate Dyn., 26, 549–566, doi:10.1007/s00382-005-0098-0.

  • Moghim, S., 2015: Bias correction of global circulation model outputs using artificial neural networks. Ph.D. thesis, Dept. of Civil and Environmental Engineering, Georgia Institute of Technology, 278 pp. [Available online at http://hdl.handle.net/1853/55487.]

  • Moghim, S., S. L. McKnight, K. Zhang, A. M. Ebtehaj, R. G. Knox, R. L. Bras, P. R. Moorcroft, and J. Wang, 2017: Bias-corrected data sets of climate model outputs at uniform space–time resolution for land surface modelling over Amazonia. Int. J. Climatol., 37, 621–636, doi:10.1002/joc.4728.

  • Nehrkorn, T., R. N. Hoffman, C. Grassotti, and J.-F. Louis, 2003: Feature calibration and alignment to represent model forecast errors: Empirical regularization. Quart. J. Roy. Meteor. Soc., 129, 195–218, doi:10.1256/qj.02.18.

  • Panofsky, H. A., and G. W. Brier, 1968: Some Applications of Statistics to Meteorology. Pennsylvania State University Press, 224 pp.

  • Rocha, M., P. Cortez, and J. Neves, 2005: Simultaneous evolution of neural network topologies and weights for classification and regression. Computational Intelligence and Bioinspired Systems: IWANN 2005, J. Cabestany, A. Prieto, and F. Sandoval, Eds., Lecture Notes in Computer Science, Vol. 3512, Springer, 59–66, doi:10.1007/11494669_8.

  • Roebber, P. J., S. L. Bruening, D. M. Schultz, and J. V. Cortinas Jr., 2003: Improving snowfall forecasting by diagnosing snow density. Wea. Forecasting, 18, 264–287, doi:10.1175/1520-0434(2003)018<0264:ISFBDS>2.0.CO;2.

  • Rumelhart, D. E., and Y. Chauvin, Eds., 1995: Backpropagation: Theory, Architectures, and Applications. Lawrence Erlbaum Associates, 561 pp.

  • Rumelhart, D. E., G. E. Hinton, and R. J. Williams, 1986: Learning representations by back-propagating errors. Nature, 323, 533–536, doi:10.1038/323533a0.

  • Schalkoff, R. J., 1990: Artificial Intelligence: An Engineering Approach. McGraw-Hill, 640 pp.

  • Schalkoff, R. J., 1997: Artificial Neural Networks. McGraw-Hill, 422 pp.

  • Sheffield, J., G. Goteti, and E. F. Wood, 2006: Development of a 50-year high-resolution global dataset of meteorological forcings for land surface modeling. J. Climate, 19, 3088–3111, doi:10.1175/JCLI3790.1.

  • Shin, S. I., Z. Liu, B. Otto-Bliesner, E. C. Brady, J. E. Kutzbach, and S. P. Harrison, 2003: A simulation of the last glacial maximum climate using the NCAR-CCSM. Climate Dyn., 20, 127–151, doi:10.1007/s00382-002-0260-x.

  • Takahashi, Y., 1993: Generalization and approximation capabilities of multi-layer networks. Neural Comput., 5, 132–139, doi:10.1162/neco.1993.5.1.132.

  • Tamura, S. I., and M. Tateishi, 1997: Capabilities of a four-layered feedforward neural network: Four layers versus three. IEEE Trans. Neural Networks, 8, 251–255, doi:10.1109/72.557662.

  • Taormina, R., K. Chau, and R. Sethi, 2012: Artificial neural network simulation of hourly groundwater levels in a coastal aquifer system of the Venice lagoon. Eng. Appl. Artif. Intell., 25, 1670–1676, doi:10.1016/j.engappai.2012.02.009.

  • Teutschbein, C., and J. Seibert, 2012: Bias correction of regional climate model simulations for hydrological climate-change impact studies: Review and evaluation of different methods. J. Hydrol., 456–457, 12–29, doi:10.1016/j.jhydrol.2012.05.052.

  • von Storch, H., 1999: On the use of “inflation” in statistical downscaling. J. Climate, 12, 3505–3506, doi:10.1175/1520-0442(1999)012<3505:OTUOII>2.0.CO;2.

  • White, H., 1990: Connectionist nonparametric regression: Multilayer feedforward networks can learn arbitrary mappings. Neural Networks, 3, 535–549, doi:10.1016/0893-6080(90)90004-5.

  • Wood, A. W., L. R. Leung, V. Sridhar, and D. P. Lettenmaier, 2004: Hydrologic implications of dynamical and statistical approaches to downscaling climate model outputs. Climatic Change, 62, 189–216, doi:10.1023/B:CLIM.0000013685.99609.9e.

  • Yuan, H., X. Gao, S. L. Mullen, S. Sorooshian, J. Du, and H.-M. H. Juang, 2007: Calibration of probabilistic quantitative precipitation forecasts with an artificial neural network. Wea. Forecasting, 22, 1287–1303, doi:10.1175/2007WAF2006114.1.

  • Zhang, F., and A. P. Georgakakos, 2012: Joint variable spatial downscaling. Climatic Change, 111, 945–972, doi:10.1007/s10584-011-0167-9.

  • Zhang, K., and Coauthors, 2015: The fate of Amazonian ecosystems over the coming century arising from changes in climate, atmospheric CO2, and land use. Global Change Biol., 21, 2569–2587, doi:10.1111/gcb.12903.