En-GARD: A Statistical Downscaling Framework to Produce and Test Large Ensembles of Climate Projections

Ethan D. Gutmann (a) (https://orcid.org/0000-0003-4077-3430), Joseph J. Hamman (a,b), Martyn P. Clark (c), Trude Eidhammer (a), Andrew W. Wood (a), and Jeffrey R. Arnold (d)

(a) National Center for Atmospheric Research, Boulder, Colorado
(b) CarbonPlan, San Francisco, California
(c) Coldwater Laboratory, Centre for Hydrology, University of Saskatchewan, Canmore, Alberta, Canada
(d) Responses to Climate Change Program, U.S. Army Corps of Engineers, Seattle, Washington

Open access

Abstract

Statistical processing of numerical model output has been a part of both weather forecasting and climate applications for decades. Statistical techniques are used to correct systematic biases in atmospheric model outputs and to represent local effects that are unresolved by the model, referred to as downscaling. Many downscaling techniques have been developed, and it has been difficult to systematically explore the implications of the individual decisions made in the development of downscaling methods. Here we describe a unified framework that enables the user to evaluate multiple decisions made in the methods used to statistically postprocess output from weather and climate models. The Ensemble Generalized Analog Regression Downscaling (En-GARD) method enables the user to select any number of input variables, predictors, mathematical transformations, and combinations for use in parametric or nonparametric downscaling approaches. En-GARD enables explicitly predicting both the probability of event occurrence and the event magnitude. Outputs from En-GARD include errors in model fit, enabling the production of an ensemble of projections through sampling of the probability distributions of each climate variable. We apply En-GARD to regional climate model simulations to evaluate the relative importance of different downscaling method choices on simulations of the current and future climate. We show that choice of predictor variables is the most important decision affecting downscaled future climate outputs, while having little impact on the fidelity of downscaled outcomes for current climate. We also show that weak statistical relationships prevent such approaches from predicting large changes in extreme events on a daily time scale.

© 2022 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Arnold’s current affiliation: Climate and Environmental Sciences Office, MITRE Corporation, McLean, Virginia.

Corresponding author: Ethan Gutmann, gutmann@ucar.edu


1. Introduction

Bias correction and statistical downscaling are terms used to describe empirical postprocessing of atmospheric output from regional or global climate models (GCMs) or from numerical weather prediction (NWP) models. The application of these methods removes model biases and increases the spatial and/or temporal resolution of GCM and NWP output (Benestad 2004; Wilby 1998). Without such postprocessing, systematic model errors make it difficult to use raw model projections for some real-world applications, such as regional or local hydrologic modeling. The goal in using statistical downscaling methods for climate applications is to extract the relevant change signals from the GCM while correcting the most prominent deficiencies.

Representing changes in regional climate has been identified as one of the biggest challenges facing climate science applications (Schiermeier 2010; Kerr 2011). Statistical downscaling, sometimes combined with regional climate modeling (Maraun et al. 2010; Walton et al. 2015), can add substantial value for climate applications. Most notably, downscaling can improve the representation of fine-scale climate variability resulting from the presence of features such as mountains or water bodies. Representing the effect of mountains on precipitation and temperature is important for water resource applications to capture mountain snowpack and evaluate changes in the timing of runoff in a future climate. Perhaps because of their widespread use, bias correction and statistical downscaling have received a substantial amount of criticism, with some authors even asking, “Should we apply bias correction?” (Ehret et al. 2012) and “Is bias correction […] possible for nonstationary conditions?” (Teutschbein and Seibert 2013). These are reasonable questions, particularly given the nature of many bias correction applications, where statistical approaches are often applied with little connection to the physical processes involved.

It is generally accepted that some intermediate step is required to make use of climate model output for applications, though the validity of statistical or dynamical downscaling approaches has not been comprehensively established. Previous work has suggested some value from regional climate modeling due to the resolution of processes such as orographic precipitation (Rasmussen et al. 2014), lake effect precipitation (Hall 2014) or the snow albedo feedback effect (Letcher and Minder 2015; Hall 2014), although the fidelity of change signals produced by regional climate modeling has not been verified. The fidelity of statistical downscaling for representing regional climate change has, likewise, seldom been validated. Maraun et al. (2017) called for the inclusion of more physical process representations in downscaling to make the change signal more robust and to reduce statistical artifacts (Maraun et al. 2017; Gutmann et al. 2014; Walton et al. 2015). To improve the physical realism of downscaling methods without the computational costs of a full regional climate model, Gutmann et al. (2016) developed a quasi-dynamical approach; however, process-motivated statistical downscaling strategies have not been studied in detail.

Many statistical methods have been proposed and implemented for a wide range of regional climate applications. Existing approaches span a broad range of methodological complexity. Wilby and Wigley (1997) classified downscaling methods based on the specific algorithm employed, considering regression methods, weather type methods, and weather generator approaches. Rummukainen (1997) proposed a classification based more on the relationship between models and observations, considering perfect prognosis (PP) and model output statistics (MOS) approaches. PP assumes that a model or observational dataset exists with a perfect representation of coarse-scale predictors (e.g., atmospheric outputs from an NWP model) that can be related in a time-synchronous statistical relationship to observations. In PP, typically a reanalysis dataset is taken as the coarse model to develop the statistical relationship, which is then applied to the climate or weather model of interest (e.g., Clark and Hay 2004). In the climate downscaling context, MOS has been described as relating the probability density functions of outputs to those of observations. As such, statistical relationships are estimated from asynchronous properties of the climate system. Examples include the asynchronous regional regression model technique of Stoner et al. (2013) or the direct quantile mapping technique (Panofsky and Brier 1968; Wood et al. 2002). In the numerical weather prediction context (Glahn and Lowry 1972), MOS approaches also require time synchroneity between model output (e.g., reforecasts) and observations.

The most widely used methods in water resource applications in North America take the approach of working directly with the GCM surface variables of interest, e.g., using GCM precipitation to produce downscaled precipitation. The simplest methods, such as the delta method (Hay et al. 2000; Gleick 1986), scale or shift GCM variables to correct for biases, or instead, apply the GCM change signal to observed weather. The more complex quantile mapping step in the bias correction and spatial disaggregation (BCSD) technique (Wood et al. 2004) adjusts individual quantiles of the distribution of the climate variable of interest, and hence corrects the entire probability distribution. Analog methods, such as the bias-corrected constructed analogs technique (Hidalgo et al. 2008) or localized constructed analogs (LOCA) technique (Pierce et al. 2014) use spatial patterns of a climate variable (e.g., precipitation) as a proxy for weather type, to relate GCM simulated weather to observations. The above are single variable downscaling methods, meaning that these methods relate a single model variable to an observed variable. More recently, multivariable statistical correction methods have been developed to refine the distribution correction algorithm to work with the covariance between variables as well (Cannon 2018; Mehrotra and Sharma 2016; Vrac 2018; Guo et al. 2019).

Other multivariable downscaling methods have been developed that use a wider set of variables from GCMs and NWP models. These methods may use upper-air circulation variables from the GCM such as 500-hPa winds and water vapor to predict a downscaled precipitation field. Examples of these methods are the Statistical Downscaling Model (Wilby et al. 2002) and the weighted linear regression model of Clark and Hay (2004), which was developed for weather forecasts. It is common for such statistical downscaling methods to leave out the GCM variable of interest, e.g., precipitation, when it is not considered to be well simulated in the first place. For example, in Hawaii, the absence of a high island in GCMs has meant that GCM precipitation has little relationship to the processes that produce precipitation on the islands. For this reason, Timm et al. (2015) based downscaling entirely on regional circulation patterns. Similarly, Langousis and Kaleris (2014) describe a statistical downscaling method based only on upper-air predictors because GCM precipitation may not be a reliable predictor.

There have been numerous reviews of downscaling approaches so that end users can understand the tradeoffs in each method. Teutschbein and Seibert (2012) provided one such review of a suite of bias correction methods applied to regional climate models showing that while all methods improved the performance of hydrologic models relative to the uncorrected models, the tails of the distribution were most variable between approaches. Gutmann et al. (2014) provided a comparison of four single-variable methods that are used in the water resources sector in the United States, finding that while most methods could reproduce mean values, their skill range in extreme event statistics was wider, particularly as a function of spatial scale. Other studies have compared statistical and dynamical downscaling methods (Wood et al. 2004; Mearns et al. 1999; Gutmann et al. 2012; Takayabu et al. 2016).

In this paper, we describe a new framework, the Ensemble Generalized Analog Regression Downscaling (En-GARD) package, that is designed to enable a more comprehensive and controlled analysis of statistical downscaling methods. En-GARD builds on past work (Clark and Hay 2004; Clark and Slater 2006; Gangopadhyay et al. 2005) to provide a downscaling method capable of incorporating circulation predictors and generating gridded downscaled meteorological datasets with the spatial and temporal variability features needed for applications in hydrology, ecology, and other disciplines (Gutmann et al. 2014; Mizukami et al. 2016). En-GARD enables evaluating the individual decisions made when developing downscaling methods, such as “What is the effect of bias correcting the inputs to the downscaling method?”, “What is the effect of including multiple variables from the GCM or NWP model?”, and “What are the advantages of analog-based or regression-based approaches?”, among many other questions.

2. Datasets

Statistical downscaling develops empirical relationships between a coarse-scale historical period climate model simulation and a reference historical observational dataset. These empirical relationships are then applied to model output for other time periods. In this paper, we train the statistical downscaling using a gridded observation dataset as the reference for reanalysis-based simulations. We then apply the trained downscaling to both reanalysis and GCM-based simulations.

a. Gridded meteorological observations

We use the observed gridded meteorological dataset from Maurer et al. (2002). This dataset provides a multidecadal record of daily precipitation and temperature on a ⅛° regular latitude–longitude grid over the conterminous United States (Fig. 1). It was created by interpolating station observations of daily precipitation and temperature and adjusting for topography. While this is a relatively low-resolution dataset when compared with other available surface forcing datasets such as the Livneh et al. (2013) or Daymet (Thornton et al. 1997) products, the spatial resolution of the Maurer dataset does not affect the methodological tests illustrated here. Future work could explore the use of a dataset such as the ensemble products of Newman et al. (2015) and Tang et al. (2021) to incorporate uncertainty in the observations.

Fig. 1. Domain topography as represented in two different climate models [(top left) CanESM2 and (top right) MRI-CGCM3], along with (bottom left) the WRF 50-km regional model grid, and (bottom right) the En-GARD ⅛° grid.

Citation: Journal of Hydrometeorology 23, 10; 10.1175/JHM-D-21-0142.1

b. Global atmospheric model data

The training datasets used here were derived from two sources. To generate atmospheric fields that were synchronous with the observed precipitation and temperature, the ERA-Interim (ERAi) global reanalysis (Dee et al. 2011) developed by the European Centre for Medium-Range Weather Forecasts (ECMWF) was used for the period 1979–2015. To generate atmospheric fields suitable for future projections, model outputs from a selection of six models from phase 5 of the Coupled Model Intercomparison Project (CMIP5) archive were used (Table 1) for the period 1950–2100. More details on the relationship between these datasets and the En-GARD training and prediction steps are provided in the methods section.

Table 1. CMIP5 climate models used in this study.

c. Weather Research and Forecasting Model

To homogenize the properties of atmospheric data used for training with those used for projection, e.g., grid spacing and mesoscale physics parameterizations, the model data from both reanalysis and GCMs were used as boundary conditions for multidecadal simulations of the Weather Research and Forecasting (WRF) Model (Skamarock et al. 2008). The output from WRF is used as input to En-GARD. WRF simulations were performed for all datasets using a consistent 50-km Lambert conformal conic grid (Fig. 1). This is similar to the resolution of regional climate model simulations in the North American Regional Climate Change Assessment Project (NARCCAP) (Mearns et al. 2013), the Coordinated Regional Downscaling Experiment (CORDEX) (Giorgi and Gutowski 2015), and the Coupled Model Intercomparison Project (CMIP6) High-Resolution Model Intercomparison Project (Haarsma et al. 2016). WRF simulations are not a necessary step for En-GARD, but they reduce the risk that statistical relationships between variables differ between the training and application datasets; more research is needed into the impact of this step. We extracted daily precipitation, zonal and meridional winds, specific humidity, and near-surface daily mean, minimum, and maximum air temperature to use as inputs to En-GARD. Simulations used WRF, version 3.7, and all runs were performed with the Noah land surface model (Chen and Dudhia 2001), WSM6 microphysics (Hong and Lim 2006), Grell–Freitas cumulus parameterization (Grell and Freitas 2014), YSU boundary layer scheme (Hong et al. 2006), and RRTMG radiation model (Iacono et al. 2008). The boundary conditions from global climate models were bias corrected to match the climatology of ERA-Interim reanalysis in the historical period (Bruyère et al. 2013); this correction was performed on all variables and grid cells independently and was computed from time-average biases.
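As a rough sketch of this boundary-condition correction, a per-variable, per-grid-cell mean-bias removal might look like the following; the function name and array shapes are illustrative, not the actual WRF preprocessing code:

```python
import numpy as np

def mean_bias_correct(gcm_hist, gcm_future, era_hist):
    """Remove the time-mean bias of a GCM field relative to reanalysis.

    Each grid cell (trailing dimensions) is corrected independently using
    the historical-period time-average difference, in the spirit of
    Bruyere et al. (2013); the exact operator used to prepare the WRF
    boundary conditions may differ.
    """
    bias = gcm_hist.mean(axis=0) - era_hist.mean(axis=0)
    return gcm_hist - bias, gcm_future - bias

rng = np.random.default_rng(0)
era = 280.0 + rng.normal(0, 2, size=(365, 4, 5))    # reanalysis temperature (K)
gcm_h = 283.0 + rng.normal(0, 2, size=(365, 4, 5))  # biased GCM, historical
gcm_f = 285.0 + rng.normal(0, 2, size=(365, 4, 5))  # biased GCM, future
gcm_h_bc, gcm_f_bc = mean_bias_correct(gcm_h, gcm_f, era)
```

Note that because the same constant is removed from both periods, the GCM change signal is preserved exactly.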

3. En-GARD method

En-GARD provides a flexible framework to construct ensembles of statistical downscaling models under a common software package. En-GARD is a Python-based redevelopment and expansion of the local regression concepts and methods used by Hay and Clark (2003) and Clark and Hay (2004) to postprocess short-range weather forecasts, and spatial field statistics from Clark and Slater (2006), which are also part of the Generalized Meteorological Ensemble Tool (Newman et al. 2015). Most En-GARD applications to date have focused on downscaling for climate applications, though it is also suitable for medium-range weather or seasonal forecasting. Its capabilities have been extended for efficient real-time forecasts as part of the streamflow forecasting system described in Mendoza et al. (2017) and it has been used for seasonal drought forecasts (Zamora et al. 2021). While the code has been available, this paper represents the first documentation of the approach in the scientific literature. En-GARD offers options to implement different statistical algorithms, together with capabilities to interpolate between grids, bias correct, transform, and normalize variables. En-GARD provides an empirical quantile mapping functionality as is commonly used for bias correction (Panofsky and Brier 1968).
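The empirical quantile mapping functionality can be sketched as below; `quantile_map` is a hypothetical helper, and En-GARD's implementation may treat ties and distribution tails differently:

```python
import numpy as np

def quantile_map(model_values, model_train, obs_train):
    """Empirical quantile mapping (after Panofsky and Brier 1968), sketched.

    Each model value is assigned its empirical quantile within the model
    training sample and replaced by the observed value at that quantile.
    """
    model_sorted = np.sort(model_train)
    obs_sorted = np.sort(obs_train)
    # Empirical CDF position of each value within the model training data.
    quantiles = np.searchsorted(model_sorted, model_values) / len(model_sorted)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    # Look up the observed value at the same quantile.
    return np.quantile(obs_sorted, quantiles)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, size=5000)   # "observed" precipitation sample
model = obs * 0.7 + 1.0                # systematically biased model sample
corrected = quantile_map(model, model, obs)
```

After mapping, the corrected distribution matches the observed distribution by construction, which is the property exploited by quantile-mapping bias correction.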

a. Core algorithms

The primary En-GARD algorithms include a regression model, a logistic regression model, and an analog selection process. The most general formulation of such a model is the locally weighted regression, in which the weights assigned to the points used in the statistical model determine their influence on the fit. Modifying the weighting in local regression models permits variations from pure regression (even weights) to pure analog (nonzero weights for a single point and zero slope in the regression). In En-GARD, the use of analog members in the regression corresponds to a weighted regression in which selected analogs have a weight of 1 and all others have a weight of 0.

The implementation of the regression model in En-GARD requires time-synchronous training and observation data, enabling the direct application of a linear model:
y(C, X) = c0 + c1x1 + ⋯ + cpxp + e,   (1)

where we designate the vector C = (c1, …, cp) as the coefficients and c0 as the intercept. The vector X = (x1, …, xp) represents a collection of independent predictor variables, y represents the predictand, and e represents the residuals of the model. For example, x1, x2, and x3 could be modeled precipitation, 500-hPa meridional winds, and 500-hPa zonal winds, and y could be the observed precipitation amount. Not included in this equation is ŷ, the GARD mean precipitation estimate prior to inclusion of error terms. All xp values are interpolated to the observed grid cell using bilinear interpolation. A single-day example showing maps of the predicted mean precipitation, error, probability of occurrence (see section 3b), spatially correlated random field (see section 3c), and final predicted map of precipitation is shown in a flowchart in Fig. 2.
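A minimal illustration of fitting Eq. (1) by ordinary least squares, with synthetic predictors standing in for model precipitation and upper-level winds (`fit_linear` is illustrative, not En-GARD code):

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares fit of Eq. (1): y = c0 + c1*x1 + ... + cp*xp + e.

    Returns the intercept c0, the coefficient vector C, and the residual
    standard deviation, which plays the role of the error term e.
    """
    A = np.column_stack([np.ones(len(y)), X])  # prepend intercept column
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ coeffs
    return coeffs[0], coeffs[1:], residuals.std()

rng = np.random.default_rng(2)
# Synthetic predictors, e.g. model precipitation and 500-hPa U and V winds.
X = rng.normal(size=(1000, 3))
y = 1.5 + X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 1000)
c0, C, err = fit_linear(X, y)
```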
Fig. 2. Flowchart for a single-day precipitation prediction from En-GARD: (top, left to right) GARD mean prediction prior to inclusion of stochastic variability, residuals, probability of precipitation, and SCRFs, and (bottom) final En-GARD predictions.

En-GARD extends the linear model by providing the option to use a subset of analog days from the training period for each day in the time series, and hence train the regression model independently for each day in the time series. This subset of analog days is selected by finding days in the training period that are analogous to the day to be predicted, similar to a regression tree (Breiman et al. 1984), a locally weighted regression (Cleveland 1979), or nearest neighbor semiparametric regression (Altman 1992). For example, to downscale a day in a climate model with heavy precipitation when winds are from the southeast, En-GARD would select only analog days from the reanalysis training period with relatively heavy precipitation and winds from the southeast. These analog days would then be used to fit a regression model, and that model would be applied to the day being downscaled from the climate model. The number of analog days to use can be specified at run time; alternatively, the distance in a Euclidean z-score space can be used to select all analog days within a specified tolerance.
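The analog-day selection in Euclidean z-score space can be sketched as follows; this is a simplified stand-in for En-GARD's search, with an illustrative fixed analog count:

```python
import numpy as np

def select_analogs(train_predictors, target_day, n_analogs=30):
    """Select the training days closest to a target day in z-score space.

    Predictors are standardized by the training-period mean and standard
    deviation, and Euclidean distance ranks candidate analog days.
    """
    mu = train_predictors.mean(axis=0)
    sd = train_predictors.std(axis=0)
    z_train = (train_predictors - mu) / sd
    z_target = (target_day - mu) / sd
    dist = np.sqrt(((z_train - z_target) ** 2).sum(axis=1))
    order = np.argsort(dist)
    return order[:n_analogs], dist

rng = np.random.default_rng(3)
train = rng.normal(size=(3650, 3))    # ~10 years of daily predictors
target = np.array([1.2, -0.5, 0.8])   # predictors for the day to downscale
analog_idx, dist = select_analogs(train, target, n_analogs=30)
```

The selected indices could then feed either a local regression fit (the AR mode) or a direct analog prediction (the PA mode).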

En-GARD further provides the capability to use a purely nonparametric analog approach to downscaling. In this case, the predicted amount, probability of occurrence, and errors are derived from the mean and variance of the selected analogs. This approach is directly akin to analog approaches such as the k-nearest neighbor (KNN) technique (Gangopadhyay et al. 2005). The averages can be weighted based on the distance to each analog member in a Euclidean z-score space. We refer to these abstractions throughout and define three modes of En-GARD: pure regression (PR), pure analog (PA), and analog–regression (AR). PR computes a regression between predictors and predictands for each grid cell; this regression is applied to all time steps to be downscaled. PA finds a collection of historical analog days for every grid cell and every time step independently and uses the mean and standard deviation of the corresponding observations for those analog days with no regression model. AR finds past analogs and fits a regression model to those analogs; this requires a new regression model to be computed for each grid cell and every time step.
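A minimal sketch of the PA mode's use of analog observations, assuming the analog days have already been selected (`pure_analog_predict` is a hypothetical helper, not the En-GARD implementation):

```python
import numpy as np

def pure_analog_predict(analog_obs, threshold=0.0):
    """Pure-analog (PA) prediction from the observations on analog days.

    The predicted amount and error come from the mean and standard
    deviation of observations exceeding the threshold, and the
    probability of occurrence is the fraction of analogs exceeding it.
    """
    exceed = analog_obs > threshold
    p_occurrence = exceed.mean()
    wet = analog_obs[exceed]
    amount = wet.mean() if wet.size else 0.0
    error = wet.std() if wet.size else 0.0
    return amount, error, p_occurrence

# e.g., observed precipitation (mm) on 10 selected analog days
analog_precip = np.array([0.0, 2.1, 0.0, 5.4, 0.0, 0.0, 1.2, 8.3, 0.0, 3.0])
amount, error, p_occ = pure_analog_predict(analog_precip)
```

A distance-weighted variant would simply replace the unweighted mean and standard deviation with weighted versions.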

b. Threshold-dependent predictions

It is often important in statistical applications to predict variables that have threshold-dependent qualities. For example, predicting the amount of precipitation may require the algorithm to separate days with and without any precipitation (0-mm threshold). Similarly, predictions of the occurrence of extreme precipitation events (e.g., 100 mm day−1 threshold), or the occurrence of frost days (below 273.15-K threshold) may be desirable. For these cases, En-GARD computes the probability of exceeding a user-specified threshold based on either an analog approach, a logistic regression, or a hybrid analog logistic regression approach. For the analog approach, the fraction of analog days that exceed the threshold is used as the predicted probability. For the regression approach, a logistic regression model is used:
p̂(C, X) = {1 + exp[−(c0 + c1x1 + ⋯ + cpxp)]}⁻¹,   (2)

where p̂ is the predicted probability of exceedance and C and X are defined as in Eq. (1). For the hybrid approach, a subset of analog days is chosen as input to a logistic regression as described for the mean quantity prediction above.
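Evaluating Eq. (2) for a set of prescribed coefficients might look like the following; in practice, the coefficients would be fitted by maximum likelihood to the binary exceedance series, and the values here are purely illustrative:

```python
import numpy as np

def exceedance_probability(X, c0, C):
    """Evaluate Eq. (2): p = 1 / (1 + exp(-(c0 + C . x))) for each row of X."""
    return 1.0 / (1.0 + np.exp(-(c0 + X @ C)))

# Hypothetical coefficients relating, e.g., precipitation and one wind
# component to the probability of precipitation occurrence.
c0, C = -1.0, np.array([0.8, 0.3])
X = np.array([[0.0, 0.0],   # dry, calm day
              [2.0, 1.0],   # moderate signal
              [5.0, 0.0]])  # strong precipitation signal
p = exceedance_probability(X, c0, C)
```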

To predict the magnitude by which a threshold will be exceeded—for example, an amount of precipitation—En-GARD separates the training data into days that exceed the threshold and days that do not. En-GARD then computes the magnitude based only on the days that exceed the threshold. Threshold-dependent prediction may also be of interest for temperature in a heat wave or frost damage context, or wind speed in wind-energy applications.

c. Variability and ensembles

Some statistical methods produce probabilistic estimates of the climate variables of interest at each grid cell and time step (e.g., Clark and Slater 2006). To include these capabilities, either the residual error term from the regression or the standard deviation of the analogs is saved to improve estimates of temporal variability and provide estimates of uncertainty. En-GARD uses the error term to restore realistic variance by including a stochastic term dependent on that error in the generation of the final product. The error term also permits the generation of ensembles of realizations, each of which is consistent with the input data.

For many regional applications, the spatial and temporal structure of this variability is also important to represent. For example, the spatial structure of precipitation can have important implications for flood predictions. En-GARD uses spatiotemporally correlated random fields (SCRFs) to sample from the probability distributions estimated for each climate variable, grid cell, and time step. The SCRFs are conditioned on spatiotemporal autocorrelations precomputed from a meteorological station dataset. Temporal autocorrelation is based on the observed 1-day lag correlation. Spatial autocorrelations are modeled using an exponential function, with a length scale based on observed station-to-station spatial autocorrelation and a nested approach to improve computational efficiency. For more details see Clark and Slater (2006) or Newman et al. (2015). These SCRFs are normally distributed with a mean of zero and a standard deviation of one. The mean prediction from the regression ŷ is perturbed by multiplying the residual from the regression e by the SCRF:

Y = ŷ + SCRF × e.   (3)
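One simple way to draw a random field with exponentially decaying spatial correlation and apply Eq. (3) is a Cholesky factorization of the covariance matrix; this brute-force draw is a sketch, not the nested, efficient scheme En-GARD uses:

```python
import numpy as np

def exponential_scrf(coords, length_scale, rng):
    """Draw one spatially correlated random field (SCRF) sample.

    Covariance decays exponentially with inter-point distance; the
    Cholesky factor maps independent N(0,1) draws into a correlated
    field that retains N(0,1) marginals.
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = np.exp(-d / length_scale)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(coords)))  # jitter for stability
    return L @ rng.standard_normal(len(coords))

rng = np.random.default_rng(4)
nx = ny = 8
coords = np.array([(i, j) for i in range(ny) for j in range(nx)], dtype=float)
scrf = exponential_scrf(coords, length_scale=3.0, rng=rng)

# Eq. (3): perturb the mean prediction by the residual scaled with the SCRF.
y_hat = np.full(nx * ny, 5.0)  # regression mean prediction (illustrative)
e = 1.5                        # residual standard deviation (illustrative)
Y = y_hat + scrf * e
```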
When a threshold-dependent behavior is included, the SCRF is used in both the sampling of threshold exceedance and the prediction of the magnitude in a two-step process. First, the SCRF value is converted to a uniform distribution, and only days for which this number is greater than the probability of nonexceedance are considered further. For those days, the uniform random number is rescaled by the predicted probability of exceedance, converted back to a normal distribution, and then used as above to predict the magnitude. Conceptually, this is the parametric equivalent of sampling from the distribution of expected values.
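Our reading of this two-step sampling can be sketched as follows; the function and variable names are illustrative, and edge-case handling in En-GARD may differ:

```python
from statistics import NormalDist

def sample_with_threshold(scrf_value, p_exceed, y_hat, e):
    """Two-step threshold-dependent sampling, as we read the description.

    The normal SCRF value is converted to a uniform number; if it exceeds
    the probability of nonexceedance, it is rescaled by the exceedance
    probability, mapped back to a normal deviate, and used as in Eq. (3).
    Returns None for days that do not exceed the threshold.
    """
    nd = NormalDist()
    u = nd.cdf(scrf_value)            # normal -> uniform
    p_nonexceed = 1.0 - p_exceed
    if u <= p_nonexceed:
        return None                   # threshold not exceeded (e.g., dry day)
    u_rescaled = (u - p_nonexceed) / p_exceed
    z = nd.inv_cdf(min(u_rescaled, 1 - 1e-12))  # uniform -> normal
    return y_hat + z * e

dry = sample_with_threshold(-1.0, p_exceed=0.3, y_hat=4.0, e=1.0)
wet = sample_with_threshold(1.5, p_exceed=0.3, y_hat=4.0, e=1.0)
```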

d. Methodological tests

Here we test the En-GARD method by applying it with three different methods, three different variable sets, and six different GCMs. In all cases, En-GARD is trained with the gridded observation dataset and the WRF–ERAi data from the 30-yr period 1980–2009 and applied to a 30-yr historical period (1970–99) and future period (2065–94). In all test cases, the same time series of spatially correlated random fields is used to sample from the expected variability to reduce one source of variation when comparing across methodologies. The three primary predictor variable sets used for precipitation are 1) precipitation alone, 2) precipitation and upper-level zonal (U) and meridional (V) winds, and 3) upper-level winds U and V and specific humidity Q. For temperature we test the following predictor variables: 1) surface air temperature T alone; 2) temperature and upper-level winds U and V; and 3) temperature, winds U and V, and precipitation. We provide analysis with additional variables in the online supplemental material for reference. Although many statistical downscaling comparisons have focused on the method applied to a single predictor set (Gutmann et al. 2014; Dixon et al. 2016; Lanzante et al. 2018), other studies have highlighted the importance of predictor set (Crawford et al. 2007; González-Rojí et al. 2019); thus, a comparison evaluating both the predictor selection and the estimation algorithm is a valuable next step.

e. Metrics

In evaluating climate downscaling methods, there are several key metrics to assess. First and foremost, climate downscaling methods must predict the expected mean and variability observed in current climate. To evaluate this, we compare the mean annual precipitation and temperature modeled by En-GARD with the observed values. Because there is a stochastic component in both En-GARD and the observation dataset used, we evaluate the bias in average values with respect to the variation across ensemble members in the Newman et al. (2015) observed dataset. To illustrate the predicted variance, we also present the empirical probability distribution function (PDF) of precipitation magnitudes in En-GARD in comparison with the observations. Second, and perhaps more important, it is valuable to examine the relative impact of methodological choices on the projected changes in local climate. Here we examine the projected changes in mean annual temperature and precipitation across all models and methods, as well as the changes in the number of extreme precipitation events. Here extreme precipitation is defined as the mean of the 30-yr time series of annual maximum daily amounts.
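The two headline metrics, the fractional bias in the mean and the extreme-precipitation measure, can be computed as in this sketch (synthetic data; leap days ignored, and the helper names are illustrative):

```python
import numpy as np

def fractional_bias(sim, obs):
    """Fractional bias of the simulated mean relative to the observed mean."""
    return (sim.mean() - obs.mean()) / obs.mean()

def mean_annual_max(daily, days_per_year=365):
    """Extreme metric used here: the mean over years of the annual
    maximum daily amount."""
    n_years = daily.size // days_per_year
    yearly = daily[: n_years * days_per_year].reshape(n_years, days_per_year)
    return yearly.max(axis=1).mean()

rng = np.random.default_rng(5)
obs = rng.gamma(0.5, 4.0, size=30 * 365)  # 30 years of daily precip (mm)
sim = obs * 1.1                           # a 10% wet-biased simulation
fb = fractional_bias(sim, obs)
amax = mean_annual_max(obs)
```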

Methods are evaluated for both the current climate (sections 4a–4c), to evaluate how well the methods reproduce the period on which they have been trained, and the future climate (section 4d), to evaluate the impacts that methodological choices have on climate change projections.

4. Results

a. Current climate analysis

We first analyze the depiction of current climate in En-GARD output for mean annual precipitation and mean annual temperature. Statistics are computed over a 30-yr period (1970–99) and compared with observations from this same period. In all cases, En-GARD is trained on WRF–ERAi data. In the cases in which En-GARD is applied to downscale ERAi itself, a leave-one-out cross-validation approach is followed for analog-based methods: the day being downscaled is excluded from the selection of analog days. For the PR method, it is expected that including the single point being downscaled in the collection of over 10 000 points will not have a large impact on the fitted model. En-GARD is applied to WRF–GCM data for the same historical time period, with the WRF–ERAi dataset used for training. Because the GCMs are free-running historical climate simulations, there is no expectation that day-to-day weather sequences match. Only the climate statistics for the period are expected to be reproduced, for example, mean annual precipitation and temperature, probability distribution of precipitation magnitudes, mean annual maximum, or wet day fraction. Unless otherwise specified, En-GARD results are derived using the AR hybrid algorithm; precipitation is predicted on the basis of precipitation, U, and V input variables; and temperature is predicted on the basis of temperature, U, and V inputs.

1) Mean precipitation

The mean annual precipitation predicted by En-GARD for a single GCM is presented in Fig. 3, along with the coarse-model prediction and the observed mean annual precipitation. This illustrates the problem that downscaling methods are designed to address: significant fine-scale details are missing and large biases are present in the global model. En-GARD captures the spatial details depicted in the observations, locally with respect to elevation and regionally in the Pacific Northwest relative to the desert Southwest.

Fig. 3.

Mean annual (top) precipitation and (bottom) temperature from the (left) MIROC5 global model, (center) observations, and (right) En-GARD.

Citation: Journal of Hydrometeorology 23, 10; 10.1175/JHM-D-21-0142.1

The fractional biases between the En-GARD predicted and observed mean annual precipitation are presented in Fig. 4 for the AR, PR, and PA methods when applied to precipitation alone (label p); to precipitation, U, and V winds (label puv); and to U and V winds and specific humidity (label uvq). This figure shows small but consistent biases as well as more subtle variability between methods. A consistent dry bias appears particularly in eastern Montana and along the Texas coast, and a consistent wet bias appears in California. However, these biases are typically small (under 10%) and are not unexpected, because En-GARD does not mathematically force the output to be unbiased.
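The fractional bias statistic in Fig. 4 can be sketched as follows; this is an illustrative reimplementation (array shapes are assumptions for the example), not the paper's analysis code.

```python
import numpy as np

def fractional_bias(pred_annual_means, obs_annual_mean):
    """Fractional bias of the GCM-ensemble-mean annual precipitation.

    pred_annual_means: (n_gcms, ny, nx) mean annual precipitation per GCM.
    obs_annual_mean:   (ny, nx) observed mean annual precipitation.
    Returns (pred - obs) / obs per grid cell, averaged across GCMs.
    """
    return (pred_annual_means.mean(axis=0) - obs_annual_mean) / obs_annual_mean
```

A value of 0.1 therefore corresponds to a 10% wet bias at that grid cell.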

Fig. 4.

Fractional biases in En-GARD precipitation with respect to the observations averaged across the six global climate models for different (top),(middle),(bottom) methods and (left),(center),(right) predictors: label p is precipitation; labels u and v are the U and V wind components, respectively; and label q is specific humidity.

The biases often appear most pronounced in the same locations across methods. Testing with multiple SCRFs suggests that the use of an identical stochastic component across methods is not the primary cause. The pattern could instead be due to internal variability in the climate models that is out of phase with the historical climate, or to the statistical properties of precipitation in these regions. To put these biases in the context of the uncertainty in the observations, we show the histogram of biases across the domain from En-GARD alongside the differences between two ensemble members from a statistically reliable ensemble of gridded observations (Newman et al. 2015) (Fig. 5). The standard deviation of the En-GARD biases (35 mm yr−1) is larger than the variability within the observations (23 mm yr−1). This level of bias is consistent with that observed in other statistical downscaling approaches (Gutmann et al. 2014).

Fig. 5.

(left) Histogram comparing biases in En-GARD (orange) and differences between the mean annual precipitation in two members of the Newman et al. (2015) ensemble dataset (black). (right) Histogram of fractional biases in different methods (solid, dashed, and dotted lines) and predictors (colors).

We note several consistent patterns when looking at the statistics of biases across different En-GARD methods and variable sets (Fig. 5). The methods based on the uvq variable set (green lines) have slightly larger (more negative) biases, although the PA method applied to the puv variable set is similar. The PA method (dotted lines) has consistently larger (negative) biases, and the PR method (solid lines) has the smallest biases, as do the methods based only on precipitation as a predictor.

2) Mean temperature

As with precipitation, the mean annual temperature from En-GARD, the observations, and the coarse-resolution model are shown in Fig. 3. En-GARD adds the observed spatial variability to the data, such that the downscaled mean values are visually indistinguishable from the observations at this scale. The temperature biases, when averaged across GCMs for each methodological approach, are small (<0.3 K) in most cases and exhibit consistent spatial patterns across methods (Fig. 6), though with varying amplitudes. For instance, in the PA puvt case, the bias pattern is similar to that of other methods such as PR temperature but is warmer.

Fig. 6.

Bias in mean annual temperature in En-GARD with respect to the observations averaged across global climate models for different (top),(middle),(bottom) methods and (left),(center),(right) predictors: label t is temperature; label p is precipitation; and labels u and v are the U and V wind components, respectively.

In general, the biases in mean annual temperature are small across all methods; however, one obvious difference stands out. When using the PA method with more than just temperature as input, En-GARD’s predictions are warmer than the average, and this anomaly increases when more variables are used as input. Biases are larger when using the puvt variable set than when using only uvt or temperature alone (Figs. 6, 7).

b. Probability distribution function

En-GARD reasonably represents the full PDF of the time series of daily precipitation amounts in most configurations. Figure 8 shows the PDF of daily precipitation amounts over the domain when En-GARD is applied to WRF–ERAi for the period 1980–2009 for all methods. Precipitation is well represented by all methods for intensities below roughly 30–50 mm day−1. For more extreme daily totals, the PR method underestimates the number of events that should occur, whereas the AR and PA methods both match the observed distribution well, although with slightly less frequent extreme precipitation than observed, particularly when only precipitation is used as a predictor (blue lines).
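The empirical PDF comparison of Fig. 8 can be sketched as below. This is an illustrative reimplementation, not the paper's analysis code; the 0.1 mm wet-day threshold and the log-spaced bins are assumptions for the example.

```python
import numpy as np

def precip_pdf(daily_precip, bins, wet_threshold=0.1):
    """Empirical PDF of wet-day daily precipitation amounts.

    Only days exceeding `wet_threshold` (mm; an assumed trace cutoff) are
    included. Returns bin centers and the normalized density, which can be
    plotted for downscaled output and observations on the same axes.
    """
    wet = daily_precip[daily_precip > wet_threshold]
    density, edges = np.histogram(wet, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, density

# Example: evaluate a synthetic record on log-spaced intensity bins
rng = np.random.default_rng(1)
obs = rng.exponential(scale=3.0, size=10_000)
bins = np.logspace(-1, 2, 40)
centers, obs_density = precip_pdf(obs, bins)
```

Log-spaced bins give comparable sampling of both the common light-precipitation days and the rare heavy tail.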

c. Time series variability

To examine the degree to which En-GARD adequately reproduces year-to-year variability in precipitation, we compare En-GARD predictions when applied to the WRF–ERAi data for the period 1980–2009. To quantify how well different methodological approaches represent this variability, we compute the squared correlation coefficient r2 between the predicted and observed annual precipitation. Results averaged over the Pacific Northwest (42.06°–47.69°N, 123.43°–112.19°W) are shown in Fig. 9. En-GARD is generally able to predict the wet years and dry years, but there are large differences in the correlation coefficient across methods and predictors. The AR method has the highest correlation coefficients (0.44–0.76), followed by the pure analog (0.39–0.71) and, last, the pure regression (0.35–0.69). The combination of precipitation, U, and V had the highest correlation coefficients (0.69–0.76), followed by precipitation alone (0.58–0.65) and the combination of U, V, and specific humidity (0.35–0.44). The selection of predictors has a larger impact than the selection of method. No method was able to reproduce the wettest years in the period (1980–83, 1995, 1996, and 1998); this may indicate that the reanalysis data do not represent aspects of these years well, or that extreme events, and thus the stochastic component, play a more significant role in those years. Testing with different SCRFs (Fig. S6 in the online supplemental material) showed that other random draws came closer, as did the uncorrected WRF output, but no configuration was able to reproduce these years. The inclusion of additional low-frequency predictors, for example, monthly sea surface temperatures, might provide some improvement in future work.
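The annual-total r2 skill score used above can be sketched as follows; this is an illustrative reimplementation (leap days ignored), not the analysis code used in the paper.

```python
import numpy as np

def annual_r2(pred_daily, obs_daily, days_per_year=365):
    """Squared correlation between predicted and observed annual
    precipitation totals, the year-to-year skill measure used here.

    Both inputs are 1-D daily series of equal length spanning a whole
    number of `days_per_year`-day years.
    """
    pred_annual = pred_daily.reshape(-1, days_per_year).sum(axis=1)
    obs_annual = obs_daily.reshape(-1, days_per_year).sum(axis=1)
    return np.corrcoef(pred_annual, obs_annual)[0, 1] ** 2
```

Aggregating to annual totals before correlating also averages the stochastic component toward zero, making the comparison to observations meaningful.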

d. Future changes

We analyze the differences in the climate change signals predicted by En-GARD as a function of method and predictor choices, as well as of the driving climate model. In all cases, changes are calculated by subtracting values computed over the period 1970–2000 from those computed over the period 2065–95, corresponding to a change projected 95 years in the future. We used GCM projections from the representative concentration pathway (RCP) 8.5 scenario.
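The change-signal calculation is simply a difference of period means; a minimal sketch (not the analysis code, with the period bounds taken from the text) follows.

```python
import numpy as np

def climate_change_signal(series, years, hist=(1970, 2000), fut=(2065, 2095)):
    """Future-minus-historical difference in the period mean of an annual
    series (e.g., mean annual temperature for each year).

    series: annual values; years: the calendar year of each entry.
    """
    series = np.asarray(series)
    years = np.asarray(years)
    hist_mean = series[(years >= hist[0]) & (years <= hist[1])].mean()
    fut_mean = series[(years >= fut[0]) & (years <= fut[1])].mean()
    return fut_mean - hist_mean
```

Applied per grid cell, this yields the change maps shown in Figs. 10–12.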

1) Mean changes in precipitation

Changes in precipitation are presented for all six GCMs in Fig. 10. Substantial variability exists across the GCMs, with most predicting drying in the Pacific Northwest mountains and increases in precipitation in the interior of the Intermountain West and in the Northeast. The central United States has a mixed signal across GCMs, with some tendency toward drying. Note that this is a small subset of GCMs and a single WRF configuration. As a result, the projections presented here should not be considered fully representative of the range of projected climate changes; characterizing that range is not the objective of this paper.

Changes in precipitation are presented across downscaling methods and predictors in Fig. 11 as averaged across all GCMs. The differences across methods (within a column) are small relative to differences across predictors (within a row), with the largest differences between the U, V, and q predictor set (right column) and all others. The two columns that used precipitation as one of the inputs are very similar, with the column that includes upper-level winds projecting more drying in the Pacific Northwest. In contrast, the column based purely on water vapor, U, and V predicts significant increases in precipitation across most of the domain, with decreases limited to the mountain peaks in the western United States, particularly in the northwest. The area with greatest agreement between all predictors is along the west coast, with all methods and predictors projecting drying in this region, though that varies more across GCMs.

2) Mean changes in temperature

Change in mean annual temperature averaged across GCMs is shown in Fig. 12. For changes in temperature, the algorithm used appears to be as important as the choice of predictors: the differences between the pure regression, analog–regression, and pure analog approaches are at least as large as the differences between variable subsets. In particular, the PA method predicts less of an increase in temperature when more than one predictor variable is used, and substantially lower temperatures when precipitation is used with U, V, and temperature; the PR and AR methods do not show the same effect. Note that in this example temperature is used as one of the input variables in all cases, which is fairly common in climate downscaling. When a regression is used (top and middle rows), En-GARD is less sensitive to the choice of input variables because the regression slope is controlled by the variable(s) with the most predictive power, in this case temperature. The changes in temperature are also likely to be larger when a regression model is included in En-GARD, because the regression can extrapolate outside the range of past temperatures, whereas the PA method can only reproduce historical high temperatures.

3) Extreme changes in precipitation

The changes in extreme precipitation in En-GARD are significantly smaller, in both frequency and magnitude, than in other statistically downscaled datasets or in the original WRF data. To illustrate the frequency of extreme event occurrences, we evaluate the number of grid cells in the domain that meet or exceed the observed mean annual maximum precipitation at any time in each 5-yr window (Fig. 13), with the boxplot distribution taken across the climate models. While we show this only for the AR approach applied to precipitation, U, and V, the results are consistent across all En-GARD instances tested here.
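The exceedance count behind Fig. 13 can be sketched as below. This is an illustrative reimplementation, not the analysis code: the array shapes are assumptions and leap days are ignored.

```python
import numpy as np

def extreme_cell_counts(daily, threshold_map, window_years=5, days_per_year=365):
    """Number of grid cells whose daily precipitation meets or exceeds a
    per-cell threshold (e.g., the observed mean annual maximum) at any
    time within each `window_years` window.

    daily: (n_days, ny, nx) daily precipitation.
    threshold_map: (ny, nx) per-cell exceedance threshold.
    Returns one count per complete window.
    """
    window_days = window_years * days_per_year
    n_windows = daily.shape[0] // window_days
    counts = []
    for w in range(n_windows):
        block = daily[w * window_days:(w + 1) * window_days]
        exceeded = (block >= threshold_map).any(axis=0)  # per-cell hit in window
        counts.append(int(exceeded.sum()))
    return counts
```

A downward step in these counts relative to the driving model indicates the suppression of extreme-event changes discussed below.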

The perfect prognosis approach used in En-GARD is hindered by variability in the most extreme event occurrences. Because perfect prognosis relies on a one-to-one correspondence between observed and modeled events, any chaotic variability that disrupts this relationship will decrease the ability to represent such events. For common weather systems, this variability is averaged over many events, though it may slightly decrease the magnitude of the predicted change signal. However, the most extreme events may not be deterministically predictable from large-scale fields. As a result, extreme events are represented by En-GARD primarily stochastically, and because they cannot be predicted deterministically, changes in their frequency are not currently represented. End users must be extremely careful when using extreme precipitation statistically downscaled with such methods. Conditioning the SCRFs on the climate model precipitation could mitigate this, though care would be needed to prevent the loss of the desired variability. In contrast, methods such as BCSD or LOCA assume that local precipitation scales proportionately with the large-scale precipitation. While this is not always the case in reality, this determinism permits such methods to simulate changes in extreme events if those changes are present at the climate model scale. However, other statistical artifacts are evident in this approach: LOCA imposes limits that prevent extreme events from exceeding the observed maximum in the historical period, and when this limitation is lifted in the “future” (2005) there is an artificial increase in the number of extreme events (Fig. 13).

e. Variance by method

We evaluate the variability of the predicted climate change signal as a function of climate model, downscaling algorithm, and predictor selection. Figure 14 shows the differences in the standard deviation of the projected change in mean annual precipitation. For each methodological decision (e.g., climate model selection), the climate change signal is first averaged across all other methodological choices (e.g., the downscaling algorithm and the predictor variables); the standard deviation of the change signals is then computed across the options for that decision. To show the regions in which one choice leads to greater variability than another, we plot the difference between the standard deviations. Figure 14 shows that, in this case, the selection of predictors creates more variance than the details of the method (Fig. 14, left) and more variance than the choice of GCM everywhere except the west coast, where the GCM choice creates the most variance (Fig. 14, center). The variance across GCMs is greater than the variance across downscaling algorithms everywhere (Fig. 14, right).
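The average-then-take-the-spread calculation behind Fig. 14 can be sketched as follows; this is an illustrative reimplementation (the axis ordering is an assumption), not the analysis code used in the paper.

```python
import numpy as np

def variance_by_choice(signals):
    """Standard deviation of the change signal along each methodological
    axis, after averaging over all other axes.

    signals: change signals with assumed shape
    (n_gcms, n_methods, n_predictor_sets, ny, nx).
    Returns per-axis standard-deviation maps, which can be differenced
    to show which choice contributes more variance at each grid cell.
    """
    std_gcm = signals.mean(axis=(1, 2)).std(axis=0)     # spread across GCMs
    std_method = signals.mean(axis=(0, 2)).std(axis=0)  # spread across algorithms
    std_pred = signals.mean(axis=(0, 1)).std(axis=0)    # spread across predictors
    return std_gcm, std_method, std_pred
```

For example, plotting `std_pred - std_method` reproduces the kind of comparison in the left panel of Fig. 14.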

5. Discussion

The results presented above illustrate how methodological choices in downscaling methods affect regional climate projections. In particular, it is evident that most of the methodological choices examined are capable of representing the mean precipitation and temperature with biases on the same order of magnitude as the uncertainty in the observations themselves (Figs. 3–7). The only exception to this finding appears to be the PA approach when used with multiple predictors as input. However, this difference is likely attributable to differences in climate between the training (1980–2009) and historical verification (1970–99) periods: the selected analogs are skewed slightly toward the climatological mean of the training period because the change in temperature between these periods is greater than any changes in wind or precipitation. The PA method likely performs poorly because it does not selectively weight different variables. The differences in climate between the training and historical verification periods also lead to some of the differences seen in the precipitation biases. More generally, the selection of the period of record used for statistical training is one of many additional decisions that affect downscaling methods, not all of which are reviewed here.

Fig. 7.

Histogram of temperature biases in different methods (solid, dashed, and dotted lines) and predictors (colors).

In addition, most downscaling algorithms and predictor sets can represent the temporal probability distribution of precipitation values (Fig. 8). The exception is pure regression. Because of its somewhat chaotic nature, precipitation is often poorly correlated in a direct time-step-by-time-step comparison between models and observations. This poor correlation flattens the slope of the regression relationships, and as a result the PR algorithm does not produce the largest precipitation values. This is true across all combinations of predictors.
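The slope flattening described above is the classical regression-attenuation effect: with noise in the predictor that is uncorrelated with the predictand, the fitted slope shrinks by the factor var(x)/(var(x) + var(noise)). A minimal synthetic demonstration (not En-GARD code):

```python
import numpy as np

# A perfect underlying relationship (slope 2) observed through a noisy
# predictor: the ordinary-least-squares slope is attenuated toward zero.
rng = np.random.default_rng(42)
x_true = rng.normal(size=100_000)
y = 2.0 * x_true                                    # true slope = 2
x_noisy = x_true + rng.normal(scale=1.0, size=x_true.size)

# var(x_true) = 1 and var(noise) = 1, so the expected fitted slope is
# 2 * 1 / (1 + 1) = 1.0, i.e., half the true slope.
slope = np.polyfit(x_noisy, y, 1)[0]
```

The same mechanism explains why a pure regression trained on weakly correlated daily precipitation systematically underpredicts the largest amounts.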

Fig. 8.

Probability distribution function of En-GARD with different predictors (colors) and methods (line styles) along with the observed probability distribution function (black).

The analysis of temporal performance presented in Fig. 9 provides some insight into the ability of different methods to represent variability on time scales relevant for water resources management (i.e., annual versus daily). This is also a time scale over which the stochastic component of En-GARD averages to close to zero, making direct comparisons meaningful. While we have only shown this for one region of the United States, the relative skill between methods and predictors is the same when looking at other regions (the Southwest, Southeast, and northern Midwest). In this analysis, the choice of predictor variables had the greatest effect, with the combination of precipitation, U, and V having the highest correlations with observed precipitation, and the combination of specific humidity, U, and V having the weakest correlation. The choice of downscaling algorithm was not as important, but the AR approach had the most skill. None of the methods were able to represent the wettest years in the record, and it remains to be investigated whether this is due to a deficiency in En-GARD, ERAi, or WRF. Reanalysis products such as ERAi are known to have nonstationarities as a result of changes in the input observations, and the depiction of more extreme periods may be more difficult in such a low-resolution global model. While newer reanalysis products such as ERA-5 (Hersbach et al. 2020) have addressed the issues associated with inhomogeneity in observations, the higher spatial resolution of ERA-5 may make it a poor proxy for global climate models. Future research should investigate the value of different reanalysis products for use as training data.

Fig. 9.

Annual time series of observed (thick black line) and En-GARD predicted annual precipitation totals when applied to ERAi and as averaged over the Pacific Northwest region. Coefficients of determination r2 are reported in the legend with root-mean-square error in parentheses.

Fig. 10.

Changes in mean precipitation predicted by En-GARD across six GCMs using analog–regression applied to precipitation, U, and V.

Underestimating changes in extreme events is a problem for flood characterization, and it also likely leads to a smaller increase in annual precipitation totals. The underestimate will be more problematic for regions that derive a large fraction of their annual precipitation from a few precipitation events. Underestimation of changes in extreme events has been documented in past work (Gachon and Dibike 2007), which suggested that SDSM is conservative in its change signals and probably underestimates even the change in temperature, owing to the flattening of regression slopes in the presence of random noise.

There is a larger difference between predictor sets when predicting the climate change signal, particularly for precipitation (Fig. 11). Past work has typically used a single variable (e.g., BCSD, LOCA) or, in some cases, a few variables without significant analysis of the effect this choice has on the projected change. For example, several studies have used specific humidity, U, and V and predicted increases in precipitation (Gagnon et al. 2005; Hassan et al. 2013; Dibike and Coulibaly 2005), as in the current study, while other studies have limited themselves to surface pressure fields in the belief that these are better simulated by global models (Langousis and Kaleris 2014).

Fig. 11.

Changes in mean precipitation averaged across GCMs as downscaled by En-GARD using (top) analog–regression, (middle) pure regression, and (bottom) pure analog applied to (left) precipitation; (center) precipitation, U, and V; and (right) water vapor, U, and V.

Fig. 12.

Changes in mean annual temperature averaged across GCMs as predicted by En-GARD using (top) analog–regression, (middle) pure regression, and (bottom) pure analog as applied to (left) temperature; (center) temperature, U, and V; and (right) precipitation, U, V, and temperature.

Fig. 13.

Time series of the number of grid cells with an extreme precipitation event in each 5-yr period beginning on the year shown, shown for (top) En-GARD analog–regression applied to precipitation, U, and V from all climate models and (bottom) from the LOCA statistical downscaling method for a larger set of climate models.

Fig. 14.

The difference in the standard deviations of the precipitation change: (left) difference between predictor variable and method, (center) difference between predictor variable and GCM, and (right) difference between method choice and GCM. Purple colors indicate that the first choice is associated with more variance; orange indicates that the second choice is.

More generally, we have shown that the factor that leads to the most variance in the climate change signal is the choice of predictors, followed in some places by the choice of climate model (within the limited set used here). More work is needed to determine how consistent this result is across a broader selection of climate models, and across climate models without the intermediate WRF simulation. While one might be tempted to discard the specific humidity, U, and V predictor set, it is often considered a better variable set because it does not rely on the GCM precipitation, which is not considered to be well simulated. A better set of predictors might also include atmospheric stability and temperature or relative humidity, but this has yet to be investigated. It is interesting that the specific humidity, U, and V projection is consistent with convection-permitting pseudo-global-warming (PGW) WRF simulations (Liu et al. 2017), which project increases in precipitation throughout the domain. In contrast, though, the skill with which the analog–regression method applied to precipitation, U, and V represents historical year-to-year variability suggests that it may be more reliable.

Future applications and evaluations of statistical downscaling methods should evaluate the use of different predictor variables instead of focusing only on different algorithmic decisions; however, extreme caution is needed when interpreting changes in extremes if those extremes are not predictable in the historical period. There are many additional approaches to downscaling that also deserve evaluation. En-GARD does not emulate spatially based approaches such as LOCA (Pierce et al. 2014) or MACA (Abatzoglou and Brown 2012), nor does it permit multivariate bias correction as in Cannon (2018). Although the correlation between the temperature and precipitation stochastic terms can provide some of the multivariate behavior that matters for many applications (e.g., fire weather, hydrology, and agriculture), more could be done to explicitly estimate those relationships and thus better predict their future changes. With the use of spatially correlated random fields, En-GARD can be considered a form of conditioned weather generator, but it does not function as a pure weather generator.

6. Conclusions

En-GARD provides a unified framework to compare different statistical downscaling products. As a result, choices in downscaling methods can be easily tested. This simplifies the process of generating the large ensembles of downscaled climate projections needed to quantify the uncertainty in future projections that stems from downscaling choices. Unlike SDSM, En-GARD has been developed for application to large gridded datasets on high-performance computing systems, with batch-processing capabilities that make it a useful tool for regional and larger-domain applications. It is open source and can be compiled with freely available compilers to run on low-cost Linux computers while taking advantage of multiple cores.

In this example, the variability between different GCMs is greater than between downscaling methods, particularly for temperature. For precipitation, the choice of predictor variables generally has the largest effect on the climate change signal; hence, carefully selecting these predictors is important. All GCMs, downscaling algorithms, and predictors yielded a realistic mean climate state in the historical period. This adds to the growing realization that evaluating historical mean states is not adequate to select the best method for future projections because such “history matching” typically provides little information about the robustness of the method for inferring very different projected changes.

The selection of predictor variables is shown to be more important than other downscaling methodological decisions. Any selection of input variables can reproduce current climate statistics. However, including water vapor produces a larger climate change in precipitation because historical variability in water vapor is mapped onto future changes, which are themselves driven by the Clausius–Clapeyron relationship and changes in temperature.

A weak relationship between observed and modeled extreme events prevents precipitation extremes created here from changing significantly in the future, despite a reasonable representation in the historical period and a consistent pattern of increases in the forcing model precipitation. This effect likely applies to numerous other downscaling methods that follow a similar approach (Wilby et al. 2002; Langousis and Kaleris 2014). While bias-correction approaches (quantile mapping and delta methods) are able to change extremes in ways consistent with historical records, the common implementation of most such approaches is to rely almost entirely on precipitation from the driving coarse-scale model.

Connecting atmospheric states to the downscaling method enables a more process-oriented approach. We suggest that integrating upper-level winds from the atmospheric model into the downscaling process allows the projected precipitation change to be connected realistically to, e.g., topographic barriers. We have shown that the use of upper-level winds improves the prediction of year-to-year variability when compared with the observed precipitation. However, the methods used here may underestimate changes in extreme and mean precipitation amounts.

Acknowledgments.

This research has been supported by a cooperative agreement with the U.S. Bureau of Reclamation (USBR), and a contract with the U.S. Army Corps of Engineers (USACE), with technical development supported by a grant (80NSSC17K0541) from the NASA AIST program. The National Center for Atmospheric Research is sponsored by the National Science Foundation (NSF AGS-0753581).

Data availability statement.

The source code for En-GARD is available online (http://github.com/NCAR/gard); version 1.0 was used for this work. The Newman et al. (2015) ensemble was provided by the National Center for Atmospheric Research (http://dx.doi.org/10.5065/D6TH8JR2). Climate model data were retrieved from the Centre for Environmental Data Analysis (CEDA) archive. The Maurer et al. (2002) dataset is available from Santa Clara University (https://www.engr.scu.edu/∼emaurer/gridded_obs/index_gridded_obs.html). The ERA-Interim reanalysis data are available from ECMWF (https://apps.ecmwf.int/datasets/data/interim-full-daily/). En-GARD depends on the open source LAPACK (Anderson et al. 1999) and NetCDF (Rew and Davis 1990) libraries. Our analysis was performed using the Python programming language and made use of the following open-source libraries: Matplotlib (Hunter 2007), Xarray (Hoyer and Hamman 2017), Cartopy (Met Office 2010), Pandas (McKinney 2010), and NumPy (Harris et al. 2020).

REFERENCES

  • Abatzoglou, J. T., and T. J. Brown, 2012: A comparison of statistical downscaling methods suited for wildfire applications. Int. J. Climatol., 32, 772780, https://doi.org/10.1002/joc.2312.

    • Search Google Scholar
    • Export Citation
  • Altman, N. S., 1992: An introduction to kernel and nearest-neighbor nonparametric regression. Amer. Stat., 46, 175185, https://doi.org/10.2307/2685209.

    • Search Google Scholar
    • Export Citation
  • Anderson, E., and Coauthors, 1999: LAPACK Users’ Guide. SIAM, 404 pp., https://doi.org/10.1137/1.9780898719604.

  • Benestad, R. E., 2004: Empirical-statistical downscaling in climate modeling. Eos, Trans. Amer. Geophys. Union, 85, 417422, https://doi.org/10.1029/2004EO420002.

    • Search Google Scholar
    • Export Citation
  • Breiman, L., J. Friedman, C. J. Stone, and R. A. Olshen, 1984: Classification and Regression Trees. Chapman and Hall/CRC, 368 pp.

  • Bruyère, C. L., J. M. Done, G. J. Holland, and S. Fredrick, 2013: Bias corrections of global models for regional climate simulations of high-impact weather. Climate Dyn., 43, 1847–1856, https://doi.org/10.1007/s00382-013-2011-6.

  • Cannon, A. J., 2018: Multivariate quantile mapping bias correction: An N-dimensional probability density function transform for climate model simulations of multiple variables. Climate Dyn., 50, 31–49, https://doi.org/10.1007/s00382-017-3580-6.

  • Chen, F., and J. Dudhia, 2001: Coupling an advanced land surface–hydrology model with the Penn State–NCAR MM5 modeling system. Part I: Model implementation and sensitivity. Mon. Wea. Rev., 129, 569–585, https://doi.org/10.1175/1520-0493(2001)129<0569:CAALSH>2.0.CO;2.

  • Clark, M. P., and L. E. Hay, 2004: Use of medium-range numerical weather prediction model output to produce forecasts of streamflow. J. Hydrometeor., 5, 15–32, https://doi.org/10.1175/1525-7541(2004)005<0015:UOMNWP>2.0.CO;2.

  • Clark, M. P., and A. G. Slater, 2006: Probabilistic quantitative precipitation estimation in complex terrain. J. Hydrometeor., 7, 3–22, https://doi.org/10.1175/JHM474.1.

  • Cleveland, W. S., 1979: Robust locally weighted regression and smoothing scatterplots. J. Amer. Stat. Assoc., 74, 829–836, https://doi.org/10.1080/01621459.1979.10481038.

  • Crawford, T., N. Betts, and D. Favis-Mortlock, 2007: GCM grid-box choice and predictor selection associated with statistical downscaling of daily precipitation over Northern Ireland. Climate Res., 34, 145–160, https://doi.org/10.3354/cr034145.

  • Dee, D. P., and Coauthors, 2011: The ERA-Interim reanalysis: Configuration and performance of the data assimilation system. Quart. J. Roy. Meteor. Soc., 137, 553–597, https://doi.org/10.1002/qj.828.

  • Dibike, Y. B., and P. Coulibaly, 2005: Hydrologic impact of climate change in the Saguenay watershed: Comparison of downscaling methods and hydrologic models. J. Hydrol., 307, 145–163, https://doi.org/10.1016/j.jhydrol.2004.10.012.

  • Dixon, K. W., J. R. Lanzante, M. J. Nath, K. Hayhoe, A. Stoner, A. Radhakrishnan, V. Balaji, and C. F. Gaitán, 2016: Evaluating the stationarity assumption in statistically downscaled climate projections: Is past performance an indicator of future results? Climatic Change, 135, 395–408, https://doi.org/10.1007/s10584-016-1598-0.
  • Ehret, U., E. Zehe, V. Wulfmeyer, K. Warrach-Sagi, and J. Liebert, 2012: HESS Opinions “Should we apply bias correction to global and regional climate model data?” Hydrol. Earth Syst. Sci., 16, 3391–3404, https://doi.org/10.5194/hess-16-3391-2012.

  • Gachon, P., and Y. Dibike, 2007: Temperature change signals in northern Canada: Convergence of statistical downscaling results using two driving GCMs. Int. J. Climatol., 27, 1623–1641, https://doi.org/10.1002/joc.1582.

  • Gagnon, S., B. Singh, J. Rousselle, and L. Roy, 2005: An application of the statistical downscaling model (SDSM) to simulate climatic data for streamflow modelling in Québec. Can. Water Resour. J., 30, 297–314, https://doi.org/10.4296/cwrj3004297.

  • Gangopadhyay, S., M. Clark, and B. Rajagopalan, 2005: Statistical downscaling using K-nearest neighbors. Water Resour. Res., 41, W02024, https://doi.org/10.1029/2004WR003444.

  • Giorgi, F., and W. J. J. Gutowski, 2015: Regional dynamical downscaling and the CORDEX initiative. Annu. Rev. Environ. Resour., 40, 467–490, https://doi.org/10.1146/annurev-environ-102014-021217.

  • Glahn, H. R., and D. A. Lowry, 1972: The use of model output statistics (MOS) in objective weather forecasting. J. Appl. Meteor., 11, 1203–1211, https://doi.org/10.1175/1520-0450(1972)011<1203:TUOMOS>2.0.CO;2.

  • Gleick, P. H., 1986: Methods for evaluating the regional hydrologic impacts of global climatic changes. J. Hydrol., 88, 97–116, https://doi.org/10.1016/0022-1694(86)90199-X.

  • González-Rojí, S. J., R. L. Wilby, J. Sáenz, and G. Ibarra-Berastegi, 2019: Harmonized evaluation of daily precipitation downscaled using SDSM and WRF+WRFDA models over the Iberian Peninsula. Climate Dyn., 53, 1413–1433, https://doi.org/10.1007/s00382-019-04673-9.

  • Grell, G. A., and S. R. Freitas, 2014: A scale and aerosol aware stochastic convective parameterization for weather and air quality modeling. Atmos. Chem. Phys., 14, 5233–5250, https://doi.org/10.5194/acp-14-5233-2014.

  • Guo, Q., J. Chen, X. Zhang, M. Shen, H. Chen, and S. Guo, 2019: A new two-stage multivariate quantile mapping method for bias correcting climate model outputs. Climate Dyn., 53, 3603–3623, https://doi.org/10.1007/s00382-019-04729-w.
  • Gutmann, E. D., R. M. Rasmussen, C. Liu, K. Ikeda, D. J. Gochis, M. P. Clark, J. Dudhia, and G. Thompson, 2012: A comparison of statistical and dynamical downscaling of winter precipitation over complex terrain. J. Climate, 25, 262–281, https://doi.org/10.1175/2011JCLI4109.1.

  • Gutmann, E., T. Pruitt, M. P. Clark, L. Brekke, J. R. Arnold, D. A. Raff, and R. M. Rasmussen, 2014: An intercomparison of statistical downscaling methods used for water resource assessments in the United States. Water Resour. Res., 50, 7167–7186, https://doi.org/10.1002/2014WR015559.

  • Gutmann, E., I. Barstad, M. Clark, J. Arnold, and R. Rasmussen, 2016: The intermediate complexity atmospheric research model (ICAR). J. Hydrometeor., 17, 957–973, https://doi.org/10.1175/JHM-D-15-0155.1.

  • Haarsma, R. J., and Coauthors, 2016: High resolution model intercomparison project (HighResMIP v1.0) for CMIP6. Geosci. Model Dev., 9, 4185–4208, https://doi.org/10.5194/gmd-9-4185-2016.

  • Hall, A., 2014: Projecting regional change. Science, 346, 1461–1462, https://doi.org/10.1126/science.aaa0629.

  • Harris, C. R., and Coauthors, 2020: Array programming with NumPy. Nature, 585, 357–362, https://doi.org/10.1038/s41586-020-2649-2.

  • Hassan, Z., S. Shamsudin, and S. Harun, 2013: Application of SDSM and LARS-WG for simulating and downscaling of rainfall and temperature. Theor. Appl. Climatol., 116, 243–257, https://doi.org/10.1007/s00704-013-0951-8.

  • Hay, L. E., and M. P. Clark, 2003: Use of statistically and dynamically downscaled atmospheric model output for hydrologic simulations in three mountainous basins in the western United States. J. Hydrol., 282, 56–75, https://doi.org/10.1016/S0022-1694(03)00252-X.

  • Hay, L. E., R. L. Wilby, and G. H. Leavesley, 2000: A comparison of delta change and downscaled GCM scenarios for three mountainous basins in the United States. J. Amer. Water Resour. Assoc., 36, 387–397, https://doi.org/10.1111/j.1752-1688.2000.tb04276.x.

  • Hersbach, H., and Coauthors, 2020: The ERA5 global reanalysis. Quart. J. Roy. Meteor. Soc., 146, 1999–2049, https://doi.org/10.1002/qj.3803.

  • Hidalgo, H. G., M. D. Dettinger, and D. R. Cayan, 2008: Downscaling with constructed analogues: Daily precipitation and temperature fields over the United States. California Energy Commission PIER Final Project Rep. CEC-500-2007-123, 48 pp.

  • Hong, S.-Y., and J.-O. J. Lim, 2006: The WRF single-moment 6-class microphysics scheme (WSM6). J. Korean Meteor. Soc., 42, 129–151.
  • Hong, S.-Y., Y. Noh, and J. Dudhia, 2006: A new vertical diffusion package with an explicit treatment of entrainment processes. Mon. Wea. Rev., 134, 2318–2341, https://doi.org/10.1175/MWR3199.1.

  • Hoyer, S., and J. Hamman, 2017: xarray: N-D labeled arrays and datasets in Python. J. Open Res. Software, 5, 10, https://doi.org/10.5334/jors.148.

  • Hunter, J. D., 2007: Matplotlib: A 2D graphics environment. Comput. Sci. Eng., 9, 90–95, https://doi.org/10.1109/MCSE.2007.55.

  • Iacono, M. J., J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, 2008: Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models. J. Geophys. Res., 113, D13103, https://doi.org/10.1029/2008JD009944.

  • Kerr, R. A., 2011: Vital details of global warming are eluding forecasters. Science, 334, 173–174, https://doi.org/10.1126/science.334.6053.173.

  • Langousis, A., and V. Kaleris, 2014: Statistical framework to simulate daily rainfall series conditional on upper-air predictor variables. Water Resour. Res., 50, 3907–3932, https://doi.org/10.1002/2013WR014936.

  • Lanzante, J. R., K. W. Dixon, M. J. Nath, C. E. Whitlock, and D. Adams-Smith, 2018: Some pitfalls in statistical downscaling of future climate. Bull. Amer. Meteor. Soc., 99, 791–803, https://doi.org/10.1175/BAMS-D-17-0046.1.

  • Letcher, T. W., and J. R. Minder, 2015: Characterization of the simulated regional snow albedo feedback using a regional climate model over complex terrain. J. Climate, 28, 7576–7595, https://doi.org/10.1175/JCLI-D-15-0166.1.

  • Liu, C., and Coauthors, 2017: Continental-scale convection-permitting modeling of the current and future climate of North America. Climate Dyn., 49, 71–95, https://doi.org/10.1007/s00382-016-3327-9.

  • Livneh, B., E. A. Rosenberg, C. Lin, B. Nijssen, V. Mishra, K. M. Andreadis, E. P. Maurer, and D. P. Lettenmaier, 2013: A long-term hydrologically based dataset of land surface fluxes and states for the conterminous United States: Update and extensions. J. Climate, 26, 9384–9392, https://doi.org/10.1175/JCLI-D-12-00508.1.

  • Maraun, D., and Coauthors, 2010: Precipitation downscaling under climate change: Recent developments to bridge the gap between dynamical models and the end user. Rev. Geophys., 48, RG3003, https://doi.org/10.1029/2009RG000314.

  • Maraun, D., and Coauthors, 2017: Towards process-informed bias correction of climate change simulations. Nat. Climate Change, 7, 764–773, https://doi.org/10.1038/nclimate3418.
  • Maurer, E. P., A. W. Wood, J. C. Adam, D. P. Lettenmaier, and B. Nijssen, 2002: A long-term hydrologically based dataset of land surface fluxes and states for the conterminous United States. J. Climate, 15, 3237–3251, https://doi.org/10.1175/1520-0442(2002)015<3237:ALTHBD>2.0.CO;2.

  • McKinney, W., 2010: Data structures for statistical computing in Python. Proc. Ninth Python in Science Conf., Austin, TX, SciPy, 56–61, https://doi.org/10.25080/Majora-92bf1922-00a.

  • Mearns, L. O., I. Bogardi, F. Giorgi, I. Matyasovszky, and M. Palecki, 1999: Comparison of climate change scenarios generated from regional climate model experiments and statistical downscaling. J. Geophys. Res., 104, 6603–6621, https://doi.org/10.1029/1998JD200042.

  • Mearns, L. O., and Coauthors, 2013: Climate change projections of the North American Regional Climate Change Assessment Program (NARCCAP). Climatic Change, 120, 965–975, https://doi.org/10.1007/s10584-013-0831-3.

  • Mehrotra, R., and A. Sharma, 2016: A multivariate quantile-matching bias correction approach with auto- and cross-dependence across multiple time scales: Implications for downscaling. J. Climate, 29, 3519–3539, https://doi.org/10.1175/JCLI-D-15-0356.1.

  • Mendoza, P. A., A. W. Wood, E. Clark, E. Rothwell, M. P. Clark, B. Nijssen, L. D. Brekke, and J. R. Arnold, 2017: An intercomparison of approaches for improving operational seasonal streamflow forecasts. Hydrol. Earth Syst. Sci., 21, 3915–3935, https://doi.org/10.5194/hess-21-3915-2017.

  • Met Office, 2010: Cartopy: A cartographic Python library with a matplotlib interface. Accessed 14 July 2021, http://scitools.org.uk/cartopy.

  • Mizukami, N., and Coauthors, 2016: Implications of the methodological choices for hydrologic portrayals of climate change over the contiguous United States: Statistically downscaled forcing data and hydrologic models. J. Hydrometeor., 17, 73–98, https://doi.org/10.1175/JHM-D-14-0187.1.

  • Newman, A. J., and Coauthors, 2015: Gridded ensemble precipitation and temperature estimates for the contiguous United States. J. Hydrometeor., 16, 2481–2500, https://doi.org/10.1175/JHM-D-15-0026.1.

  • Panofsky, H., and G. Brier, 1968: Some Applications of Statistics to Meteorology. The Pennsylvania State University, 224 pp.

  • Pierce, D. W., D. R. Cayan, and B. L. Thrasher, 2014: Statistical downscaling using localized constructed analogs (LOCA). J. Hydrometeor., 15, 2558–2585, https://doi.org/10.1175/JHM-D-14-0082.1.

  • Rasmussen, R., and Coauthors, 2014: Climate change impacts on the water balance of the Colorado headwaters: High-resolution regional climate model simulations. J. Hydrometeor., 15, 1091–1116, https://doi.org/10.1175/JHM-D-13-0118.1.

  • Rew, R., and G. Davis, 1990: NetCDF: An interface for scientific data access. IEEE Comput. Graph., 10, 76–82, https://doi.org/10.1109/38.56302.

  • Rummukainen, M., 1997: Methods for statistical downscaling of GCM simulations. SMHI Rep. RMK 80, 44 pp., https://www.smhi.se/polopoly_fs/1.124322!/RMK_80.pdf.
  • Schiermeier, Q., 2010: The real holes in climate science. Nature, 463, 284–287, https://doi.org/10.1038/463284a.

  • Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp., https://doi.org/10.5065/D68S4MVH.

  • Stoner, A. M. K., K. Hayhoe, X. Yang, and D. J. Wuebbles, 2013: An asynchronous regional regression model for statistical downscaling of daily climate variables. Int. J. Climatol., 33, 2473–2494, https://doi.org/10.1002/joc.3603.

  • Takayabu, I., H. Kanamaru, K. Dairaku, R. Benestad, H. Von Storch, and J. H. Christensen, 2016: Reconsidering the quality and utility of downscaling. J. Meteor. Soc. Japan, 94A, 31–45, https://doi.org/10.2151/jmsj.2015-042.

  • Tang, G., M. P. Clark, S. M. Papalexiou, A. J. Newman, A. W. Wood, D. Brunet, and P. H. Whitfield, 2021: EMDNA: An ensemble meteorological dataset for North America. Earth Syst. Sci. Data, 13, 3337–3362, https://doi.org/10.5194/essd-13-3337-2021.

  • Teutschbein, C., and J. Seibert, 2012: Bias correction of regional climate model simulations for hydrological climate-change impact studies: Review and evaluation of different methods. J. Hydrol., 456–457, 12–29, https://doi.org/10.1016/j.jhydrol.2012.05.052.

  • Teutschbein, C., and J. Seibert, 2013: Is bias correction of regional climate model (RCM) simulations possible for non-stationary conditions? Hydrol. Earth Syst. Sci., 17, 5061–5077, https://doi.org/10.5194/hess-17-5061-2013.

  • Thornton, P. E., S. W. Running, and M. A. White, 1997: Generating surfaces of daily meteorological variables over large regions of complex terrain. J. Hydrol., 190, 214–251, https://doi.org/10.1016/S0022-1694(96)03128-9.

  • Timm, O. E., T. W. Giambelluca, and H. F. Diaz, 2015: Statistical downscaling of rainfall changes in Hawai’i based on the CMIP5 global model projections. J. Geophys. Res. Atmos., 120, 92–112, https://doi.org/10.1002/2014JD022059.

  • Vrac, M., 2018: Multivariate bias adjustment of high-dimensional climate simulations: The Rank Resampling for Distributions and Dependences (R2D2) bias correction. Hydrol. Earth Syst. Sci., 22, 3175–3196, https://doi.org/10.5194/hess-22-3175-2018.

  • Walton, D. B., F. Sun, A. Hall, and S. Capps, 2015: A hybrid dynamical-statistical downscaling technique. Part I: Development and validation of the technique. J. Climate, 28, 4597–4617, https://doi.org/10.1175/JCLI-D-14-00196.1.

  • Wilby, R. L., 1998: Statistical downscaling of daily precipitation using daily airflow and seasonal teleconnection indices. Climate Res., 10, 163–178, https://doi.org/10.3354/cr010163.

  • Wilby, R. L., and T. Wigley, 1997: Downscaling general circulation model output: A review of methods and limitations. Prog. Phys. Geogr., 21, 530–548, https://doi.org/10.1177/030913339702100403.

  • Wilby, R. L., C. W. Dawson, and E. M. Barrow, 2002: SDSM – A decision support tool for the assessment of regional climate change impacts. Environ. Modell. Software, 17, 145–159, https://doi.org/10.1016/S1364-8152(01)00060-3.

  • Wood, A. W., E. P. Maurer, A. Kumar, and D. P. Lettenmaier, 2002: Long-range experimental hydrologic forecasting for the eastern United States. J. Geophys. Res., 107, 4429, https://doi.org/10.1029/2001JD000659.

  • Wood, A. W., L. R. Leung, V. Sridhar, and D. P. Lettenmaier, 2004: Hydrologic implications of dynamical and statistical approaches to downscaling climate model outputs. Climatic Change, 62, 189–216, https://doi.org/10.1023/B:CLIM.0000013685.99609.9e.

  • Zamora, R. A., B. F. Zaitchik, M. Rodell, A. Getirana, S. Kumar, K. Arsenault, and E. Gutmann, 2021: Contribution of meteorological downscaling to skill and precision of seasonal drought forecasts. J. Hydrometeor., 22, 2009–2031, https://doi.org/10.1175/JHM-D-20-0259.1.
