Reducing Model Structural Uncertainty in Climate Model Projections—A Rank-Based Model Combination Approach

R. Das Bhowmik, Department of Civil, Construction, and Environmental Engineering, North Carolina State University, Raleigh, North Carolina

A. Sharma, School of Civil and Environmental Engineering, University of New South Wales, Sydney, New South Wales, Australia

A. Sankarasubramanian, Department of Civil, Construction, and Environmental Engineering, North Carolina State University, Raleigh, North Carolina

Abstract

Future changes in monthly precipitation are typically evaluated by estimating the shift in the long-term mean/variability or based on the change in the marginal distribution. General circulation model (GCM) precipitation projections deviate across various models and emission scenarios and hence provide no consensus on the expected future change. The current study proposes a rank/percentile-based multimodel combination approach to account for the fact that alternate model projections do not share a common time indexing. The approach is evaluated using 10 GCM historical runs for the current period and is validated by comparing with two approaches: equal weighting and a non-percentile-based optimal weighting. The percentile-based optimal combination exhibits lower values of RMSE in estimating precipitation terciles. Future (2000–49) multimodel projections show that January and July precipitation exhibit an increase in simulated monthly extremes (25th and 75th percentiles) over many climate regions of the conterminous United States.

© 2017 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Rajarshi Das Bhowmik, rdasbho@ncsu.edu


1. Introduction

Future projections of the global climate are simulated by general circulation models (GCMs) given representative concentration pathways (RCPs) of carbon emissions (Taylor et al. 2012). Previous studies conclude that global mean surface air temperature has increased over the last century and that notable changes in precipitation, including in magnitude, trend, and the frequency of extreme events, have been observed at regional and continental scales (Plummer et al. 1999; Alexander et al. 2006; Qin et al. 2010; IPCC 2013). Alternate GCMs may exhibit similar trends over long periods (e.g., decades), but different initializations, model structures, and parameterizations lead to significant differences in monthly simulated values from multiple models, or even from multiple ensemble members of the same model (Johnson et al. 2011; Das Bhowmik et al. 2017). For example, Das Bhowmik et al. (2017) showed that all three GCMs they examined (IPSL, CNRM, and MPI) performed poorly in estimating the cross correlation between monthly precipitation and monthly average surface temperature, although IPSL performed slightly better than the other two. Future runs of different GCMs likewise project differing magnitudes of climate change relative to the twentieth-century climate.

Any structural bias present in current-day simulations may differ from the bias present in future projections (Nahar et al. 2017), yet model combination studies typically assume stationarity in the structural form of that bias. Hawkins and Sutton (2009) showed that model uncertainty is the dominant source of near-term uncertainty. It is common to expect that uncertainty in weather and climate forecasts is reduced by a multimodel combination that evaluates each individual model's performance against observed records (Krishnamurti et al. 1999; Mason and Mimmack 2002; Rajagopalan et al. 2002). However, GCM historical runs and future projections under different RCPs do not exhibit temporal correspondence with observed records or with each other. Given this temporal mismatch, it is difficult to combine projections to reduce the uncertainty across models using the conventional combination algorithms adopted for developing multimodel weather and climate forecasts (Robertson et al. 2004; Chowdhury and Sharma 2009; Devineni and Sankarasubramanian 2010a,b). This study proposes an algorithm for combining multiple GCMs to develop climate change projections of monthly precipitation with reduced uncertainty. We propose a common rank indexing to relate all model runs, thereby allowing for a detailed characterization of the probability distribution of the variables of interest and, in particular, of changes to extreme rainfall in future warmer climates. Our proposed multimodel combination approach shares some similarities with GCM downscaling algorithms (Giorgi et al. 2001; Hay and Clark 2003; Leung et al. 2003; Wood et al. 2004; Wilby et al. 2002; Gangopadhyay et al. 2004; Fowler et al. 2007; Maurer and Hidalgo 2008) in that both try to reduce bias and uncertainty at certain spatial scales, but it is distinct in that we focus on finding the most reliable combination of multiple GCMs. This approach has the advantage of leveraging predictions from multiple GCM realizations when developing a composite projection. A detailed discussion of various statistical downscaling approaches can be found in Das Bhowmik et al. (2017).

The simplest approach to reducing uncertainty in climate projections from multiple models is equal weighting, which assigns each GCM a weight equal to the inverse of the total number of models (Hagedorn et al. 2005). Using a synthetic setup, Weigel et al. (2010) showed that optimal weighting performs better than equal weighting, irrespective of an increase or decrease in the joint error fraction (which can be considered the amount of correlated error), provided the weights are properly estimated. Multimodel combination can be performed using numerous approaches, such as simple regression (Krishnamurti et al. 1999), optimal weights based on long-term performance (Rajagopalan et al. 2002), statistical estimation of weights conditioned on the dominant prediction conditions (Devineni and Sankarasubramanian 2010a,b; Li and Sankarasubramanian 2012), dynamic pairwise weighting based on logistic regression (Chowdhury and Sharma 2009), and a Lagrangian approach to incorporate spatial covariance (Khan et al. 2014), to name a few. Multimodel combination approaches based on Bayesian statistics, such as objective Bayesian analysis, have also been applied for model weighting (DelSole 2007). Vrugt and Robinson (2007) compared Bayesian model averaging with the ensemble Kalman filter in the context of probabilistic streamflow forecasting. Models can also be weighted based on their prior performance (Rajagopalan et al. 2002) to incorporate the temporal variation of component model skill. A recent study (Broderick et al. 2016) on the transferability of hydrologic models between contrasting regimes reported that Bayesian model averaging (BMA) and Granger–Ramanathan averaging (GRA) outperformed the simple arithmetic mean (SAM) and Akaike information criteria averaging (AICA). In dynamic model combination (Chowdhury and Sharma 2009; Devineni and Sankarasubramanian 2010a,b), model weights are allowed to change with respect to time. The dynamic combination approach has shown superiority over a static model combination in predicting SST and long-range Niño-3.4 (Chowdhury and Sharma 2009, 2011). Wasko et al. (2013) improved the spatial prediction of rainfall using a copula-based combination approach.

The current study aims to extend and modify the optimal model combination approach that has been applied in the context of weather and climate predictions and streamflow forecasting, and to apply it to climate projections. GCMs and regional climate models (RCMs) are the only reliable sources for evaluating long-term changes in extreme events. Earlier studies (van Pelt et al. 2012; Beniston et al. 2007) considered an uncertainty envelope or equal weighting for the high-frequency behaviors of climate models in projecting extremes. A relatively recent study (Sanderson et al. 2015) proposed a method for combining model results into a single or multivariate distribution to account for the large degree of uncertainty across model precipitation amounts. One objective of the current study is to reduce uncertainty in monthly climate change projections by developing a performance-based optimal weighting approach that combines asynchronous observations and model projections. The intent is to analyze the distributions of monthly multimodel projections to identify shifts in the extremes of the monthly climate over the conterminous United States (CONUS). We propose that the combination weights across multiple runs vary by percentile, based on each run's ability to reproduce the observed precipitation at that percentile. In other words, models can be assigned different weights when ascertaining the higher percentiles of a variable of interest than when ascertaining the lower percentiles. The percentile-based optimal weighting is validated by comparison with equal weighting and non-percentile-based optimal weighting and is then applied to future projections. Shifts in the climate extremes for the future period relative to the historical period are calculated by considering the 25th and 75th percentile values (or values below and above the 25th and 75th percentiles) of monthly precipitation. We perform all assessments based on rigorous validation, leading to conclusions that are indicative of how each combination can be expected to perform in the future.

2. Dataset

Monthly precipitation data are obtained from 20 ensemble members of 10 GCMs’ historical runs. Details of the GCMs are given in Table 1. Historical runs are cropped over the CONUS and regridded to 1° × 1° grid cells. Twentieth-century historical runs are used for development and validation of the model combination approach, and emission scenario RCP8.5 is used for the future projections corresponding to the period 2000–49. The simulated dataset is recentered and rescaled [Eq. (1) in section 3a] based on the standardization procedure described in Mehrotra and Sharma (2012). The GCM historical runs are part of phase 5 of the Coupled Model Intercomparison Project (CMIP5).

Table 1. Description of GCMs. (Expansions of acronyms are available online at http://www.ametsoc.org/PubsAcronymList.)

The observed precipitation values at 1° × 1° grid cells are obtained from the United States Bureau of Reclamation (BOR) database (Maurer et al. 2002). We consider the dataset for the period 1950–99 for model development and validation.

3. Methodology

Monthly precipitation values from the 20 GCM ensemble members and the corresponding observed precipitation values for the period 1950–99 at each grid cell are first converted to the ranked space. The ranked dataset carries no time dependence but allows a comparison conditioned on the pth percentile: for a given grid point, month, and model, we sort the precipitation values over the T years of record, and the process is repeated for all 20 ensemble members. As the aim of our study is to combine projections across multiple models (GCMs) to formulate a single probabilistic representation of the variable of interest, the approach adopted consists of two key steps. In the first step, a sample representing model deviations from the observed for a given percentile is constructed across all models of interest. In the second step, a combination algorithm that takes into account the joint dependence exhibited in that sample across all models is used to develop the multimodel projection. The result is an optimally weighted combination that provides a unique multimodel projection for that percentile.
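As an illustration of the rank-space conversion described above, the following minimal sketch (in Python, with illustrative names and synthetic data rather than the study's dataset) sorts each ensemble member's monthly values for one grid cell and month and places them on a common percentile grid so that members can be compared without any assumed time correspondence.

```python
import numpy as np

def to_rank_space(precip, n_percentiles=101):
    """Convert a (member, year) array of monthly precipitation for one grid
    cell and one calendar month into percentile space: each member's years
    are sorted and interpolated onto a common 0-100 percentile grid."""
    percentiles = np.linspace(0, 100, n_percentiles)
    sorted_vals = np.sort(precip, axis=1)                  # sort each member's years
    src_pct = np.linspace(0, 100, precip.shape[1])         # empirical percentiles of the sorted sample
    return np.vstack([np.interp(percentiles, src_pct, row) for row in sorted_vals])

# example: 20 ensemble members, 50 years of January precipitation (synthetic, mm/month)
rng = np.random.default_rng(0)
ranked = to_rank_space(rng.gamma(2.0, 40.0, size=(20, 50)))
print(ranked.shape)  # (20, 101): one row per member, one column per percentile
```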

To construct the sample representing each percentile, a neighbor matrix containing the model deviations for the k neighbors of the pth percentile is constructed. Model deviations vary with the choice of k neighbors, so we refer to the model deviations conditioned on the choice of k as the “sample.” Model weights are then calculated by minimizing the expected forecast error variance, with a constraint that forces the model weights to add up to one to ensure unbiased combined projections (Timmermann 2006). A weighted average of the model runs/projections for that percentile then forms the combined modeled output.
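For reference, when no sign restriction is placed on the weights, this constrained minimum-variance problem has a standard closed-form solution from the forecast combination literature cited above; whether the present scheme imposes additional constraints (e.g., nonnegativity) is not restated here, so the expression below is offered only as a sketch of the underlying optimization, with notation matching the reconstructed equations in section 3a:

$$
\min_{\mathbf{w}} \; \mathbf{w}^{\top}\boldsymbol{\Sigma}\,\mathbf{w}
\quad \text{subject to} \quad \mathbf{1}^{\top}\mathbf{w} = 1
\qquad \Longrightarrow \qquad
\mathbf{w}^{*} = \frac{\boldsymbol{\Sigma}^{-1}\mathbf{1}}{\mathbf{1}^{\top}\boldsymbol{\Sigma}^{-1}\mathbf{1}},
$$

where $\boldsymbol{\Sigma}$ is the error variance–covariance matrix across the M model runs and $\mathbf{1}$ is a vector of ones. If nonnegative weights are required, the problem must instead be solved numerically as a quadratic program.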

In the results that follow, the percentile-based optimal weighting scheme is compared with a non-percentile-based optimal weighting scheme (a single optimal weight across all percentiles) and an equal weighting scheme (equal preference to all models). In these benchmark methods, weights are not conditioned on percentiles. For the equal weighting approach, model weights are simply the inverse of the total number of models, and for the non-percentile-based optimal weighting scheme the entire rank space is used to minimize the mean square error. When specifying the model combination for a particular grid cell, precipitation projections and observed values from four nearby grid cells are included to create a sample size large enough to stabilize the estimation of the error variance–covariance matrix (Clemen and Winkler 1986) and to ensure a smoother representation of the combined output across space. We incorporated threefold cross-validation, which divides the data into three blocks and leaves out each block one at a time for validation while the remaining blocks are used for model development. To ensure that the present-day GCM runs reproduce the observed mean and standard deviation, and that the future projections carry only the change in these moments, we first recenter and rescale the GCM runs using the present-day climate. Thus, a present-day GCM run has the same mean and standard deviation as the observed record, while the future GCM projections retain the climate change signal in the moments (Mehrotra and Sharma 2012). Model performance is assessed based on the root-mean-square error (RMSE) for each tercile of the combined data. A detailed step-by-step description of the three approaches is presented below.

a. Multimodel algorithms

In this study, we consider a total of 10 GCMs resulting in a total of 20 ensemble members. For the purpose of the multimodel algorithm, each member is considered as a model, thereby providing M = 20 runs.

1) Equal weighting

  1. Extract the calibration dataset for the GCM and observed precipitation, $P^{m}_{i,j,t}$ and $O_{i,j,t}$, respectively, at the time step t (t = 1, …, T), for a grid point i (i = 1, …, I), month j (j = 1, …, 12), and for the model m (m = 1, …, M).

  2. Perform recentering and rescaling based on the GCM and the observed moments:
    $$\tilde{P}^{m}_{i,j,t} = \frac{P^{m}_{i,j,t} - \mu(\mathbf{P}^{m}_{i,j})}{\sigma(\mathbf{P}^{m}_{i,j})}\,\sigma(\mathbf{O}_{i,j}) + \mu(\mathbf{O}_{i,j}), \quad (1)$$
    where $\mathbf{P}^{m}_{i,j}$ and $\mathbf{O}_{i,j}$ are the vectors of the model projection and the observed, respectively, and $\mu(\cdot)$ and $\sigma(\cdot)$ denote their mean and standard deviation. The vectors have a dimension of [T, 1].
  3. Sort the vectors $\tilde{\mathbf{P}}^{m}_{i,j}$ and $\mathbf{O}_{i,j}$ separately in ascending order. The terms $\tilde{P}^{m}_{i,j}(p)$ and $O_{i,j}(p)$ are the GCM and the observed responses for the pth percentile (p = 0, …, 100), respectively.

  4. Repeat steps 1–3 for all models (m = 1, …, M) to form the GCM matrix $\mathbf{X}_{i,j}$:
    $$\mathbf{X}_{i,j} = \big[\tilde{P}^{m}_{i,j}(p)\big], \quad p = 0, \ldots, 100, \quad m = 1, \ldots, M. \quad (2)$$
    Similarly, $\mathbf{Y}_{i,j}$ is the observed vector in the rank space, defined as
    $$\mathbf{Y}_{i,j} = \big[O_{i,j}(0), O_{i,j}(1), \ldots, O_{i,j}(100)\big]^{\top}. \quad (3)$$
  5. Calculate the model weights for the equal weighting. The model weights for the simple equal weighting are dependent on the number of models only. The GCM weights remain constant across grid points, models, and months:
    $$w_{m} = \frac{1}{M}, \quad m = 1, \ldots, M, \quad (4)$$
    where $\mathbf{w} = (w_{1}, \ldots, w_{M})$ is the set of model weights, and $w_{m}$ is the model weight for the model m.
  6. Determine the multimodel combined value for the calibration dataset using the following equation (a brief code sketch of these steps is given below):
    $$\hat{Y}_{i,j}(p) = \sum_{m=1}^{M} w_{m}\,\tilde{P}^{m}_{i,j}(p). \quad (5)$$
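A compact sketch of steps 1–6 for a single grid cell and month, assuming the calibration vectors have already been extracted; the function and array names are illustrative and not from the paper.

```python
import numpy as np

def rescale_to_observed(gcm, obs):
    """Eq. (1): recenter and rescale a model vector so that its mean and
    standard deviation match those of the observed calibration vector."""
    return (gcm - gcm.mean()) / gcm.std(ddof=1) * obs.std(ddof=1) + obs.mean()

def equal_weight_combination(gcm_runs, obs):
    """gcm_runs: (M, T) array of M model runs; obs: (T,) observed vector.
    Returns the equally weighted combined response on the sorted (rank) scale,
    following Eqs. (2)-(5)."""
    M = gcm_runs.shape[0]
    X = np.sort(np.array([rescale_to_observed(run, obs) for run in gcm_runs]), axis=1)  # Eq. (2)
    weights = np.full(M, 1.0 / M)   # Eq. (4): equal weights, 1/M for each run
    return weights @ X              # Eq. (5): combined value at each rank position
```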

2) Non-percentile-based optimal weighting

  1. Repeat steps 1–4 from the previous algorithm to extract data, rescale, recenter, and sort the calibration dataset. The final products are the GCM response $\mathbf{X}_{i,j}$ and the observed response $\mathbf{Y}_{i,j}$ in the percentile space.

  2. Calculate the error matrix $\mathbf{E}_{i,j}$ using the sorted observed and the sorted GCM response:
    $$E_{i,j}(p,m) = \tilde{P}^{m}_{i,j}(p) - O_{i,j}(p). \quad (6)$$
  3. Estimate the error variance–covariance matrix $\boldsymbol{\Sigma}_{i,j} = \mathrm{cov}(\mathbf{E}_{i,j})$, of dimension [M, M].

  4. Solve for the model weights by minimizing the combined error variance, subject to the constraint that the weights sum to one:
    $$\min_{\mathbf{w}}\; \mathbf{w}^{\top}\boldsymbol{\Sigma}_{i,j}\,\mathbf{w} \quad \text{subject to} \quad \sum_{m=1}^{M} w_{m} = 1, \quad (7)$$
    where $\mathbf{w} = (w_{1}, \ldots, w_{M})$ is the set of model weights, and $w_{m}$ is the model weight for the model m.
  5. Calculate the multimodel combined dataset from the non-percentile-based optimal weighting scheme (a code sketch of this scheme follows the list):
    $$\hat{Y}_{i,j}(p) = \sum_{m=1}^{M} w_{m}\,\tilde{P}^{m}_{i,j}(p). \quad (8)$$
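A sketch of the non-percentile-based optimal weighting, under the assumption that the weights are obtained from the unconstrained-sign minimum-variance solution noted earlier; the names are illustrative, and a small ridge term is added defensively in case the sample covariance is near singular (the paper instead stabilizes the estimate by pooling nearby grid cells).

```python
import numpy as np

def optimal_weights(X_sorted, obs_sorted, ridge=1e-8):
    """Minimum-variance weights (Eqs. (6)-(7)): minimize w' Sigma w subject to
    sum(w) = 1. X_sorted: (M, P) sorted model responses; obs_sorted: (P,)."""
    E = X_sorted - obs_sorted                   # Eq. (6): error matrix, one row per model
    Sigma = np.cov(E)                           # (M, M) error variance-covariance matrix
    Sigma = Sigma + ridge * np.eye(Sigma.shape[0])  # guard against near-singular Sigma
    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)            # proportional to Sigma^{-1} 1
    return w / (ones @ w)                       # Eq. (7): normalize so weights sum to one

def combine(X_sorted, w):
    """Eq. (8): weighted combination across models at every percentile."""
    return w @ X_sorted
```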

3) Percentile-based optimal weighting

  1. Extract the calibration dataset for the GCM and the observed precipitation, $P^{m}_{i,j,t}$ and $O_{i,j,t}$, respectively, at the time step t (t = 1, …, T), for grid point i (i = 1, …, I), month j (j = 1, …, 12), and model m (m = 1, …, M).

  2. Enhance the size of $\mathbf{P}^{m}_{i,j}$ and $\mathbf{O}_{i,j}$ by including n grid points (we considered n = 4) from nearby grid cells. These cells are selected based on the geodetic distances. We do not worry about the time correspondence between the observed and the simulated, as the weights will be calculated in percentile space. The dimensions of $\mathbf{P}^{m}_{i,j}$ and $\mathbf{O}_{i,j}$ are the same as in the previous algorithms, [T, 1], but T is now n times larger than earlier.

  3. Recenter and rescale based on the GCM and the observed moments:
    $$\tilde{P}^{m}_{i,j,t} = \frac{P^{m}_{i,j,t} - \mu(\mathbf{P}^{m}_{i,j})}{\sigma(\mathbf{P}^{m}_{i,j})}\,\sigma(\mathbf{O}_{i,j}) + \mu(\mathbf{O}_{i,j}). \quad (9)$$
  4. Sort the vectors $\tilde{\mathbf{P}}^{m}_{i,j}$ and $\mathbf{O}_{i,j}$ separately in ascending order. The terms $\tilde{P}^{m}_{i,j}(p)$ and $O_{i,j}(p)$ are the GCM and the observed responses for the pth percentile (p = 0, …, 100), respectively.

  5. Repeat steps 1–3 for all models (m = 1, …, M) to form the GCM matrix $\mathbf{X}_{i,j}$:
    $$\mathbf{X}_{i,j} = \big[\tilde{P}^{m}_{i,j}(p)\big], \quad p = 0, \ldots, 100, \quad m = 1, \ldots, M. \quad (10)$$
  6. Construct the neighbor matrix $\mathbf{N}_{p}$ for the k neighbors of the percentile p. Our objective is to reduce the uncertainty in projecting $\hat{Y}_{i,j}(p)$. For the percentile p, select the k/2 rows above and below the pth row of the matrix $\mathbf{X}_{i,j}$ to form $\mathbf{N}_{p}$:
    $$\mathbf{N}_{p} = \big[\tilde{P}^{m}_{i,j}(q)\big], \quad q = p - k/2, \ldots, p + k/2, \quad m = 1, \ldots, M. \quad (11)$$
    And the corresponding observed response in the percentile space is
    $$\mathbf{Y}_{p} = \big[O_{i,j}(p - k/2), \ldots, O_{i,j}(p + k/2)\big]^{\top}. \quad (12)$$
  7. Calculate the error matrix $\mathbf{E}_{p}$ by subtracting the corresponding observed response in the percentile space from the elements of $\mathbf{N}_{p}$:
    $$E_{p}(q,m) = \tilde{P}^{m}_{i,j}(q) - O_{i,j}(q). \quad (13)$$
  8. Estimate the error variance–covariance matrix
    $$\boldsymbol{\Sigma}_{p} = \mathrm{cov}(\mathbf{E}_{p}). \quad (14)$$
  9. Solve for the model weights by minimizing the combined error variance. As in the last algorithm, the model weights are subject to a sum-to-one constraint:
    $$\min_{\mathbf{w}_{p}}\; \mathbf{w}_{p}^{\top}\boldsymbol{\Sigma}_{p}\,\mathbf{w}_{p} \quad \text{subject to} \quad \sum_{m=1}^{M} w_{p,m} = 1, \quad (15)$$
    where $\mathbf{w}_{p} = (w_{p,1}, \ldots, w_{p,M})$ is the set of model weights associated with the pth percentile, and $w_{p,m}$ is the model weight for the model m.
  10. Calculate the model combined value for the calibration using the following equation (a code sketch of this percentile-conditioned weighting follows the list):
    $$\hat{Y}_{i,j}(p) = \sum_{m=1}^{M} w_{p,m}\,\tilde{P}^{m}_{i,j}(p). \quad (16)$$
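A sketch of the percentile-conditioned weighting, assuming the k neighbors are taken as roughly k/2 sorted values on either side of percentile p; that windowing convention is a reading of step 6 above, and the names are illustrative.

```python
import numpy as np

def percentile_weights(X_sorted, obs_sorted, p, k, ridge=1e-8):
    """Weights conditioned on percentile index p, estimated from a window of
    about k neighboring percentiles around p (Eqs. (11)-(15)).
    X_sorted: (M, P) sorted model responses; obs_sorted: (P,) sorted observed."""
    P = X_sorted.shape[1]
    lo, hi = max(0, p - k // 2), min(P, p + k // 2 + 1)
    N = X_sorted[:, lo:hi]                                  # Eq. (11): neighbor matrix
    y = obs_sorted[lo:hi]                                   # Eq. (12): observed neighbors
    E = N - y                                               # Eq. (13): error matrix
    Sigma = np.cov(E) + ridge * np.eye(X_sorted.shape[0])   # Eq. (14): error covariance
    ones = np.ones(X_sorted.shape[0])
    w = np.linalg.solve(Sigma, ones)
    return w / (ones @ w)                                   # Eq. (15): weights sum to one

def percentile_combination(X_sorted, obs_sorted, k=60):
    """Eq. (16): combined projection with weights re-estimated at every percentile."""
    return np.array([percentile_weights(X_sorted, obs_sorted, p, k) @ X_sorted[:, p]
                     for p in range(X_sorted.shape[1])])
```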

b. Application on validation dataset

This subsection discusses the extraction and model combination of the validation data. Model weights are obtained from the model calibration and applied directly to the validation dataset or to future projections. We compare the performance of the three multimodel combination approaches in reducing model uncertainty for the validation dataset, for which observed monthly precipitation values are available. The same weights are applied to future projections, for which observed values are obviously not available. We first obtained the GCM projections and observed precipitation for the model development (including calibration and threefold validation) period (1950–99). The GCM projections were then centered and scaled using the observed moments from the calibration dataset; the process thus retains the nonstationarity present in the future projections. The following steps apply to both the validation dataset and the future GCM projections.

  1. Extract the validation dataset for the GCM and the observed precipitation, $P^{m}_{i,j,t}$ and $O_{i,j,t}$, respectively, at the time step t (t = 1, …, T2), for grid point i (i = 1, …, I), month j (j = 1, …, 12), and model m (m = 1, …, M). Here $\mathbf{P}^{m}_{i,j}$ and $\mathbf{O}_{i,j}$ are the vectors of the model projection and the observed, respectively. First, apply the bias correction using the calibration-period (superscript "cal") moments:
    $$\tilde{P}^{m}_{i,j,t} = \frac{P^{m}_{i,j,t} - \mu(\mathbf{P}^{m,\mathrm{cal}}_{i,j})}{\sigma(\mathbf{P}^{m,\mathrm{cal}}_{i,j})}\,\sigma(\mathbf{O}^{\mathrm{cal}}_{i,j}) + \mu(\mathbf{O}^{\mathrm{cal}}_{i,j}). \quad (17)$$
    The vectors have a dimension of [T2, 1].
  2. Sort the vectors $\tilde{\mathbf{P}}^{m}_{i,j}$ and $\mathbf{O}_{i,j}$ in ascending order. The terms $\tilde{P}^{m}_{i,j}(p)$ and $O_{i,j}(p)$ are the GCM and the observed responses for the pth percentile (p = 0, …, 100), respectively.

  3. Repeat steps 1 and 2 for all models (m = 1, …, M).

  4. Apply the weights from the calibration dataset to obtain the model combined projections for the validation period:
    $$\hat{Y}_{i,j}(p) = \sum_{m=1}^{M} w_{p,m}\,\tilde{P}^{m}_{i,j}(p), \quad (18)$$
    where $w_{p,m}$ is the calibrated weight for model m at percentile p. For equal weighting and non-percentile-based optimal weighting, the sets of model weights from Eqs. (4) and (7), respectively, are used and are constant across percentiles.
  5. Calculate the RMSE in projecting precipitation values within the percentile range [p1, p2] (see the sketch following this list):
    $$\mathrm{RMSE}_{[p_{1},p_{2}]} = \sqrt{\frac{1}{s}\sum_{p=p_{1}}^{p_{2}} \big[\hat{Y}_{i,j}(p) - O_{i,j}(p)\big]^{2}}, \quad (19)$$
    where s is the number of observations between p1 and p2.
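A minimal sketch of the tercile RMSE in Eq. (19), assuming the combined and observed responses are indexed on the same 0–100 percentile grid; the names are illustrative.

```python
import numpy as np

def rmse_over_percentiles(combined, obs_sorted, p1, p2):
    """Eq. (19): RMSE of the combined projection against the sorted observed,
    restricted to percentile indices p1..p2 (e.g., lower tercile = 0-33,
    upper tercile = 66-100, overall = 0-100)."""
    err = combined[p1:p2 + 1] - obs_sorted[p1:p2 + 1]
    return np.sqrt(np.mean(err ** 2))
```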

c. Model development

In this section, we discuss the details of the model development and the application of the model weights to future projections. Details of the model development and validation steps are provided in Fig. 1. Both the GCM and the observed datasets from the current period are considered for the model development and validation. We divided the temporal dimension of monthly precipitation at grid point i and for month j into two parts, T1 and T2, where T1 and T2 cumulatively represent the entire observed time period T; T1 is used for the model development, while T2 is left for the validation. The validation process that we employ is a threefold validation (Nguyen et al. 2016). We used two-thirds of the dataset for the model development and the rest for validation; T1, the calibration dataset, was chosen in three alternative ways, using the first, last, and middle two-thirds of the entire dataset. The validation is therefore executed for the same grid point and month but with three nonoverlapping validation blocks.
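A short sketch of one plausible reading of the threefold splits described above (the exact block boundaries are not specified here, so the split below is illustrative only).

```python
import numpy as np

def threefold_splits(n_years=50):
    """One plausible reading of the threefold scheme: the record is divided into
    three non-overlapping validation blocks of roughly one-third each, and for
    each fold the complementary two-thirds (about 34 yr of a 50-yr record) is
    used for calibration."""
    idx = np.arange(n_years)
    blocks = np.array_split(idx, 3)     # three contiguous validation blocks
    return [(np.setdiff1d(idx, val), val) for val in blocks]

# example: list of (calibration_years, validation_years) index pairs
for cal, val in threefold_splits(50):
    print(len(cal), len(val))
```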

Fig. 1. Flowchart of model development, validation, and application to future projections.

Given that we have 50 years of data, the calibration blocks consist of 34 years of data for the equal weighting and the non-percentile-based optimal weighting. For the percentile-based optimal weighting, however, the sample size is increased to 200 values by pooling data from neighboring grid points; of these 200 values, 136 are used for calibration. Once the calibration and validation datasets are identified, we applied the three combination schemes to the calibration data and determined the weights. The weights were retained for application to the validation dataset and to future projections. For a particular approach, we applied the model weights on each validation block. To compare the performance of the three approaches, we divided the validation block into upper, lower, and middle terciles for evaluation. Monthly precipitation values between the zeroth and 33rd percentiles and between the 66th and 100th percentiles represent the lower tercile and the upper tercile, respectively, and the entire rank space is designated as the overall percentile range. RMSE values for the validation are calculated for each tercile. The idea is to understand the efficiency of a model combination approach in projecting monthly precipitation values conditioned on various percentiles within the validation block. RMSE values for a particular tercile are averaged across the three validation datasets.

d. Sensitivity analysis

A fixed value of k, the number of neighboring values in the rank space used to stabilize the estimation of the error variance–covariance matrix, is chosen by performing a detailed sensitivity analysis. Figure 2a shows the fraction of total grid points where percentile-based optimal weighting performs better than equal weighting for varying values of k. The analysis is conducted with the calibration dataset. We calculated the RMSEs in estimating the upper, lower, and overall terciles of the calibration dataset. As k increases beyond 50, optimal weighting performs better than equal weighting in reducing the uncertainty for all terciles. Optimal weighting exhibits better performance in projecting the upper extremes of precipitation (percentiles > 66) than the lower extremes (percentiles < 33). Figure 2b shows the model weights, averaged over all percentiles and all grid points over the CONUS, for different k values. Model weights stabilize as k approaches 50, and one or two GCMs emerge as apparently superior to the other GCMs (e.g., CESM-CAM5 in Fig. 2b). We decided to employ a k value of 60 for the subsequent analysis. A k of 60 exceeds the number of observations available at a single grid cell, which is what necessitates, and justifies, including nearby grid points to stabilize the covariance matrix. The number is similar to that reported by Khan et al. (2014) in reducing the uncertainty in sea surface temperature forecasts.

Fig. 2. (a) Calibration results showing the fraction of grid points where percentile-based optimal weighting performs better than equal weighting with lower values of RMSEs and (b) model weights from the performance-based optimal weighting scheme, averaged across all percentiles, with regard to k values.

4. Results

a. Distribution of optimal weights

Three schemes (percentile-based optimal weighting, non-percentile-based optimal weighting, and equal weighting) are compared based on the reduction in RMSE values during validation. RMSE is calculated for three subsets of the validation dataset: all percentiles, the upper tercile, and the lower tercile [henceforth referred to as RMSE (overall/upper/lower)]. A first question is whether the percentile-based GCM weights vary appreciably across percentiles or remain essentially constant. Figure 3 shows GCM weights across percentiles over three grid cells from the model development period. The larger panels are for the percentile-based scheme, whereas the smaller panels are for the non-percentile-based scheme. In the equal weighting approach, the weight for each model is 0.05, as the total number of ensemble members treated as models is 20. The GCM weights for the two optimal weighting schemes, in contrast, differ across grid cells and months. Representative grid cells are randomly selected from the eastern, western, and central parts of the CONUS, and GCM weights are plotted for the month of January.

Fig. 3. Model weights over three grid cells, plotted against percentiles. The larger panels represent GCM weights from the percentile-based optimal weighting scheme, whereas the smaller panels are from the non-percentile-based optimal weighting scheme. Model weights are calculated for January.

From Fig. 3, ACCESS1.3 and CNRM-CM5 perform better over grid cell 1 under the percentile-based scheme, whereas under the non-percentile-based scheme ACCESS1.3 and CCSM5 are superior to the other GCMs in the same grid cell. Similar observations are found for the other two grid cells as well. We conclude that the GCM weights conditioned on percentiles differ from the GCM weights under the non-percentile-based approach, implying different capabilities of alternate GCMs in predicting precipitation of different magnitudes. Table 2 provides the RMSEs of the three multimodel combination schemes for the three grid cells for the validation period. The percentile- and non-percentile-based optimal weighting schemes both perform better than the equal weighting approach. For grid cell 1, percentile-based optimal weighting has a higher RMSE (upper) value than the non-percentile-based scheme, whereas for grid cell 2 the non-percentile-based scheme performs better with a reduced value of RMSE (lower). For all other cases, percentile-based optimal weighting has a lower RMSE value than non-percentile-based optimal weighting.

Table 2. RMSE values for three grid cells for different schemes. Results are for January.

b. Performance under validation

We extended the comparison between the three schemes to the CONUS by calculating RMSEs for all grid points. Figure 4 shows the grid cells that experience improvements from optimal weighting for January. The absolute difference in RMSEs between two schemes (Figs. 4a–c indicate overall, lower, and upper, respectively) is marked with color. The optimal weighting schemes exhibit larger differences in RMSE for the lower tercile than for the other terciles. Equal weighting can already reduce the uncertainty in the median of the projection; hence the reduction in RMSE (overall) is not as large as the reductions illustrated in Figs. 4b and 4c. Grid cells with improved RMSE values are scattered all over the CONUS without forming significant clusters, although the RMSE is lower over the southern and western United States. Comparing the two optimal combination methods, the percentile-based optimal weighting scheme performs better on most of the grid points for RMSE (overall) and RMSE (lower), whereas non-percentile-based weighting shows superiority over the percentile-based scheme for RMSE (upper). We compared the three approaches with each other by calculating the fraction of total grid points on which one approach exhibits a lower value of RMSE than the others.

Fig. 4. Grid cells where the first scheme for each panel exhibits a lower RMSE value compared to the second scheme are plotted. Colors represent the absolute difference (percent) in RMSE values from the two schemes. Shown are (a) RMSE (overall), (b) RMSE (lower), and (c) RMSE (upper). For example, the grid points in the left column and top row are the absolute difference in RMSEs (overall) between non-percentile-based optimal weighting and equal weighting. Results are for January.

The results for January are summarized in Table 3. The optimal model combination schemes show better performance than equal weighting for more than 55% of the grid cells; the non-percentile-based optimal weighting scheme, for example, improves on equal weighting over 56% of the total grid cells. However, the differences in performance between the percentile- and non-percentile-based optimal schemes are slim. The non-percentile-based scheme performs better over 60% of the grid cells in reducing uncertainty in projecting the upper tercile, whereas the percentile-based scheme exhibits either the same or improved performance for the overall and lower terciles. For the upper tercile, the non-percentile-based scheme performs better than the percentile-based scheme because the percentile-based model weights reach a limiting state in which the weights stop changing across percentiles. The pooling of neighboring grid points needed to support the chosen k also has the potential to reduce the variance of precipitation values within the upper tercile.

Table 3. Percent of grid points on which the first scheme has a lower RMSE value than the second scheme. Results are for January.

c. Multimodel future projections

One major advantage of the percentile-based optimal weighting scheme is that the weights do not vary with the time period of the prediction considered. The GCM weights obtained from the calibration period are therefore applied, exactly as for the validation period, to the rank space of GCM projections for 2000–49 over the CONUS.

Percentile values of future monthly precipitation for January and July are averaged over the selected NCDC (now known as NCEI) climate regions (Fig. 5). NCDC climate regions are considered for understanding current climate anomalies in a historical perspective (Karl and Koss 1984). The confidence bound represents the 20 ensemble members, and the red and black lines are the observed (1950–99) and multimodel combined projections, respectively. The multimodel combined percentiles lie within the confidence bound spanned by the 20 ensemble members. During January, an upward shift in the winter precipitation compared to the earlier period confirms the increase in the extremes over the selected climate regions. Over the central United States, changes in precipitation during July are not very large compared to the second half of the twentieth century. Monthly precipitation at all percentiles increases for the northeastern United States during July and January; however, the amount of increase during January is higher for the middle terciles. Both months experience a shift in monthly precipitation for the upper terciles over the southwestern United States. Over the western United States, the shift in monthly precipitation is larger during January than during July.

Fig. 5. Nonexceedance probabilities of monthly precipitation for the model combination projection (black) and projections from 20 ensemble members (blue band) for the period 2000–49 and for the months of January and July. Nonexceedance probabilities are plotted for four NCDC climate regions: the northeastern (NE), central (C), southwestern (SW), and western (W) United States. Observed percentiles for the period 1950–99 are plotted as the red line. The percentile-based optimal combination approach is used.

Figures 6a and 6b show changes in the 25th and 75th percentiles between the two periods, 1950–99 and 2000–49. Winter extreme precipitation (75th percentile and higher) increases by more than 50 mm month−1 relative to the earlier period over the Frost Belt of the United States. During winter, both the 25th and 75th percentiles of precipitation increase. Results for July are mixed, with typical declines in precipitation extremes of 20–30 mm month−1; the central United States, however, exhibits an increase in extreme precipitation for July.

Fig. 6. Grid cells exhibiting changes in the (a) 25th and (b) 75th percentile values between the periods 1950–99 and 2000–49. RCP8.5 is used for future projections.

We also examined the extent of the difference between equal weighting and percentile-based optimal weighting in projecting future monthly precipitation. The difference is calculated by subtracting the equally weighted value from the percentile-based optimally weighted value and is expressed as a percentage of the equally combined projection. Results for four regions and two months are shown in Fig. 7.

Fig. 7. The percent difference between the model combined future projections (2000–49) from two approaches, equal weighting and percentile-based optimal weighting, plotted against percentiles. Results are shown for four NCDC climate regions and for January and July (as in Fig. 5). The percent difference is calculated as the deviation of the optimal weighting from the equal weighting.

Multimodel projected monthly precipitation amounts from the equal weighting and the percentile-based optimal weighting differ by 10%–15%. During both months, the difference is typically larger for the lower terciles. The model combined lower-tercile values from the optimal weighting exceed those from the equal weighting during January by 8% over the central and northeastern United States. In contrast, the optimal weighting gives markedly lower values than the equal weighting for the lower terciles during July over the Southwest and the West. For the upper terciles, however, the differences between the two approaches are slim.

5. Summary

A multimodel combination approach based on GCM performance is proposed to reduce the uncertainty in climate extreme projections. It is hypothesized that the nature of the model combination should vary depending on the relative magnitude of modeled precipitation, instead of assuming that the combination is a function of the number of models alone. Model weights calculated from the approach are conditioned on percentiles. Different models receive different weights at different percentiles when assessed in a validation setting, implying that the advantages of each model in the tails of the distribution are better exploited by this approach. The approach is applied to precipitation projections of 20 GCM ensemble members over the CONUS. At each grid point, the approach is calibrated by pooling data from neighboring grid points and is validated by comparison with the equal weighting and non-percentile-based optimal combination approaches. The study shows an improvement of the percentile-based approach over the other two and markedly different weights (equivalent here to model preferences) depending on whether low or high percentiles are of interest.

The optimal combination techniques perform better than the equal weighting approach for error measures across all terciles, whereas the equal weighting scheme inherently performs better in projecting the median of the precipitation projections. The percentile-based optimal weighting scheme is found to improve performance (in validation) over the non-percentile-based scheme comprehensively across the lower and middle terciles, with potential for greater improvements with the availability of additional data or more sophisticated procedures for pooling across space.

Finally, the model weights calculated during calibration are applied to the future projections for the period 2000–49. The central, southwestern, and western United States experience increases in the upper percentiles of monthly precipitation during the winter months. Multimodel projected winter precipitation exhibits an expected increase of up to 50 mm month−1, but expected changes in precipitation for the summer and fall months are small. The changes in extreme monthly precipitation reported by the current study cannot be verified because of the absence of observed data for the future period; however, the reported future changes are considered reliable because the optimal model combination is substantially validated with a historical dataset. Further, the future changes projected by the optimal combination are compared with the changes exhibited by equal weighting. The differences between the equally combined and optimally combined model projected values are typically small (within ±10%) over the different climate regions of the United States.

By design, the percentile-based approach for model combination should outperform the non-percentile-based approaches when a clear preference exists for one or more models at one end of the probability distribution. Modeled precipitation attributes are known to be sensitive both to the physics parameterizations used and to the manner in which the model balances fluxes (Johnson et al. 2011). Furthermore, certain models are known to overestimate low rainfall compared to others (the so-called drizzle effect), with the models doing so (Demirel and Moradkhani 2016) shifting the probability distribution of the simulated precipitation toward the lower end. Consequently, there is reason to expect that the percentile-based approach for model combination has merit, as long as the models being combined have varying abilities and characteristics (Hagedorn et al. 2005; Gleckler et al. 2008; Reichler and Kim 2008; Knutti et al. 2013), exhibiting biases that differ from each other in magnitude, form, and spatiotemporal extent (Knutti et al. 2010).

No single model emerged as dominant over the others on most of the grid points over the CONUS, which restricts us from interpreting the physical significance of the model weights; however, BCC-CSM and ACCESS emerge as two dominant GCMs on many grid points. The proposed optimal combination approach treats ensemble members individually, resulting in an increased number of members available for model combination. The availability of more ensemble members increases (reduces) the weight placed on the best (worst) performing members under optimal combination, thereby reducing the estimates of internal variability associated with the climatic attribute (i.e., unpredictable noise). This is consistent with Weigel et al. (2010; see page 4186 under remark 3 in section 3c therein), who discussed how increasing the number of ensemble members could reduce the unpredictable noise, with optimal combination giving higher weights to the best performing models. Steinschneider et al. (2015) reported that ignoring intermodel correlation on a regional scale can underestimate the variance of climate change. The current study converts the projected/simulated monthly precipitation into the rank space, and hence the intermodel correlation of the ranked (sorted) series will always be one. We expect the proposed approach to distinguish correlated models (in the original space) by assigning higher weights to one of the correlated models while leaving the rest with minimal or no weight. Many studies have analyzed CMIP5 projections of future changes in precipitation at different temporal and spatial scales (e.g., Chadwick et al. 2013; Kumar et al. 2013; Sillmann et al. 2013; Mehran et al. 2014). Sillmann et al. (2013) also found that BCC-CSM and ACCESS, the two models that exhibit superiority on multiple grid points in our study, estimated precipitation indices consistently across reanalyses. There is spatial heterogeneity in the reliability of GCMs for predicting climate extremes over the CONUS; however, it was beyond the scope of the current study to evaluate the reasons behind the reliable predictions from BCC-CSM and ACCESS. While the current study used the conterminous United States as the focus domain, one should expect the advantages of the percentile-based approach to become even more evident as the assessment is extended to other regions. Similarly, given the added complexity that a percentile-based optimization brings into the equation, applications with longer observational datasets or better designed ways of pooling information across space should make the estimation of the covariance matrices more stable, imparting greater consistency and accuracy to the resulting projections.

Acknowledgments

This research is supported in part by the National Science Foundation under Grants CBET-1204368 and CBET-0954405. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We acknowledge the World Climate Research Programme’s Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modeling groups (listed in Table 1) for producing and making available their model output. For CMIP the U.S. Department of Energy’s Program for Climate Model Diagnosis and Intercomparison provides coordinating support and led the development of software infrastructure in partnership with the Global Organization for Earth System Science Portals. The “Downscaled CMIP3 and CMIP5 Climate and Hydrology Projections” archive is available at http://gdo-dcp.ucllnl.org/downscaled_cmip_projections/.

REFERENCES

  • Alexander, L. V., and Coauthors, 2006: Global observed changes in daily climate extremes of temperature and precipitation. J. Geophys. Res., 111, D05109, https://doi.org/10.1029/2005JD006290.

    • Search Google Scholar
    • Export Citation
  • Beniston, M., and Coauthors, 2007: Future extreme events in European climate: An exploration of regional climate model projections. Climatic Change, 81, 7195, https://doi.org/10.1007/s10584-006-9226-z.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Broderick, C., T. Matthews, R. L. Wilby, S. Bastola, and C. Murphy, 2016: Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods. Water Resour. Res., 52, 83438373, https://doi.org/10.1002/2016WR018850.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Chadwick, R., I. Boutle, and G. Martin, 2013: Spatial patterns of precipitation change in CMIP5: Why the rich do not get richer in the tropics. J. Climate, 26, 38033822, https://doi.org/10.1175/JCLI-D-12-00543.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Chowdhury, S., and A. Sharma, 2009: Long-range Niño-3.4 predictions using pairwise dynamic combinations of multiple models. J. Climate, 22, 793805, https://doi.org/10.1175/2008JCLI2210.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Chowdhury, S., and A. Sharma, 2011: Global sea surface temperature forecasts using a pairwise dynamic combination approach. J. Climate, 24, 18691877, https://doi.org/10.1175/2010JCLI3632.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Clemen, R. T., and R. L. Winkler, 1986: Combining economic forecasts. J. Bus. Econ. Stat., 4, 3946.

  • Das Bhowmik, R., A. Sankarasubramanian, T. Sinha, J. Patskoski, G. Mahinthakumar, and K. E. Kunkel, 2017: Multivariate downscaling approach preserving cross correlations across climate variables for projecting hydrologic fluxes. J. Hydrometeor., 18, 21872204, https://doi.org/10.1175/JHM-D-16-0160.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • DelSole, T., 2007: A Bayesian framework for multimodel regression. J. Climate, 20, 28102826, https://doi.org/10.1175/JCLI4179.1.

  • Demirel, M. C., and H. Moradkhani, 2016: Assessing the impact of CMIP5 climate multi-modeling on estimating the precipitation seasonality and timing. Climatic Change, 135, 357372, https://doi.org/10.1007/s10584-015-1559-z.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Devineni, N., and A. Sankarasubramanian, 2010a: Improved categorical winter precipitation forecasts through multimodel combinations of coupled GCMs. Geophys. Res. Lett., 37, L24704, https://doi.org/10.1029/2010GL044989.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Devineni, N., and A. Sankarasubramanian, 2010b: Improving the prediction of winter precipitation and temperature over the continental United States: Role of the ENSO state in developing multimodel combinations. Mon. Wea. Rev., 138, 24472468, https://doi.org/10.1175/2009MWR3112.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Fowler, H. J., S. Blenkinsop, and C. Tebaldi, 2007: Linking climate change modelling to impacts studies: Recent advances in downscaling techniques for hydrological modelling. Int. J. Climatol., 27, 15471578, https://doi.org/10.1002/joc.1556.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Gangopadhyay, S., M. Clark, K. Werner, D. Brandon, and B. Rajagopalan, 2004: Effects of spatial and temporal aggregation on the accuracy of statistically downscaled precipitation estimates in the upper Colorado River basin. J. Hydrometeor., 5, 11921206, https://doi.org/10.1175/JHM-391.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Giorgi, F., and Coauthors, 2001: Regional climate information: Evaluation and projections. Climate Change 2001: The Scientific Basis, J. T. Houghton et al., Eds., Cambridge University Press, 583–638.

  • Gleckler, P. J., K. E. Taylor, and C. Doutriaux, 2008: Performance metrics for climate models. J. Geophys. Res., 113, D06104, https://doi.org/10.1029/2007JD008972.

    • Search Google Scholar
    • Export Citation
  • Hagedorn, R., F. J. Doblas-Reyes, and T. N. Palmer, 2005: The rationale behind the success of multi-model ensembles in seasonal forecasting—I. Basic concept. Tellus, 57A, 219233, https://doi.org/10.1111/j.1600-0870.2005.00103.x.

    • Search Google Scholar
    • Export Citation
  • Hawkins, E., and R. Sutton, 2009: The potential to narrow uncertainty in regional climate predictions. Bull. Amer. Meteor. Soc., 90, 10951107, https://doi.org/10.1175/2009BAMS2607.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Hay, L. E., and M. P. Clark, 2003: Use of statistically and dynamically downscaled atmospheric model output for hydrologic simulations in three mountainous basins in the western United States. J. Hydrol., 282, 5675, https://doi.org/10.1016/S0022-1694(03)00252-X.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • IPCC, 2013: Summary for policymakers. Climate Change 2013: The Physical Science Basis, T. F. Stocker et al., Eds., Cambridge University Press, 1–29, www.ipcc.ch/report/ar5/wg1.

  • Johnson, F., S. Westra, A. Sharma, and A. J. Pitman, 2011: An assessment of GCM skill in simulating persistence across multiple time scales. J. Climate, 24, 36093623, https://doi.org/10.1175/2011JCLI3732.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Karl, T. R., and W. J. Koss, 1984: Regional and national monthly, seasonal and annual temperature weighted by area 1895–1983. National Climatic Data Center Tech. Rep. Historical Climatology Series 4-3, 38 pp.

  • Khan, M. Z. K., R. Mehrotra, A. Sharma, and A. Sankarasubramanian, 2014: Global sea surface temperature forecasts using an improved multimodel approach. J. Climate, 27, 35053515, https://doi.org/10.1175/JCLI-D-13-00486.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Knutti, R., R. Furrer, C. Tebaldi, J. Cermak, and G. A. Meehl, 2010: Challenges in combining projections from multiple climate models. J. Climate, 23, 27392758, https://doi.org/10.1175/2009JCLI3361.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Knutti, R., D. Masson, and A. Gettelman, 2013: Climate model genealogy: Generation CMIP5 and how we got there. Geophys. Res. Lett., 40, 11941199, https://doi.org/10.1002/grl.50256.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Krishnamurti, T. N., C. M. Kishtawal, T. E. LaRow, D. R. Bachiochi, Z. Zhang, C. E. Williford, S. Gadgil, and S. Surendran, 1999: Improved weather and seasonal climate forecasts from multimodel superensemble. Science, 285, 15481550, https://doi.org/10.1126/science.285.5433.1548.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Kumar, S., V. Merwade, J. L. Kinter III, and D. Niyogi, 2013: Evaluation of temperature and precipitation trends and long-term persistence in CMIP5 twentieth-century climate simulations. J. Climate, 26, 41684185, https://doi.org/10.1175/JCLI-D-12-00259.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Leung, R. L., Y. Qian, X. Bian, and A. Hunt, 2003: Hydroclimate of the western United States based on observations and regional climate simulation of 1981–2000. Part II: Mesoscale ENSO anomalies. J. Climate, 16, 19121928, https://doi.org/10.1175/1520-0442(2003)016<1912:HOTWUS>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Li, W., and A. Sankarasubramanian, 2012: Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination. Water Resour. Res., 48, W12516, https://doi.org/10.1029/2011WR011380.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Mason, S. J., and G. M. Mimmack, 2002: Comparison of some statistical methods of probabilistic forecasting of ENSO. J. Climate, 15, 829, https://doi.org/10.1175/1520-0442(2002)015<0008:COSSMO>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Maurer, E. P., and H. G. Hidalgo, 2008: Utility of daily vs. monthly large-scale climate data: An intercomparison of two statistical downscaling methods. Hydrol. Earth Syst. Sci., 12, 551563, https://doi.org/10.5194/hess-12-551-2008.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Maurer, E. P., A. W. Wood, J. C. Adam, D. P. Lettenmaier, and B. Nijssen, 2002: A long-term hydrologically based data set of land surface fluxes and states for the conterminous United States. J. Climate, 15, 32373251, https://doi.org/10.1175/1520-0442(2002)015<3237:ALTHBD>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Mehran, A., A. AghaKouchak, and T. J. Phillips, 2014: Evaluation of CMIP5 continental precipitation simulations relative to satellite-based gauge-adjusted observations. J. Geophys. Res. Atmos., 119, 1695–1707, https://doi.org/10.1002/2013JD021152.
  • Mehrotra, R., and A. Sharma, 2012: An improved standardization procedure to remove systematic low frequency variability biases in GCM simulations. Water Resour. Res., 48, W12601, https://doi.org/10.1029/2012WR012446.
  • Nahar, J., F. Johnson, and A. Sharma, 2017: Assessing the extent of non-stationary biases in GCMs. J. Hydrol., 549, 148–162, https://doi.org/10.1016/j.jhydrol.2017.03.045.
  • Nguyen, H., R. Mehrotra, and A. Sharma, 2016: Correcting for systematic biases in GCM simulations in the frequency domain. J. Hydrol., 538, 117–126, https://doi.org/10.1016/j.jhydrol.2016.04.018.
  • Plummer, N., and Coauthors, 1999: Changes in climate extremes over the Australian region and New Zealand during the twentieth century. Climatic Change, 42, 183–202, https://doi.org/10.1023/A:1005472418209.
  • Qin, N., X. Chen, G. Fu, J. Zhai, and X. Xue, 2010: Precipitation and temperature trends for the southwest China: 1960–2007. Hydrol. Processes, 24, 3733–3744, https://doi.org/10.1002/hyp.7792.
  • Rajagopalan, B., U. Lall, and S. E. Zebiak, 2002: Categorical climate forecasts through regularization and optimal combination of multiple GCM ensembles. Mon. Wea. Rev., 130, 1792–1811, https://doi.org/10.1175/1520-0493(2002)130<1792:CCFTRA>2.0.CO;2.
  • Reichler, T., and J. Kim, 2008: How well do coupled models simulate today’s climate? Bull. Amer. Meteor. Soc., 89, 303–311, https://doi.org/10.1175/BAMS-89-3-303.
  • Robertson, A. W., U. Lall, S. E. Zebiak, and L. Goddard, 2004: Improved combination of multiple atmospheric GCM ensembles for seasonal prediction. Mon. Wea. Rev., 132, 2732–2744, https://doi.org/10.1175/MWR2818.1.
  • Sanderson, B. M., R. Knutti, and P. Caldwell, 2015: Addressing interdependency in a multimodel ensemble by interpolation of model properties. J. Climate, 28, 5150–5170, https://doi.org/10.1175/JCLI-D-14-00361.1.
  • Sillmann, J., V. V. Kharin, X. Zhang, F. W. Zwiers, and D. Bronaugh, 2013: Climate extremes indices in the CMIP5 multimodel ensemble: Part 1. Model evaluation in the present climate. J. Geophys. Res. Atmos., 118, 1716–1733, https://doi.org/10.1002/jgrd.50203.
  • Steinschneider, S., R. McCrary, L. O. Mearns, and C. Brown, 2015: The effects of climate model similarity on probabilistic climate projections and the implications for local, risk-based adaptation planning. Geophys. Res. Lett., 42, 5014–5044, https://doi.org/10.1002/2015GL064529.
  • Taylor, K. E., R. J. Stouffer, and G. A. Meehl, 2012: An overview of CMIP5 and the experiment design. Bull. Amer. Meteor. Soc., 93, 485–498, https://doi.org/10.1175/BAMS-D-11-00094.1.
  • Timmermann, A., 2006: Forecast combinations. Handbook of Economic Forecasting, Vol. 1, G. Elliott, C. Granger, and A. Timmermann, Eds., Elsevier, 135–196.
  • van Pelt, S. C., J. J. Beersma, T. A. Buishand, B. J. J. M. van den Hurk, and P. Kabat, 2012: Future changes in extreme precipitation in the Rhine basin based on global and regional climate model simulations. Hydrol. Earth Syst. Sci., 16, 4517–4530, https://doi.org/10.5194/hess-16-4517-2012.
  • Vrugt, J. A., and B. A. Robinson, 2007: Treatment of uncertainty using ensemble methods: Comparison of sequential data assimilation and Bayesian model averaging. Water Resour. Res., 43, W01411, https://doi.org/10.1029/2005WR004838.
  • Wasko, C., A. Sharma, and P. Rasmussen, 2013: Improved spatial prediction: A combinatorial approach. Water Resour. Res., 49, 3927–3935, https://doi.org/10.1002/wrcr.20290.
  • Weigel, A. P., R. Knutti, M. A. Liniger, and C. Appenzeller, 2010: Risks of model weighting in multimodel climate projections. J. Climate, 23, 4175–4191, https://doi.org/10.1175/2010JCLI3594.1.
  • Wilby, R. L., C. W. Dawson, and E. M. Barrow, 2002: SDSM—A decision support tool for the assessment of regional climate change impacts. Environ. Modell. Software, 17, 145–157, https://doi.org/10.1016/S1364-8152(01)00060-3.
  • Wood, A. W., L. R. Leung, V. Sridhar, and D. P. Lettenmaier, 2004: Hydrologic implications of dynamical and statistical approaches to downscaling climate model outputs. Climatic Change, 62, 189–216, https://doi.org/10.1023/B:CLIM.0000013685.99609.9e.
  • Fig. 1.

    Flowchart of model development, validation, and application to future projections.

  • Fig. 2.

    (a) Calibration results showing the fraction of grid points at which percentile-based optimal weighting performs better than equal weighting (i.e., yields lower RMSE values), and (b) model weights from the performance-based optimal weighting scheme, averaged across all percentiles, as a function of k.

  • Fig. 3.

    Model weights over three grid cells, shown as a function of percentile. The larger panels show GCM weights from the percentile-based optimal weighting scheme, whereas the smaller panels show weights from the non-percentile-based optimal weighting scheme. Model weights are calculated for January.

  • Fig. 4.

    Grid cells where the first scheme listed for each panel exhibits a lower RMSE than the second scheme are plotted. Colors represent the absolute difference (percent) between the RMSE values of the two schemes. Shown are (a) RMSE (overall), (b) RMSE (lower), and (c) RMSE (upper). For example, the grid points in the left column and top row show the absolute difference in RMSE (overall) between non-percentile-based optimal weighting and equal weighting. Results are for January.

  • Fig. 5.

    Nonexceedance probabilities of monthly precipitation for the model combination projection (black) and projections from 20 ensemble members (blue band) for the period 2000–49 and for the months of January and July. Nonexceedance probabilities are plotted for four NCDC climate regions: the northeastern (NE), central (C), southwestern (SW), and western (W) United States. Observed percentiles for the period 1950–99 are plotted as the red line. The percentile-based optimal combination approach is used.

  • Fig. 6.

    Grid cells exhibiting changes in the (a) 25th and (b) 75th percentile values between the periods 1950–99 and 2000–49. RCP8.5 is used for future projections.

  • Fig. 7.

    The percent difference between model-combined future projections (2000–49) from two approaches, equal weighting and percentile-based optimal weighting, plotted against percentile. Results are shown for four NCDC climate regions and for January and July (as in Fig. 5). The percent difference is calculated as the deviation of the optimal weighting from the equal weighting; an illustrative calculation is sketched after this figure list.
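
The diagnostics summarized in Figs. 5–7 reduce to a few simple empirical calculations: nonexceedance probabilities of a combined precipitation series, changes in the 25th and 75th percentiles between two periods, and the percent deviation of the percentile-based optimal combination from the equal-weighting combination at matching percentiles. The Python sketch below is a minimal illustration of these calculations on synthetic data; the series names and the gamma-distributed placeholders are assumptions for illustration only and do not reproduce the study's actual projections.

```python
import numpy as np

# Minimal sketch of the quantities plotted in Figs. 5-7, using synthetic
# 50-yr monthly precipitation series as stand-ins for the combined
# projections and observations (all names here are hypothetical).
rng = np.random.default_rng(0)
optimal_comb = rng.gamma(2.0, 40.0, size=50)  # percentile-based optimal combination, 2000-49 (mm)
equal_comb = rng.gamma(2.0, 38.0, size=50)    # equal-weighting combination, 2000-49 (mm)
observed = rng.gamma(2.0, 35.0, size=50)      # observed series, 1950-99 (mm)

def nonexceedance(series):
    """Empirical nonexceedance probabilities using a Weibull plotting position."""
    n = len(series)
    return np.sort(series), np.arange(1, n + 1) / (n + 1.0)

# Fig. 5: nonexceedance curves of the combined projection and the observed record
proj_vals, proj_probs = nonexceedance(optimal_comb)
obs_vals, obs_probs = nonexceedance(observed)

# Fig. 6: change in the 25th and 75th percentile values between the two periods
change_25 = np.percentile(optimal_comb, 25) - np.percentile(observed, 25)
change_75 = np.percentile(optimal_comb, 75) - np.percentile(observed, 75)

# Fig. 7: percent difference of the optimal-weighting combination from the
# equal-weighting combination, evaluated at matching percentiles
pctls = np.arange(5, 100, 5)
percent_diff = 100.0 * (np.percentile(optimal_comb, pctls)
                        - np.percentile(equal_comb, pctls)) / np.percentile(equal_comb, pctls)

print(f"Change in 25th/75th percentiles: {change_25:.1f} / {change_75:.1f} mm")
print("Percent difference by percentile:", np.round(percent_diff, 1))
```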
