Future changes in monthly precipitation are typically evaluated by estimating the shift in the long-term mean/variability or based on the change in the marginal distribution. General circulation model (GCM) precipitation projections deviate across various models and emission scenarios and hence provide no consensus on the expected future change. The current study proposes a rank/percentile-based multimodel combination approach to account for the fact that alternate model projections do not share a common time indexing. The approach is evaluated using 10 GCM historical runs for the current period and is validated by comparing with two approaches: equal weighting and a non-percentile-based optimal weighting. The percentile-based optimal combination exhibits lower values of RMSE in estimating precipitation terciles. Future (2000–49) multimodel projections show that January and July precipitation exhibit an increase in simulated monthly extremes (25th and 75th percentiles) over many climate regions of the conterminous United States.
Future projections of the global climate are simulated by general circulation models (GCMs) given representative concentration pathways (RCPs) of carbon emissions (Taylor et al. 2012). Previous studies conclude that global mean surface air temperature has increased over the last century and that observed changes in precipitation are notable for increases in magnitude, trends, and the frequency of extreme events at regional and continental scales (Plummer et al. 1999; Alexander et al. 2006; Qin et al. 2010; IPCC 2013). Alternate GCMs may exhibit similar trends over long periods (e.g., decades), but different initializations, model structures, and parameterizations lead to significant differences in monthly simulated values from multiple models or even from multiple ensemble members of the same model (Johnson et al. 2011; Das Bhowmik et al. 2017). For example, Das Bhowmik et al. (2017) showed that all three GCMs they examined (IPSL, CNRM, and MPI) performed poorly in estimating the cross-correlation between monthly precipitation and monthly average surface temperature, although IPSL performed slightly better than the other two. Future runs of GCMs predict different magnitudes of climate change compared to the twentieth-century climate.
Any structural bias that is present in current-day simulations may differ from the bias present in future projections (Nahar et al. 2017). Model combination studies typically assume that the form of the bias present is stationary. Hawkins and Sutton (2009) showed that model uncertainty is the dominant source of near-term uncertainty. It is common to expect that uncertainty in weather and climate forecasts can be reduced through a multimodel combination that evaluates individual model performance against observed records (Krishnamurti et al. 1999; Mason and Mimmack 2002; Rajagopalan et al. 2002). However, GCM historical runs and future projections under different RCPs do not exhibit temporal correspondence with observed records or with each other. Given this temporal mismatch, it is difficult to combine projections to reduce the uncertainty across models using conventional combination algorithms that have been adopted in developing multimodel weather and climate forecasts (Robertson et al. 2004; Chowdhury and Sharma 2009; Devineni and Sankarasubramanian 2010a,b). This study proposes an algorithm for combining multiple GCMs to develop climate change projections of monthly precipitation with reduced uncertainty. We propose a common rank indexing to relate all model runs, thereby allowing a detailed characterization of the probability distribution of the variables of interest and, in particular, of changes to extreme rainfall in future warmer climates. Our proposed multimodel combination approach shares some similarities with GCM downscaling algorithms (Giorgi et al. 2001; Hay and Clark 2003; Leung et al. 2003; Wood et al. 2004; Wilby et al. 2002; Gangopadhyay et al. 2004; Fowler et al. 2007; Maurer and Hidalgo 2008) in that both try to reduce bias and uncertainty at certain spatial scales, but our methodology is distinct in that we focus on finding the most reliable combinations of multiple GCMs.
This approach has the advantage of leveraging predictions from multiple GCM realizations when developing a composite projection. A detailed discussion of various statistical downscaling approaches can be found in Das Bhowmik et al. (2017).
The easiest approach to reduce uncertainty in climate projections from multiple models is based on equal weighting, which assigns weights to the GCMs as the inverse of the total number of models (Hagedorn et al. 2005). Using a synthetic setup, Weigel et al. (2010) showed that optimal weighting performs better than equal weighting irrespective of the increase or decrease of the joint error fraction (which can be considered as the amount of correlated error) if the weights are properly estimated. Multimodel combination can be performed by incorporating numerous approaches such as simple regression (Krishnamurti et al. 1999), optimal weights based on long-term performance (Rajagopalan et al. 2002), statistical estimation of weights conditioned on the dominant prediction conditions (Devineni and Sankarasubramanian 2010a,b; Li and Sankarasubramanian 2012), dynamic pairwise weighting based on logistic regression (Chowdhury and Sharma 2009), and a Lagrangian approach to incorporate spatial covariance (Khan et al. 2014), to name a few. Multimodel combination approaches based on Bayesian statistics, such as objective Bayesian analysis, were also applied for model weighting (DelSole 2007). Vrugt and Robinson (2007) compared the Bayesian model averaging with the ensemble Kalman filter in the context of probabilistic streamflow forecasting. Models are weighted based on their prior performance (Rajagopalan et al. 2002) to incorporate the temporal variation of the component model skill. A recent study (Broderick et al. 2016) related to the transferability of hydrologic models between contrasting regimes reported that Bayesian model averaging (BMA) and Granger–Ramanathan averaging (GRA) outperformed the use of a simple arithmetic mean (SAM) and Akaike information criteria averaging (AICA). In dynamic model combination (Chowdhury and Sharma 2009; Devineni and Sankarasubramanian 2010a,b), model weights are allowed to change with respect to time. 
The dynamic combination approach has shown superiority over a static model combination in predicting SST and long-range Niño-3.4 indices (Chowdhury and Sharma 2009, 2011). Wasko et al. (2013) improved the spatial prediction of rainfall by incorporating a copula-based combination approach.
The current study aims to extend and modify the optimal model combination approach that has been applied in the context of weather and climate predictions and streamflow forecasting to climate projections. GCMs and regional climate models (RCMs) are the only reliable sources for evaluating long-term changes in extreme events. Earlier studies (van Pelt et al. 2012; Beniston et al. 2007) considered an uncertainty envelope or equal weighting for the high-frequency behaviors of climate models in projecting extremes. A relatively recent study (Sanderson et al. 2015) proposed a method for combining model results into a single or multivariate distribution to account for the large degree of uncertainty across model precipitation amounts. One objective of the current study is to reduce uncertainty in monthly climate change projections by developing a performance-based optimal weighting approach that combines asynchronous observations and model projections. The intent is to analyze the distributions of monthly multimodel projections to identify the shift in extremes in the monthly climate over the conterminous United States (CONUS). It is proposed that the combination weights across multiple runs vary by percentile, depending on each run's ability to predict the observed precipitation at that percentile. In other words, models can be assigned a different weight when ascertaining the higher percentiles of a variable of interest than when ascertaining the lower percentiles. Percentile-based optimal weighting is validated by comparing this approach with equal weighting and non-percentile-based optimal weighting and finally by applying it to future projections. Shifts in climate extremes for the future period relative to the historical period are calculated by considering the 25th and 75th percentile values (or values below the 25th and above the 75th percentiles) of monthly precipitation.
We perform all assessments based on rigorous validation, leading to conclusions that are indicative of how each combination can be expected to perform in the future.
Monthly precipitation data are obtained from 20 ensemble members of 10 GCMs’ historical runs. Details of the GCMs are given in Table 1. Historical runs are cropped over the CONUS and regridded to 1° × 1° grid cells. Twentieth-century control runs are used for development and validation of the model combination approach, and emission scenario RCP8.5 is used for the future projections corresponding to the period 2000–49. The simulated dataset is recentered and rescaled [Eq. (1) in section 3a] based on the standardization procedure described in Mehrotra and Sharma (2012). The GCM historical runs are part of phase 5 of the Coupled Model Intercomparison Project (CMIP5).
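As a sketch of the recentering and rescaling step, the code below matches the historical run's mean and standard deviation to the observed and applies the same transform to the future run so the climate change signal in the moments is preserved. The function name and the simple moment-matching form are illustrative assumptions, not the exact Eq. (1) of Mehrotra and Sharma (2012).

```python
import numpy as np

def standardize_runs(gcm_hist, gcm_future, obs):
    """Recenter and rescale a GCM run so the historical period matches the
    observed mean and standard deviation; the same transform is applied to
    the future run so the change signal in the moments is preserved."""
    mu_g, sd_g = gcm_hist.mean(), gcm_hist.std(ddof=1)
    mu_o, sd_o = obs.mean(), obs.std(ddof=1)
    scale = sd_o / sd_g
    hist_adj = (gcm_hist - mu_g) * scale + mu_o
    fut_adj = (gcm_future - mu_g) * scale + mu_o
    return hist_adj, fut_adj
```

After this step the present-day run has the observed moments exactly, while any shift between the future and historical runs survives (rescaled) in the adjusted series.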
The observed precipitation values at 1° × 1° grid cells are obtained from the United States Bureau of Reclamation (BOR) database (Maurer et al. 2002). We consider the dataset for the period 1950–99 for model development and validation.
Monthly precipitation values from 20 GCM ensemble members and the corresponding observed precipitation values for the period 1950–99 at each grid cell are first converted to the ranked space. The ranked dataset does not represent time dependence but allows a comparison conditioned on the pth percentile. Basically, for a given grid point, we sort precipitation values over T number of years for a given month and for a given model, and then the process is repeated for 20 members. As the aim of our study is to combine projections across multiple models (GCMs) to formulate a single probabilistic representation of the variable of interest, the approach adopted consists of two key steps. In the first step, a sample representing model deviations from the observed for a given percentile across all models of interest is constructed. In the second step, a combination algorithm that takes into account the joint dependence exhibited in that sample across all models is used for developing multimodel projections. The result is an optimally weighted combination that provides a unique multimodel projection for that percentile.
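The conversion to rank space described above can be sketched as follows; the array layout (models × years) and the function name are illustrative assumptions.

```python
import numpy as np

def percentile_deviations(model_runs, obs):
    """Model deviations from the observed, percentile by percentile.

    model_runs: (M, T) monthly values for one grid cell, one calendar
    month, and M runs; obs: (T,) observed values for the same cell and
    month.  Sorting each series removes calendar correspondence and
    pairs values by rank instead."""
    sorted_models = np.sort(model_runs, axis=1)  # each run sorted ascending
    sorted_obs = np.sort(obs)                    # observations sorted ascending
    return sorted_models - sorted_obs            # (M, T) rank-space deviations
```

A run whose sorted values coincide with the sorted observations has zero deviation at every percentile, regardless of the years in which the values occurred.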
To construct the sample representing each percentile, a neighbor matrix having k neighbors representing the model deviations for the pth percentile is constructed. Model deviations vary with the choice of k neighbors, so we refer to the model deviation conditioned on the choice of k as the “sample.” Model weights are then calculated by minimizing the expected forecast error variance with a constraint that forces the model weights to add up to one to ensure unbiased combined projections (Timmermann 2006). A weighted average of the model runs/projections for that percentile then forms the combined modeled output.
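Minimizing the expected error variance wᵀΣw subject to the weights summing to one has the familiar closed-form solution w = Σ⁻¹1 / (1ᵀΣ⁻¹1) (cf. Timmermann 2006). A minimal sketch, leaving aside any non-negativity constraint the full procedure may impose:

```python
import numpy as np

def optimal_weights(sigma):
    """Minimum-variance combination weights for an M x M error
    variance-covariance matrix sigma, constrained to sum to one."""
    ones = np.ones(sigma.shape[0])
    x = np.linalg.solve(sigma, ones)  # sigma^{-1} 1
    return x / (ones @ x)             # normalize so weights sum to one
```

For a diagonal Σ this reduces to weights inversely proportional to each model's error variance, which matches the intuition that better-performing runs receive more weight.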
In the results that follow, the percentile-based optimal weighting scheme is compared with a non-percentile-based optimal weighting scheme (single optimal weight across all percentiles) and an equal weighting scheme (equal preference to all models). In these benchmark methods, weights are not conditioned on percentiles. For the equal weighting approach, model weights are inversely related to the number of GCMs, and for the non-percentile-based optimal weighting scheme we use the entire rank space to minimize the mean square error. During the specification of the model combination approach for a particular grid cell, precipitation projections and observed values from four nearby grid cells are included to create a sufficiently large sample size that stabilizes the estimation of the error variance–covariance matrix (Clemen and Winkler 1986) for the model combination and also to ensure a smoother representation of the combined output across space. We incorporated threefold cross-validation, which divides the data into three blocks and leaves out each block one at a time to ascertain a cross-validated outcome while the remaining blocks are used for the model development. To ensure that the present-day GCM runs have the same mean and standard deviation as the observed dataset (and that the future projections retain the change in these moments), we first recenter and rescale the GCM runs using the present-day climate. Thus, the present-day GCM run has the same mean and standard deviation as the observed, and the future GCM projections produce the climate change signal in the moments (Mehrotra and Sharma 2012). Model performance is assessed based on root-mean-square error (RMSE) for each tercile of the combined data. A detailed step-by-step discussion of the three approaches is presented below.
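The threefold cross-validation can be read as holding out each contiguous third of the record in turn and calibrating on the remaining two-thirds; the index bookkeeping below is an assumed, minimal reading of that design.

```python
import numpy as np

def threefold_splits(n_years):
    """Three calibration/validation splits: each contiguous third of the
    record is held out once, and the other two-thirds form the
    calibration block."""
    idx = np.arange(n_years)
    third = n_years // 3
    blocks = [idx[:third], idx[third:2 * third], idx[2 * third:]]
    splits = []
    for held_out in range(3):
        val = blocks[held_out]
        cal = np.concatenate([blocks[b] for b in range(3) if b != held_out])
        splits.append((cal, val))
    return splits
```

The three validation blocks are nonoverlapping and together cover the whole record, so skill estimates can be averaged across the three folds.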
a. Multimodel algorithms
In this study, we consider a total of 10 GCMs resulting in a total of 20 ensemble members. For the purpose of the multimodel algorithm, each member is considered as a model, thereby providing M = 20 runs.
1) Equal weighting
Extract the calibration dataset for the GCM and observed precipitation, X and Y, respectively, at the time step t (t = 1, …, T), for a grid point i (i = 1, …, I), month j (j = 1, …, 12), and for the model m (m = 1, …, M).
Sort the vectors X and Y separately in ascending order. The terms X(p) and Y(p) are the GCM and the observed responses for the pth percentile (p = 0, …, 100), respectively.
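With the ranked vectors in hand, the equal weighting combination is simply the mean across runs at each percentile; a sketch under the same (M, T) array layout assumed earlier:

```python
import numpy as np

def equal_weight_combine(model_runs):
    """Equal-weighting combination in rank space: each of the M runs
    receives weight 1/M at every percentile."""
    sorted_runs = np.sort(model_runs, axis=1)  # (M, T), each row sorted
    return sorted_runs.mean(axis=0)            # combined percentile values
```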
2) Non-percentile-based optimal weighting
Repeat steps 1–4 from the previous algorithm to extract, rescale, recenter, and sort the calibration dataset. The final product is the GCM response X(p) and the observed response Y(p) in the percentile space.
Estimate the error variance–covariance matrix Σ.
Solve for the model weights w = (w1, …, wM) by minimizing the error variance wᵀΣw subject to the equality constraint in Eq. (6) that the weights sum to one; wm is the model weight for the model m.
3) Percentile-based optimal weighting
Extract the calibration dataset for the GCM and the observed precipitation, X and Y, respectively, at the time step t (t = 1, …, T), for grid point i (i = 1, …, I), month j (j = 1, …, 12), and model m (m = 1, …, M).
Enhance the size of X and Y by including n grid points (we considered n = 4) from nearby grid cells. These cells are selected based on geodetic distances. We do not worry about the time correspondence between the observed and the simulated, as we will be calculating the weights in percentile space. The dimensions of X and Y are the same as in the previous algorithms [T, 1], but T is now n times larger than earlier.
Sort the vectors X and Y separately in ascending order. The terms X(p) and Y(p) are the GCM and the observed responses for the pth percentile (p = 0, …, 100), respectively.
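The percentile-conditioned weighting can be sketched by taking, for a given percentile p, the k rank-space deviations nearest to p as the sample, estimating the error variance–covariance matrix from that sample, and solving the sum-to-one minimum-variance weights. The windowing and function names are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def percentile_weights(model_runs, obs, p, k):
    """Combination weights conditioned on the pth percentile.

    model_runs: (M, T) values (after pooling nearby cells); obs: (T,).
    The k deviations nearest to percentile p form the neighbour matrix
    from which the M x M error covariance is estimated."""
    M, T = model_runs.shape
    dev = np.sort(model_runs, axis=1) - np.sort(obs)  # (M, T) rank-space deviations
    centre = int(round(p / 100.0 * (T - 1)))          # rank index of the pth percentile
    lo = min(max(centre - k // 2, 0), T - k)          # clamp window inside [0, T)
    sample = dev[:, lo:lo + k]                        # (M, k) neighbour matrix
    sigma = np.cov(sample)                            # M x M covariance of deviations
    ones = np.ones(M)
    x = np.linalg.solve(sigma, ones)                  # sigma^{-1} 1
    return x / (ones @ x)                             # weights summing to one
```

Repeating this for p = 0, …, 100 yields the percentile-varying weight curves of the kind shown later in Fig. 3.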
b. Application on validation dataset
This subsection discusses the extraction and model combination of validation data. Model weights are obtained from the model calibration and applied directly on the validation dataset or on future projections. We compare the performance of three multimodel combination approaches in reducing model uncertainty for the validation dataset, incorporating the observed monthly precipitation values. We apply the same weights on future projections for which the observed values are not available for obvious reasons. We first obtained the GCM projections and observed precipitation for the model development (including calibration and threefold validation) period (1950–99). Then, GCM projections were centered and scaled using the observed moments from the calibration dataset. Thus, the process retains the nonstationarity involved in the future projections. The following steps can be applied to future and validation projections of GCMs.
Extract the validation dataset for the GCM and the observed precipitation, X and Y, respectively, at the time step t (t = 1, …, T2), for grid point i (i = 1, …, I), month j (j = 1, …, 12), and model m (m = 1, …, M); X and Y are the vectors of the model projections and the observed values, respectively. First, apply the bias correction of Eq. (1).
Sort the vectors X and Y in ascending order. The terms X(p) and Y(p) are the GCM and the observed responses for the pth percentile (p = 0, …, 100), respectively.
Repeat steps 1 and 2 for all models (m = 1, …, M).
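Applying the calibrated weights to a validation or future period then reduces to a weighted sum across the sorted runs at each percentile; a sketch (the (T, M) layout for the weight table is an assumption):

```python
import numpy as np

def apply_weights(weights, model_runs):
    """Combine sorted model runs with per-percentile weights.

    weights: (T, M) calibrated weights, row p holding the weights for
    the pth rank; model_runs: (M, T) raw values for the validation or
    future period (no observations needed)."""
    sorted_runs = np.sort(model_runs, axis=1)       # (M, T), rows sorted
    # combined value at rank p: sum over m of weights[p, m] * sorted_runs[m, p]
    return np.einsum('pm,mp->p', weights, sorted_runs)
```

With a constant 1/M weight table this reproduces the equal weighting combination, which makes it easy to check the bookkeeping.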
c. Model development
In this section, we discuss the details of model development and the application of the model weights to future projections. Details of the model development and validation steps are provided in Fig. 1. Both the GCM and the observed dataset from the current period are considered for model development and validation. We divided the temporal dimension of monthly precipitation at grid point i and for month j into two parts, T1 and T2, where T1 and T2 cumulatively represent the entire observed time period T; T1 is used for model development, while T2 is left for validation. The validation process that we employ is called threefold validation (Nguyen et al. 2016). We used two-thirds of the dataset for model development and the rest for validation. T1, the calibration dataset, was chosen in three alternative ways, using the first, last, and middle two-thirds of the entire dataset. The validation can thus be executed for the same grid point and month with three nonoverlapping datasets.
Given that we have 50 years of data, calibration blocks consist of 34 years of data for the equal weighting and non-percentile-based optimal weighting. However, for the percentile-based optimal weighting, we have increased the number of grid points to 200 by sampling neighboring grid points. Out of the 200 grid points, a total of 136 observations are used for the calibration in the latter case. Once the calibration and validation datasets are identified, we applied the three combination approaches to the dataset and determined the weights. The weights were retained for application to the validation data and future projections. For a particular approach, we applied the model weights on each validation block. To compare the performance of the three approaches, we divided the validation block into upper, lower, and middle terciles for evaluation. Monthly precipitation values conditioned from the zeroth to 33rd and from the 66th to 100th percentiles represent the lower tercile and upper tercile, respectively, and the entire rank space is designated as the overall percentile. RMSE values for the validation approach are calculated for each tercile. The idea is to understand the efficiency of a model combination approach in projecting monthly precipitation values conditioned on various percentiles within the validation block. RMSE values for a particular tercile are averaged across the three validation datasets.
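The tercile-wise evaluation can be sketched as follows; the tercile cut points and dictionary layout are illustrative assumptions.

```python
import numpy as np

def tercile_rmse(combined, obs):
    """RMSE of a combined projection against the sorted observations,
    overall and within the lower (<= 33rd percentile) and upper
    (>= 66th percentile) terciles of the rank space."""
    obs_sorted = np.sort(obs)
    T = obs_sorted.size
    cut1, cut2 = T // 3, (2 * T) // 3          # tercile boundaries in rank space
    err = np.asarray(combined) - obs_sorted
    rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
    return {"overall": rmse(err),
            "lower": rmse(err[:cut1]),
            "upper": rmse(err[cut2:])}
```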
d. Sensitivity analysis
A fixed value of k, which is the number of neighbors used to stabilize the percentile estimation based on the error variance–covariance matrix, is chosen by performing a detailed sensitivity analysis. Figure 2a shows the fraction of total grid points where percentile-based optimal weighting performs better than equal weighting for varying values of k. The analysis is conducted with the calibration dataset. We calculated the RMSEs in estimating the upper, lower, and overall terciles of the calibration dataset. As k increases beyond 50, optimal weighting performs better than equal weighting in reducing the uncertainty for all terciles. Optimal weighting exhibits better performance in projecting the upper extremes of precipitation (percentiles > 66) than the lower extremes (percentiles < 33). Figure 2b shows the model weights, averaged over all percentiles and all grid points over the CONUS, for different k values. Model weights stabilize as k approaches 50, and one or two GCMs emerge as apparently superior to the others (e.g., CESM-CAM5 in Fig. 2b). We decided to employ a k value of 60 for the subsequent analysis. An appropriately large number (k = 60), greater than the number of observations at a grid cell, stabilizes the matrix, which justifies our decision to include nearby grid points. The number is similar to that reported by Khan et al. (2014) in reducing the uncertainty in sea surface temperature forecasts.
a. Distribution of optimal weights
Three schemes (percentile-based optimal weighting, non-percentile-based optimal weighting, and equal weighting) are compared based on the reduction in RMSE values during validation. RMSE is calculated for three subsets of the validation dataset: all percentiles and the upper and lower terciles [henceforth referred to as RMSE (overall/upper/lower)]. The percentile-based GCM weights may either vary or remain constant across percentiles. Figure 3 shows GCM weights across percentiles over three grid cells from the model development period. The larger panels are for the percentile-based scheme, whereas the smaller panels are for the non-percentile-based scheme. The weight for each model is equal to 0.05 in the equal weighting approach, as the total number of ensemble members is 20. However, GCM weights for the two optimal combination schemes differ across grid cells and months. Representative grid cells are randomly selected from the eastern, western, and central parts of the CONUS, and GCM weights are plotted for the month of January.
In Fig. 3, ACCESS1.3 and CNRM-CM5 perform better over grid cell 1 with the percentile-based scheme, but for the non-percentile-based scheme ACCESS1.3 and CCSM5 are superior to the other GCMs in the same grid cell. Similar observations hold for the other two grid cells. We conclude that the GCM weights conditioned on percentiles differ from the GCM weights under the non-percentile-based approach, implying different capabilities of alternate GCMs in predicting precipitation of different magnitudes. Table 2 provides the RMSEs of the three multimodel combination schemes for the three grid cells for the validation period. The percentile- and non-percentile-based optimal weighting schemes both perform better than the equal weighting approach. For grid cell 1, percentile-based optimal weighting has a higher RMSE (upper) value than the non-percentile-based scheme, whereas for grid cell 2 the non-percentile-based scheme performs better with a reduced value of RMSE (lower). For all other cases, percentile-based optimal weighting has a lower RMSE value than non-percentile-based optimal weighting.
b. Performance under validation
We extended the comparison between the three schemes to the whole CONUS by calculating RMSEs for all grid points. Figure 4 shows the grid cells that experience improvements from incorporating optimal weighting for January. The absolute difference of RMSEs between two schemes (Figs. 4a–c indicate overall, lower, and upper, respectively) is marked with color. Optimal weighting schemes exhibit higher differences in RMSE for the lowest tercile than for the other terciles. Equal weighting can reduce the uncertainty in the median of the projection; hence the reduction in RMSE (overall) is not as high as the reductions illustrated in Figs. 4b and 4c. Grid cells with improved RMSE values are scattered all over the CONUS without forming significant clusters. However, the RMSE is lower over the southern and western United States. Comparing the two optimal combination methods, the percentile-based optimal weighting scheme performs better at most of the grid points for RMSE (overall) and RMSE (lower), whereas non-percentile-based weighting shows superiority over the percentile-based scheme for RMSE (upper). We compared the three approaches with each other by calculating the fraction of total grid points at which one approach exhibits a lower value of RMSE than the others.
The results for January are summarized in Table 3. Optimal model combination schemes show better performance than the equal weighting for more than 55% of the grid cells. The non-percentile-based optimal weighting scheme shows improved performance compared to equal weighting over 56% of the total grid cells. However, the differences in performance between percentile- and non-percentile-based optimal schemes are slim. Non-percentile-based schemes perform better over 60% of the grid cells in reducing uncertainty in projecting the upper terciles. However, the percentile-based scheme exhibits either the same or an improved performance in the case of overall or lower terciles. For the upper terciles, the non-percentile-based scheme performs better than the percentile-based scheme because the percentile-based model weights reach a limiting state when the weights stop changing across percentiles. The sampling of neighboring grid points to attain a k value for the analysis also has the potential to reduce the variance of precipitation values within the upper terciles.
c. Multimodel future projections
One major advantage of the percentile-based optimal weighting scheme is that weights do not vary depending on the time period of the prediction considered. GCM weights obtained from the calibration period, similar to the validation period, are next applied on the rank space of GCM projections for 2000–49 over the CONUS.
Percentile values of future monthly precipitation for January and July are averaged over the selected NCDC (now known as NCEI) climate regions (Fig. 5). NCDC climate regions are used for understanding current climate anomalies in a historical perspective (Karl and Koss 1984). The confidence bound represents the 20 ensemble members, and the red and black lines are the observed (1950–99) and multimodel combined projections, respectively. Multimodel combined percentiles fall within the confidence bound formed by the combination of the 20 GCM members. During January, an upward shift in the winter precipitation compared to the earlier period confirms the increase in the extremes over the selected climate regions. Over the central United States, changes in precipitation during July are not very high compared to the second half of the twentieth century. Monthly precipitation for all percentiles increases over the northeastern United States during July and January. However, the amount of increase during January is higher for the middle terciles. Both months experience a shift in monthly precipitation for the upper terciles over the southwest United States. Over the western United States, the monthly precipitation shift is higher during January than July.
Figures 6a and 6b show changes in the 25th and 75th percentiles between the two periods, 1950–99 and 2000–49. Winter extreme precipitation (75th percentile and higher) increases by more than 50 mm month−1 compared to the earlier period over the Frost Belt of the United States. During winter, both the 25th and 75th percentiles of precipitation increase. Results for July are mixed, typically showing declines in precipitation extremes of 20–30 mm month−1. However, the central United States exhibits an increase in extreme precipitation for July.
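The percentile changes mapped in Fig. 6 amount to differencing the 25th and 75th percentiles of the combined series between the two periods; a minimal sketch:

```python
import numpy as np

def percentile_shift(hist, future, percentiles=(25, 75)):
    """Change (future minus historical) in the given percentiles of a
    combined monthly precipitation series, in the series' units
    (here mm/month)."""
    return {p: float(np.percentile(future, p) - np.percentile(hist, p))
            for p in percentiles}
```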
We attempted to understand the extent of the difference between equal weighting and optimal weighting (percentile based) in projecting future monthly precipitation. The difference is calculated by subtracting the equal weighting value from the percentile-based optimal weighting, and it is expressed as the percentage change on the equally combined projection. Results for four regions and two months are shown in Fig. 7.
Multimodel projected monthly precipitation amounts differ between the equal weighting and the percentile-based optimal weighting by 10%–15%. During both months, the difference is typically higher for the lower terciles. The model-combined lower tercile values from the optimal weighting are greater than those of the equal weighting during January by 8% over the central and northeast United States. In contrast, the underestimation of the lower terciles by the optimal weighting during July is high over the Southwest and the West. However, for the upper terciles, the differences between the two approaches are slim.
A multimodel combination approach based on GCM performance is proposed to reduce the uncertainty in climate extreme projections. It is hypothesized that the nature of the model combination should vary depending on the relative magnitude of modeled precipitation, instead of assuming that the combination is a function of the number of models alone. Model weights, calculated from the approach, are conditioned on percentiles. Different models receive different weights at different percentiles when assessed in a validation setting, implying that the advantages of each model in the tail of the distribution are better exploited by this approach. The approach is applied to precipitation projections of 20 GCM ensemble members over the CONUS. On each grid point, the approach is calibrated by allowing neighboring grid points and validated by comparing this approach with the equal weighting and non-percentile-based optimal combination approaches. The study shows an improvement in the percentile-based approach over the other two and markedly different weights (equivalent here to model preferences) depending on whether low or high percentiles are of interest.
Optimal combination techniques perform better than the equal weighting approach for error measures across all terciles, whereas the equal weighting scheme inherently performs better in projecting the medians in precipitation projections. The percentile-based optimal weighting scheme is found to improve performance (in validation) over the non-percentile-based varying scheme comprehensively across the lower and middle terciles, with potential for greater improvements with the availability of additional data or more sophisticated procedures for pooling across space.
Finally, the model weights calculated during calibration are applied to the future projections for the period 2000–49. The central, southwestern, and western United States experience increases in the upper percentiles of monthly precipitation during the winter months. Multimodel projected winter precipitation exhibits an expected increase of up to 50 mm month−1, but expected changes in precipitation for the summer and fall months are small. The changes in extreme monthly precipitation reported by the current study cannot be verified because of the absence of observational data for the future period. However, the reported future changes are reliable insofar as the optimal model combination is substantially validated with a historical dataset. Further, the future changes projected by the optimal combination are compared with the changes exhibited by equal weighting. The differences between the equally combined and optimally combined model projected values are typically small (within ±10%) over different climate regions of the United States.
By design, the percentile-based approach for model combination should outperform the non-percentile-based approaches when a clear preference exists for one or more models at one end of the probability distribution. Modeled precipitation attributes are known to be sensitive to both the physics parameterizations used and the solution by which the model balances fluxes (Johnson et al. 2011). Furthermore, certain models are known to overestimate low rainfall compared to others (the so-called drizzle effect; Demirel and Moradkhani 2016), shifting the probability distribution of the simulated precipitation toward the lower end. Consequently, there is reason to suspect that the percentile-based approach for model combination has merit, as long as the models being combined have varying abilities and characteristics (Hagedorn et al. 2005; Gleckler et al. 2008; Reichler and Kim 2008; Knutti et al. 2013), exhibiting biases that differ from each other in magnitude, form, and spatiotemporal extent (Knutti et al. 2010).
No single model emerged with dominance over other models at most of the grid points over the CONUS, which restricts us from interpreting the physical significance of the model weights. However, BCC-CSM and ACCESS emerge as two dominant GCMs at many grid points. The proposed optimal combination approach treats ensemble members individually, which increases the number of members available for model combination. The availability of more ensemble members increases (reduces) the weight placed on the best (worst) performing members under optimal combination, thereby reducing the estimates of internal variability associated with the climatic attribute (i.e., unpredictable noise). This is consistent with Weigel et al. (2010; see page 4186 under remark 3 in section 3c therein), who discussed how increasing the number of ensemble members could reduce the unpredictable noise, with optimal combination giving higher weights to the best performing models. Steinschneider et al. (2015) reported that ignoring intermodel correlation on a regional scale can underestimate the variance of climate change. The current study converts the projected/simulated monthly precipitation into the rank space, and hence the intermodel correlation will always be one. We expected that the proposed approach should distinguish correlated models (in the original space) by assigning higher weights to one of the correlated models while leaving the rest with minimal or no weights. Many studies have used CMIP5 projections to analyze future changes in precipitation at different temporal and spatial scales (e.g., Chadwick et al. 2013; Kumar et al. 2013; Sillmann et al. 2013; Mehran et al. 2014). Sillmann et al. (2013) also found that BCC-CSM and ACCESS, the two models that exhibit superiority at multiple grid points in our current study, exhibited consistency in estimating precipitation indices across reanalysis datasets.
There is spatial heterogeneity in the reliability of GCMs for predicting climate extremes over the CONUS. However, it was beyond the scope of the current study to evaluate the reasons behind the reliable predictions from BCC-CSM and ACCESS. While the current study used the continental United States as the focus domain, one should expect the advantages of the percentile-based approach to become even more evident as the assessment is extended to other regions. Similarly, given the added complexity that a percentile-based optimization brings into the equation, the application of longer observational datasets or better designed ways of pooling information across space should make the estimation of the covariance matrices more stable, imparting greater consistency and accuracy to the resulting projections.
This research is supported in part by the National Science Foundation under Grants CBET-1204368 and CBET-0954405. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We acknowledge the World Climate Research Programme’s Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modeling groups (listed in Table 1) for producing and making available their model output. For CMIP the U.S. Department of Energy’s Program for Climate Model Diagnosis and Intercomparison provides coordinating support and led the development of software infrastructure in partnership with the Global Organization for Earth System Science Portals. The “Downscaled CMIP3 and CMIP5 Climate and Hydrology Projections” archive is available at http://gdo-dcp.ucllnl.org/downscaled_cmip_projections/.