Quantitative assessment of climate change risk requires a method for constructing probabilistic time series of changes in physical climate parameters. Here, two such methods, surrogate/model mixed ensemble (SMME) and Monte Carlo pattern/residual (MCPR), are developed and then are applied to construct joint probability density functions (PDFs) of temperature and precipitation change over the twenty-first century for every county in the United States. Both methods produce likely (67% probability) temperature and precipitation projections that are consistent with the Intergovernmental Panel on Climate Change’s interpretation of an equal-weighted Coupled Model Intercomparison Project phase 5 (CMIP5) ensemble but also provide full PDFs that include tail estimates. For example, both methods indicate that, under “Representative Concentration Pathway” 8.5, there is a 5% chance that the contiguous United States could warm by at least 8°C between 1981–2010 and 2080–99. Variance decomposition of SMME and MCPR projections indicates that background variability dominates uncertainty in the early twenty-first century whereas forcing-driven changes emerge in the second half of the twenty-first century. By separating CMIP5 projections into unforced and forced components using linear regression, these methods generate estimates of unforced variability from existing CMIP5 projections without requiring the computationally expensive use of multiple realizations of a single GCM.
The risk of an adverse event is characterized by its probability and its consequences (Kaplan and Garrick 1981). Risk analysis thus requires consideration of the probabilities and consequences of as full a range of outcomes as possible, including “tail risks” that have low probability but high consequence. For assessments of the local and regional risks of climate change, this requirement poses two major challenges. First, ensembles of coupled atmosphere–ocean general circulation models (GCMs) and Earth system models (ESMs), such as those in the archive for phase 5 of the Coupled Model Intercomparison Project (CMIP5; Taylor et al. 2012), are not probability distributions and were not designed to consider all sources of projection uncertainty. CMIP5 model ensembles are “ensembles of opportunity,” arbitrarily compiled on the basis of modeling-center participation. Sampling from such a distribution by assigning equal probability to all models may therefore yield a biased outcome (Tebaldi and Knutti 2007). Second, GCMs and ESMs may underestimate the probability of extreme climate outcomes. For example, the range of equilibrium climate sensitivity (ECS) in CMIP5 is 2.1°–4.7°C per doubling of carbon dioxide (CO2) concentrations (Flato et al. 2013, their Table 9.5), whereas observational and other non-GCM constraints allow ~17% probability of values exceeding 4.5°C (Collins et al. 2013). Simply weighting individual GCMs in a multimodel ensemble will not produce such extreme behavior if it is not simulated. Quantitative risk analysis that leverages the detailed physical projections that are produced by GCMs therefore requires methods that 1) assign probability weights to projected changes and 2) account for tail risks that are not captured by the physical models.
In this study, we develop two such methods and demonstrate them by producing county-level projections of twenty-first-century changes in temperature and precipitation in the United States. The first method, surrogate/model mixed ensemble (SMME), uses probabilistic simple climate model (SCM) projections of global mean temperature change to weight GCM output and to inform the construction of model surrogates to cover the tails of the SCM probability distribution that are missing from the GCM ensemble. The second method, Monte Carlo pattern/residual (MCPR), decomposes GCM output into forced climate change and unforced climate variability, uses SCM temperature projections to scale patterns of forced change, and then adds unforced variability. The SMME projections presented here were recently applied in a quantitative analysis of some of the economic risks that climate change poses to the United States (Houser et al. 2015; Rasmussen and Kopp 2015).
Perturbed-physics ensembles (e.g., Stainforth et al. 2005) can produce PDFs of future climate by sampling projection uncertainty originating from model parameters, but this approach requires enormous computing resources. SCMs [e.g., the Model for the Assessment of Greenhouse Gas-Induced Climate Change (MAGICC; Meinshausen et al. 2011a)], however, can be run in a probabilistic fashion on a desktop computer, sampling the range of parametric uncertainty consistent with both historical observations and expert judgment of parameters such as climate sensitivity. In addition, MAGICC has been shown to emulate well the global mean temperature from GCMs over multiple emissions scenarios (e.g., Rogelj et al. 2012), ensuring that SCM-generated PDFs encompass both the spread of results for key variables in the CMIP5 archive and global mean temperature pathways not simulated in complex models.
Model surrogates used to cover the tails of the PDF must spatially resolve local projections of climate change under global temperature pathways that are not present in GCMs. Pattern scaling applies a linear relationship between changes in local climate variables and coincident changes in global temperature (i.e., patterns) produced by GCMs with a scalar (time-evolving global mean temperature) to generate projections under alternative global temperature pathways that would otherwise require a GCM to simulate them (Santer et al. 1990; Mitchell 2003; Moss et al. 2010). Moreover, the same linear regression used for pattern scaling can facilitate uncertainty quantification. If projections from a GCM are considered as the sum of forced and unforced climate variability, linear regression can disentangle these components (e.g., Sutton et al. 2015), with the forced signal estimated as the linear trend and the residuals representing a first-order approximation of unforced variability. Whereas conventional pattern-scaling approaches discard the latter, because they are assumed to be uncorrelated with global mean temperature, we retain these to assess the projection uncertainty associated with unforced variability and to compare with estimates from computationally expensive multimember initial-condition ensembles (e.g., Kay et al. 2015; Deser et al. 2014).
In section 2 of this paper, we first present an a priori comparison of the approaches and then detail the methods. In section 3, we identify sources of agreement and disagreement for temperature and precipitation results from an equal-weighted GCM ensemble, SMME projections, and MCPR projections and examine their uncertainties. In section 4, we consider the implications of these comparisons for the application of the two methods. We summarize the main findings and state conclusions in section 5. Additional tables and figures and further details on methods are available in the online supplement to this article. All daily projections compiled in this analysis are freely available online (http://dx.doi.org/doi:10.7282/T3SF2Z93).
General overviews of both probabilistic methods are shown in Fig. 1. In each case, we start with an estimated probability distribution of global mean temperatures over time from an SCM. For the SMME method (Fig. 1a), we use SCM projections of temperature change over the twenty-first century to weight GCM projections of monthly temperature and precipitation that have been bias corrected and downscaled using the bias-corrected spatial disaggregation (BCSD; see Appendix A that is available in the online supplemental material) method (Brekke et al. 2014) and “surrogate” models that are employed to ensure that the tails of the probability distribution are represented. For the MCPR method (Fig. 1b), the pathways of temperature change projected by the SCM are combined with randomly selected patterns of forced change and residuals of unforced variability from the downscaled CMIP5 models.
In an a priori comparison of the two approaches, we note three potentially important differences between SMME and MCPR. First, within the range of global temperatures for which CMIP5 output is available, the SMME approach allows for more-complex, nonlinear relationships between global temperature and regional forced change than does the MCPR method, which assumes a constant relationship reflected by the patterns. Second, the patterns and residuals employed in the two approaches are selected differently. The SMME method requires ad hoc selection of the patterns used to create surrogate models, whereas MCPR applies a consistent algorithmic method to generate all output. Furthermore, the SMME method retains a pairing between patterns and the residual, but the MCPR approach assumes that patterns and residuals are statistically independent of one another, which is unlikely to be strictly true. The MCPR method assumes that all patterns and residuals are equally likely. In the SMME technique, the patterns and residuals of models associated with higher-probability global temperature projections have greater weight. Third, the SMME method uses the SCM global mean temperature change for 2080–99 as the target for the probability distribution but may deviate from the SCM distribution at other time points. For the long-term change in global mean temperature, the MCPR approach will always match the SCM distribution because all patterns perfectly track a specific quantile of the SCM global mean temperature.
a. Concentration pathways
We incorporate radiative forcing projections from all four “Representative Concentration Pathways” (RCPs): RCP 2.6, RCP 4.5, RCP 6.0, and RCP 8.5 (van Vuuren et al. 2011). Each RCP represents a greenhouse gas concentration pathway and does not necessarily reflect socioeconomic and/or policy scenarios. A set of socioeconomic projections, unaccompanied by climate policy [the “Shared Socioeconomic Pathways” (SSPs)], has recently been constructed, however, and the radiative forcings of these no-policy projections can be compared with those in the RCPs (O’Neill et al. 2014, 2016; Riahi 2013). The lowest-emissions, “sustainable growth” SSP (SSP 1) has radiative forcing that is comparable to RCP 6.0, and the highest SSP is comparable to or slightly higher than RCP 8.5, which can be interpreted as a high-emissions, business-as-usual scenario. RCP 4.5 is consistent with moderate greenhouse gas emission reductions, and RCP 2.6 is a strong mitigation scenario. Notably, RCP 6.0 has the second lowest CO2-equivalent emissions total prior to 2050 (Meinshausen et al. 2011b, their Fig. 3g). As a consequence, mean global temperature projections from RCP 6.0 do not exceed those of RCP 4.5 until the third quarter of the twenty-first century.
b. Global mean temperature
Global mean temperature projections were produced as described in Rasmussen and Kopp (2015), and the description herein is modified from that work. Projections of global mean temperature for the four RCPs are calculated using MAGICC6 (Meinshausen et al. 2011a) in probabilistic mode. MAGICC6 is an SCM that represents hemispherically averaged atmosphere and ocean temperature and the globally averaged carbon cycle. MAGICC6 does not simulate internal climate variability or precipitation, both of which require more complex models. The distribution of input parameters for MAGICC6 that we employ has been constructed from a Bayesian analysis that is based upon historical observations of hemispheric land and ocean surface air temperature, ocean heat content, estimates of radiative forcing (Meinshausen et al. 2009), and the ECS probability distribution from the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) (Collins et al. 2013) (see Fig. A1 in the online supplemental material). The probability distribution of climate sensitivity from AR5 is based on several lines of information. Evidence from observational, paleoclimatic, and feedback analyses indicates 5th, 17th, and 83rd percentiles of 1.0°, 1.5°, and 4.5°C, respectively. In addition, evidence from climate models suggests a 90th percentile of 6.0°C (Collins et al. 2013). The differences in climate sensitivity between MAGICC6 and AR5 in part reflect sampling and the constraints needed to fit historical observations within the MAGICC6 model structure. The tails of the global mean temperature distribution are vulnerable to extreme scenarios produced by MAGICC6 and are not robust. Extreme outcomes, such as the 99th percentile, exceed the capabilities of the simple model and are not presented.
For each RCP, we used 600 model runs of MAGICC6. The 5th, 17th, 83rd, and 90th percentiles of ECS for these 600 runs are 1.5°, 1.6°, 4.9°, and 5.9°C per CO2 doubling, respectively. From the MAGICC6 projections, RCP 2.6, RCP 4.5, RCP 6.0, and RCP 8.5 yield likely (67% probability) global mean temperature increases in 2080–99 above 1981–2010 levels of 0.5°–1.4°, 1.1°–2.5°, 1.5°–3.0°, and 2.6°–4.9°C, respectively (see Table A1 in the online supplemental material). Because of underlying structural uncertainties in the simple climate model, a sample size beyond the 600 runs does not yield much additional precision (Fig. A2 in the online supplemental material).
c. Pattern fitting
The pattern-fitting method is described in Rasmussen and Kopp (2015), and the description in this paragraph parallels that therein. Assuming that forced climate change can be approximated as linear in the long-term (30 yr) running average of global mean temperature, for each CMIP5 model and scenario i and at each station j, we fit the deviation from the 1981–2010 reference levels for seasonal temperature and precipitation to the linear model

y_{i,j}(t) = k_{i,j} ΔT̄_i(t) + b_{i,j} + ε_{i,j}(t),    (1)

following Rasmussen and Kopp (2015) and Mitchell (2003). Here, ΔT̄_i(t) is the running-average change in global mean temperature relative to the reference period (1981–2010), k_{i,j} is the estimated seasonal pattern, k_{i,j}ΔT̄_i(t) is the estimated forced climate change, b_{i,j} is the observed historical mean, and ε_{i,j}(t) is an estimated temporal pattern of unforced variability. As an example, Fig. 2 shows a regression for the GFDL CM3 (Griffies et al. 2011; model acronym definitions are available at http://www.ametsoc.org/PubsAcronymList) for the grid cell containing New York, New York, for both summertime monthly mean temperature and precipitation rate (RCP 8.5). For local precipitation patterns, unforced variability is greater, and there is a weaker correlation with global mean temperature.
We use a single realization from each CMIP5 model; note that, for models for which multiple realizations are available, fitting the output from additional realizations could more tightly constrain the separation into forced changes and unforced variability and could allow for alternative approaches in which the estimated pattern is not constant with temperature. Other approaches might also include additional covariates, such as aerosol emissions, that can modify the patterns (e.g., Frieler et al. 2012). Maps of each model’s annually derived temperature and precipitation patterns for the contiguous United States (CONUS) are shown in the online supplemental material (Figs. A3 and A4). For temperature, most models have similar patterns. Larger intermodel differences for precipitation have been suggested to originate from large background variations in precipitation that mask the forced signal (Tebaldi et al. 2011; Hawkins and Sutton 2011).
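As a minimal illustration of this decomposition (a sketch, not the authors' code; the series below are synthetic), an ordinary least squares fit of a local seasonal anomaly on the smoothed global mean temperature change yields the pattern as the slope, with the residuals serving as a first-order estimate of unforced variability:

```python
import numpy as np

def fit_pattern(local_anom, global_dT_smoothed):
    """Split a local anomaly series into forced change and unforced
    variability by regressing on smoothed global mean warming.

    local_anom        : local seasonal anomaly vs. 1981-2010 (length n)
    global_dT_smoothed: 30-yr running-mean global mean temperature
                        change vs. 1981-2010 (length n)
    Returns (k, b, residual): pattern slope, intercept, and the residual
    series; the forced change is k * global_dT_smoothed + b.
    """
    A = np.column_stack([global_dT_smoothed, np.ones_like(global_dT_smoothed)])
    (k, b), *_ = np.linalg.lstsq(A, local_anom, rcond=None)
    residual = local_anom - (k * global_dT_smoothed + b)
    return k, b, residual

# Synthetic grid cell that warms 1.5 degC per degC of global warming
rng = np.random.default_rng(0)
dT = np.linspace(0.0, 4.0, 120)                 # smoothed global warming
local = 1.5 * dT + 0.2 + rng.normal(0.0, 0.3, dT.size)
k, b, resid = fit_pattern(local, dT)
print(round(k, 2))  # recovered slope is close to 1.5
```

Scaling the recovered pattern by an alternative global temperature pathway, and adding back a residual series, is the essence of how both SMME surrogates and MCPR projections are assembled.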
d. Probability weighting
1) Equal-weighted CMIP5 ensemble
As a baseline for comparing the SMME and MCPR projections, we employ an equal-weighted CMIP5 ensemble. In interpreting this ensemble, we follow the approach of the IPCC. In particular, we note that, while in IPCC terminology the phrases very likely and likely bracket the 5th–95th-percentile and the 17th–83rd-percentile outcomes, respectively, the IPCC AR5 uses the 5th–95th-percentile range of long-term temperature change as projected by CMIP5 to bound the likely outcomes (see Collins et al. 2013, section 12.4.1.2). The underlying judgment that the CMIP5 archive does not adequately represent the tails of projected future temperature change is based upon the observation that the likely (17th–83rd) range of the transient climate response (TCR; Cubasch et al. 2001), inferred from multiple lines of evidence, corresponds to the 5th–95th-percentile range of the TCR from the CMIP5 models (Collins et al. 2013), as well as more general informal expert assessment of confidence in GCM projections. We consequently compare the 5th–95th percentile of temperature projections from the equal-weighted CMIP5 ensemble with the 17th–83rd percentile of the probability distributions from the SMME and MCPR methods. For precipitation projections, we do not make such an adjustment.
2) Surrogate/model mixed ensemble method
The SMME method was used in Houser et al. (2015) and was originally described in Rasmussen and Kopp (2015); the description here parallels that therein. First, the unit interval [0, 1] is divided into 10 bins of unequal width; more bins are allocated to the tails of the interval to ensure sampling of outcomes that are not captured by the CMIP5 models. The bins are centered at the 4th, 10th, 16th, 30th, 50th, 70th, 84th, 90th, 94th, and 99th percentiles. The bounds and center of each bin are assigned corresponding quantiles of global mean temperature from the MAGICC6 output; likewise, each CMIP5 model is placed into a bin on the basis of its projected change in global mean temperature from 1981–2010 to 2080–99.
If there are fewer than two CMIP5 models in a bin, model surrogates are produced to raise the total number of models and surrogates to two. Model surrogates are generated by taking the MAGICC6 projected annual global mean temperature time series that corresponds to the bin’s middle quantile. In the case in which there is no CMIP5 output available in the bin, two models are selected that have global mean temperature projections that are close to the bin and, where possible, one model pattern has a net increase in CONUS precipitation and one has a net decrease (or lesser increase) in CONUS precipitation. For bins with a single CMIP5 model, a model is selected with a precipitation pattern that is either identical or complementary to the one in the bin. Last, the patterns from the selected models are scaled by the global mean temperature projection and the same model’s residuals are added, creating a surrogate model that includes both forced change and unforced variability. Tables 1–4 list the models used to generate each pattern as well as their respective global mean temperature bin assignments.
The models and surrogates in the final probability distribution are weighted equally in each bin such that the total weight of the bin corresponds to the target distribution for 2080–99 temperature. For instance, if four models are in the bin centered at the 30th percentile, bounded by the 20th–40th percentiles, each will be assigned a probability of 20%/4 = 5%. Thus, the projected distribution for global mean temperature approximates the target (see Fig. 3 and also Fig. A6 in the online supplemental material).
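The bin-weighting arithmetic can be sketched as follows (a simplified illustration; the bin edges and model assignments here are hypothetical):

```python
# Sketch of SMME probability weighting: each bin's total probability mass
# (its quantile width) is split equally among the models/surrogates in it.
bin_edges = {  # bin center quantile -> (lower, upper) bounds (illustrative)
    0.30: (0.20, 0.40),
    0.50: (0.40, 0.60),
}
models_in_bin = {  # hypothetical membership after binning by 2080-99 warming
    0.30: ["model_A", "model_B", "model_C", "model_D"],
    0.50: ["model_E", "surrogate_1"],
}

weights = {}
for center, (lo, hi) in bin_edges.items():
    members = models_in_bin[center]
    for m in members:
        weights[m] = (hi - lo) / len(members)  # equal split within the bin

print(weights["model_A"])  # 0.20 / 4 = 0.05, as in the text's example
```

Summed over all bins, the weights reproduce the MAGICC6 target distribution for 2080–99 global mean temperature.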
3) Monte Carlo pattern/residual method
In the Monte Carlo pattern/residual (MCPR) method, we use the CMIP5 output as a source of patterns and residuals but do not directly retain any model output. Instead, we divide the unit interval [0, 1] into 100 equal bins and take the quantile of MAGICC6 global mean temperature projections corresponding to the center of each bin (i.e., the 0.5th, 1.5th, 2.5th, etc., percentiles). We generate a pool of candidate patterns by replicating the list of patterns a sufficient number of times to meet or exceed the number needed and then sample without replacement from the pool to assign a pattern to each bin. We sample without replacement from an identical pool to assign a residual time series to each bin. We then use the global mean temperature projection, the pattern, and the residual time series to generate a projection for each bin. Each projection is of equal probability, although patterns and/or residuals could alternatively be weighted (e.g., by historical performance or by pattern accuracy in reproducing GCM results). For each bin, the same pattern and residual assignments are used to project both temperature and precipitation.
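A minimal sketch of this sampling scheme (pattern and residual names are hypothetical placeholders):

```python
import random

def assign_to_bins(items, n_bins, rng):
    """Replicate a list of items enough times to cover n_bins, then sample
    without replacement from that pool, so each item appears a nearly
    equal number of times across the bins."""
    copies = -(-n_bins // len(items))  # ceiling division
    pool = items * copies
    return rng.sample(pool, n_bins)    # sample without replacement

rng = random.Random(42)
patterns = [f"pattern_{i}" for i in range(35)]    # e.g., one per CMIP5 model
residuals = [f"residual_{i}" for i in range(35)]

# One pattern and one residual per percentile bin; the two draws are
# independent, so pattern-residual pairings are shuffled.
bin_patterns = assign_to_bins(patterns, 100, rng)
bin_residuals = assign_to_bins(residuals, 100, rng)
print(len(bin_patterns), len(bin_residuals))  # 100 100
```

Each of the 100 bins then combines its MAGICC6 quantile pathway, its sampled pattern, and its sampled residual series into one equally weighted projection.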
e. Daily climate projections
1) Global climate model output
Although state-of-the-art GCMs can achieve resolutions of ~50 km × 50 km (e.g., Delworth et al. 2012), GCM results from CMIP5 are calculated with horizontal resolutions that are too coarse [i.e., ~1°–2°; see Flato et al. (2013, their Table 9.1)] to attempt to characterize vulnerabilities and impacts at the county level. In addition, GCM projections directly from the CMIP5 repository contain systematic model biases that must be corrected before being employed to address climate impacts (Ho et al. 2012). Projections of average temperature, minimum daily temperature, maximum daily temperature, and precipitation (all monthly averaged) were obtained from a BCSD archive derived from selected CMIP5 models (Brekke et al. 2014). An inventory of the models used in this study with each RCP is shown in the online supplemental material (Table A2). For the CONUS, projections are disaggregated to ⅛° × ⅛° (~14 km) horizontal resolution, whereas ½° × ½° (~56 km) model output with global coverage over land only is used to provide projections for Alaska and Hawaii.
As with all climate downscaling techniques, providing climate variables at higher spatial and temporal resolutions does not necessarily make projections any more reliable than raw results from the underlying GCM. More detail is not always indicative of superior information. Statistical downscaling techniques assume that current relationships between local and large-scale climate variables will remain the same in future climates, which may or may not be strictly true. Investigators should ideally employ multiple statistical downscaling methods to gauge the uncertainties associated with their method. Various approaches are given in the literature (e.g., Stoner et al. 2013; Mahmood and Babel 2013; Pierce et al. 2014; McGinnis et al. 2015).
Starting with the BCSD model projections, we apply the delta method (Ramirez-Villegas and Jarvis 2010) and then map and add the anomalies to observed temperature and precipitation normals (1981–2010) at stations from the Global Historical Climatology Network (GHCN; Arguez et al. 2012; http://go.usa.gov/KmqH). Geographic county centroids are then mapped to the nearest GHCN station [see section 2e(2)]. The GHCN measures and records daily meteorological variables worldwide and is the most comprehensive set of climate data within the United States. Station-level data take into account local meteorological phenomena, such as the urban heat island and land–sea interaction, neither of which is reproduced well by the gridded model output. Only GHCN stations that met the two strictest National Climatic Data Center data-completion requirements for the definition of 30-yr monthly climate normals were used. Those are stations with 1) complete records and/or 2) no more than five years missing and no more than three consecutive years missing among sufficiently complete years.
2) County–weather station mapping
Since measured variables differ by station and because record lengths and data completeness may also vary, different GHCN station mappings exist for temperature and precipitation. Each geographic county centroid is mapped to the nearest GHCN station that meets either data-completion requirement; no additional attention was given to the geographic placement of each station. Note that, for large counties or for counties with complex terrain, baseline climate can spatially vary dramatically and a single representative weather station may not well characterize the average climate of the county. In some cases, multiple counties are mapped to the same weather station. Details regarding the mapping of geographic county centroids to GHCN weather stations are given in the online supplemental files.
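A nearest-station lookup of this kind can be sketched with the haversine distance (station IDs and coordinates below are illustrative, not the mapping used in the study):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_station(centroid, stations):
    """Map a county centroid (lat, lon) to the id of the closest station."""
    return min(stations, key=lambda s: haversine_km(*centroid, *stations[s]))

stations = {  # id -> (lat, lon); illustrative values only
    "STATION_NYC": (40.779, -73.969),
    "STATION_LAX": (33.938, -118.389),
}
print(nearest_station((40.71, -74.01), stations))  # STATION_NYC
```

In practice the mapping would also filter stations by variable availability and data completeness before taking the minimum, as described above.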
3) Daily projections
Both GCM output and surrogate output are treated at the monthly average level. As described in Rasmussen and Kopp (2015) and as is standard with the BCSD downscaling technique (Wood et al. 2004), to generate daily temperature and precipitation, we assume that the relationship between the monthly means and the daily values comes from a stationary distribution (e.g., Wood et al. 2002). We randomly assign each future year to a historical year between 1981 and 2010. Monthly averages are mapped to daily values from the GHCN stations using the additive relationship for temperature [Eq. (2)] or the multiplicative relationship for precipitation [Eq. (3)] from that historical year:

t_j(d) = t*_j(d) + [T_j(m) − T*_j(m)],    (2)

p_j(d) = p*_j(d) × P_j(m)/P*_j(m),    (3)

where t_j(d) and p_j(d) are the projected daily temperature and precipitation at station j on day d of month m, T_j(m) and P_j(m) are the projected monthly means, and asterisks denote the corresponding daily values and monthly means from the assigned historical year.
Where daily observations are missing from the 30-yr historical record, we fill in the missing days and months using relationships between daily and monthly values from gridded datasets and between the climatological 30-yr normal value at the GHCN station (see section “a” of Appendix A in the online supplemental material). Although a single downscaling method is employed in this study, a comparison between our BCSD-derived daily projections and alternative methods that downscale to the daily level (e.g., localized constructed analogs or an asynchronous regional regression model) could be of interest (Pierce et al. 2014; Stoner et al. 2013).
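The additive and multiplicative monthly-to-daily mappings can be sketched as follows (a simplified illustration with hypothetical station values, not the bias-corrected pipeline itself):

```python
import numpy as np

def daily_temperature(monthly_mean_proj, daily_hist, monthly_mean_hist):
    """Additive mapping: shift historical daily temperatures so that
    their monthly mean matches the projected monthly mean."""
    return daily_hist + (monthly_mean_proj - monthly_mean_hist)

def daily_precip(monthly_mean_proj, daily_hist, monthly_mean_hist):
    """Multiplicative mapping: scale historical daily precipitation so
    that its monthly mean matches the projected monthly mean; dry days
    stay dry."""
    return daily_hist * (monthly_mean_proj / monthly_mean_hist)

# A (very short) hypothetical historical month at one station
t_daily = np.array([24.0, 26.0, 28.0])   # degC
p_daily = np.array([0.0, 4.0, 8.0])      # mm/day
t_new = daily_temperature(29.0, t_daily, t_daily.mean())
p_new = daily_precip(6.0, p_daily, p_daily.mean())
print(t_new.mean(), p_new.mean())  # 29.0 6.0
```

Note that the multiplicative form preserves the historical sequence of wet and dry days while rescaling intensity, which is why it is preferred for precipitation.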
a. Temperature projections
As expected, the equal-weighted CMIP5 ensemble fails to produce the upper tail of the MAGICC6 global temperature distribution; at the 95th percentile, the CMIP5 projection for RCP 8.5 in 2080–99 is ~2°C cooler than that of the MCPR and SMME methods. For the lower tail and center of the cumulative distribution function (CDF), however, the three methods are generally within 1°C of one another (Fig. 4b).
Over the CONUS, the median change in 10-yr running-average temperature for all RCPs is roughly equivalent between the equal-weighted CMIP5 ensemble and the SMME and MCPR methods (relative to 1981–2010; Fig. 5). The upper tail from the probabilistic methods is again not captured by the CMIP5 ensemble, however: the 95th percentile from CMIP5 is ~1°C less than that of the MCPR and SMME methods (RCP 8.5 for 2080–99; Fig. 4d). Under the lowest emissions pathway (RCP 2.6 for 2080–99), by contrast, the methods agree at the 95th percentile: 2.6°C (CMIP5), 2.6°C (MCPR), and 2.7°C (SMME) (Table 5). In addition, all methods generally agree on the likely (17th–83rd for SMME and MCPR; 5th–95th for CMIP5) range of 2080–99 projected CONUS temperatures (RCP 8.5): 3.4°–6.7°C (CMIP5), 3.4°–6.9°C (SMME), and 3.5°–6.5°C (MCPR) (Table 5). For CONUS subregions (defined in Fig. A7 of the online supplement), the likely range from the probability distributions (17th–83rd) is generally within 0.5°C of the CMIP5 ensemble range (5th–95th) (Table 5).
By the end of the century under RCP 8.5, all methods project very similar likely ranges (17th–83rd for SMME and MCPR; 5th–95th for CMIP5) of June–August (JJA) CONUS temperature increase: 3.8°–7.4°C (CMIP5), 3.8°–7.3°C (SMME), and 3.8°–7.3°C (MCPR), with a 5% chance that average JJA temperatures could rise by as much as 9.2°C (SMME and MCPR) (Table A3 in the online supplement). Overall, late-century 5th- and 50th-percentile geographic patterns of warming are comparable among methods: in general, less than 1°C difference in most areas [both December–February (DJF) and JJA] (Figs. 6 and 7, respectively). The greatest 5th- and 50th-percentile JJA warming occurs over the upper Great Plains, the upper Midwest, and areas over the mountain states in the western United States. These areas, in addition to Alaska and New England, also warm the most during DJF by the end of the century and are relatively consistent among the three ensembles at the 5th and 50th percentiles. At the 95th percentile, JJA temperature projections of the SMME and MCPR methods are similar, with much of the CONUS and Alaska experiencing at least a 9°C rise in temperature by the end of the century. There is more disagreement for DJF, however: 95th-percentile DJF temperature increases from the MCPR method are roughly 1°–4°C warmer than those from the SMME method over the Great Plains and the upper Midwest.
To compare the influence of the different methods on projections of temperature extremes, we estimate the number of “extremely warm” days for which the maximum temperature is above 35°C and the number of “extremely cold” days for which the minimum temperature is below 0°C. Taking a population-weighted average of historical county-level daily maximum temperatures, we estimate that the average American experiences nearly 15 days each year for which the maximum temperature is greater than 35°C and 74 days for which the minimum temperature is less than 0°C (1981–2010). By 2080–99 under RCP 8.5, the CMIP5, MCPR, and SMME methods all project that the number of extremely warm days will likely (17th–83rd for SMME and MCPR; 5th–95th for CMIP5) more than triple (see Table A4 in the online supplement)—a rate that is faster than that of annual temperatures, and all methods agree that the number of extremely cold days will likely be reduced by one-half (see Table A5 in the online supplement). Population data are taken from the 2010 U.S. Census and are not projected to future time periods. In spatial terms, very few differences exist among methods in the expected (i.e., weighted ensemble average) number of projected days of extremely warm and cold temperatures (see Figs. A8 and A9, respectively, in the online supplement). The MCPR and SMME methods suggest that there is a 5% chance that the current number of days of extremely warm temperature could increase almost eightfold (see Table A4 in the online supplement) and that days of extremely cold temperature could decline ~75% (see Table A5 in the online supplement). By comparison, the hottest CMIP5 model projects roughly a sevenfold increase in extremely warm days and an ~64% decrease in extremely cold days (Fig. A10 in the online supplement).
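Counting threshold exceedances and averaging them by population is straightforward; a sketch with hypothetical county values (the thresholds match those used in the text):

```python
import numpy as np

def count_extreme_days(tmax, tmin, hot=35.0, cold=0.0):
    """Count 'extremely warm' days (tmax > 35 degC) and 'extremely cold'
    days (tmin < 0 degC) in daily series for one year."""
    return int(np.sum(tmax > hot)), int(np.sum(tmin < cold))

def population_weighted(values, pops):
    """Population-weighted average of county-level values."""
    values = np.asarray(values, dtype=float)
    pops = np.asarray(pops, dtype=float)
    return float(np.sum(values * pops) / np.sum(pops))

# Two hypothetical counties: 20 and 5 hot days; populations 1M and 3M
avg_hot_days = population_weighted([20, 5], [1e6, 3e6])
print(avg_hot_days)  # (20*1e6 + 5*3e6) / 4e6 = 8.75
```

The study's national figures are the same calculation applied over every county's station-level daily projections, with 2010 U.S. Census populations as weights.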
b. Precipitation projections
For all methods that consider precipitation, we define the likely range as the 17th–83rd percentiles and the very likely range as the 5th–95th percentiles. By the end of the twenty-first century, CONUS annual precipitation will likely (67% probability; MCPR, SMME, and CMIP5) increase (Table 6). In addition, all methods project that the Northeast, Midwest, and upper Great Plains are likely to experience more winter precipitation around the same time (RCP 8.5) (see Table A6 in the online supplement). We also find that wetter springs are very likely (90% probability; MCPR, SMME, and CMIP5) in the Northeast, Midwest, and upper Great Plains and likely in the Northwest and Southeast (MCPR, SMME, and CMIP5; see Table A7 in the online supplement). An increase in autumn precipitation is likely in the Northeast, Midwest, upper Great Plains, and Southeast. In general, many of the CMIP5 models project mid- and high-latitude precipitation increases, with changes becoming more pronounced as temperature increases (see Collins et al. 2013, their Figs. 12.10 and 12.22). The MCPR, SMME, and CMIP5 projections show that the Southwest is likely to experience drier springs, whereas drier summers are likely in the Great Plains and the Northwest (Table A8 in the online supplement). CMIP5 projects slightly drier average spring conditions in the Southwest than do the probabilistic ensembles (Figs. 5e,f and Fig. A11 in the online supplement), but for other regions and time periods the median precipitation projections from SMME are slightly drier than those of CMIP5 and MCPR.
c. Sources of projection uncertainty
For decision-making purposes, it is useful to examine future climate change projection uncertainty, which can be decomposed into 1) forced, 2) unforced, and 3) scenario (i.e., emissions) uncertainties, each of which can evolve with time and location (e.g., Hawkins and Sutton 2009). Similar to Hawkins and Sutton (2009), we estimate the evolution of the fractional contribution of all three uncertainty components over the twenty-first century for global and local scales. Hawkins and Sutton (2009) assume that unforced variability is time invariant (estimated as the residual from a fourth-order polynomial fit to the modeled regional and global mean temperatures), but we instead use the time series of unforced variability calculated from pattern scaling. (Methods for all component uncertainty calculations are described in Appendix A of the online supplemental material.)
Figure 8 shows the relative importance of each of the three uncertainty components for annual temperature globally and in four illustrative locations (Los Angeles, California; New Orleans, Louisiana; Portland, Maine; and Seattle, Washington). The year 2000 is chosen as the reference point. Over the globe, unforced variability dominates in the near term but falls to less than one-half of total variance around 2020. Scenario uncertainty becomes larger than uncertainty in the forced response around 2060 (Fig. 8a). For all four locations, up until the middle of the twenty-first century, projection uncertainty from unforced variability dominates. Only in the 2050s–60s, as the variance associated with uncertainty in the forced change and in the scenario increases, does the variance from unforced variability fall to less than one-half of the total. Consistent with regional breakdowns from Hawkins and Sutton (2009), there is very little projection uncertainty associated with emissions scenarios until the 2040s.
d. Projection uncertainty due to unforced variability
Even at the global scale, the forced climate change signal can sometimes be masked by unforced variability. In most multimodel studies, a single realization of each GCM is used, primarily to identify forced trends. By contrast, several runs of the same model initialized from different atmospheric states yield multiple estimates of the weather in any given year. If the external forcing is identical across runs, differences between simulations are attributable solely to internal variability. Although computationally expensive, such ensembles can estimate near-term projection uncertainty due to year-to-year fluctuations in weather (e.g., Kay et al. 2015; Deser et al. 2014).
For example, Kay et al. (2015) construct a 30-member ensemble with the CESM1(CAM5) model (Meehl et al. 2013; Hurrell et al. 2013). Each member simulation uses slightly different atmospheric initial conditions while the external anthropogenic forcing remains constant (RCP 8.5). The authors calculate 10- and 20-yr global temperature trends starting from every year from 1990 to 2009 and from 2030 to 2049 and then construct histograms of the trends. The spread of each distribution is an estimate of projection uncertainty due to unforced variability (Fig. 9, red histogram).
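The trend-histogram construction can be sketched as follows. This is a simplified stand-in for the Kay et al. (2015) calculation; the synthetic ensemble, parameter names, and trend magnitude are hypothetical:

```python
import numpy as np

def trend_distribution(ens, start_years, length, year0):
    """Least-squares temperature trends (deg C per decade) for every
    ensemble member and every start year; ens is (members, years),
    with column 0 corresponding to calendar year `year0`."""
    t = np.arange(length)
    trends = []
    for member in ens:
        for s in start_years:
            seg = member[s - year0 : s - year0 + length]
            slope = np.polyfit(t, seg, 1)[0]  # deg C per year
            trends.append(10.0 * slope)       # convert to per decade
    return np.asarray(trends)

# synthetic 5-member ensemble, 1990-2029: 0.2 degC/decade trend + noise
rng = np.random.default_rng(0)
years = np.arange(1990, 2030)
ens = 0.02 * (years - 1990)[None, :] + 0.1 * rng.standard_normal((5, years.size))
trends = trend_distribution(ens, start_years=range(1990, 2000),
                            length=20, year0=1990)
```

The spread of `trends` plays the role of the red histograms in Fig. 9: with identical forcing across members, it reflects unforced variability alone.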
For a particular prescribed forcing, Kay et al. (2015) note that the temperature projection spread of the 30-member CESM1(CAM5) ensemble aligns closely with the spread of an ensemble of CMIP5 models (each model contributing both its own forced and unforced components). As an extension, we assess whether superimposing only the unforced temperature components from an ensemble of CMIP5 models onto the CESM1(CAM5) forced component (RCP 8.5) produces a similar range of unforced variability. To do this, we add the unforced-variability component of global temperature from each model (and from each model surrogate, in the cases of the SMME and MCPR methods) to the CESM1(CAM5) forced component (Fig. 9). The resulting distributions of trends from both methods closely match those from Kay et al. (2015) (red). This approach may be useful for estimating projection uncertainty from unforced variability when computational resources for additional ensemble simulations are not available. These results are global; regional climates are generally more strongly affected by unforced variability (Kay et al. 2015). Further investigation should therefore consider how well records of unforced variability from CMIP5 reproduce the spread of local trends from multimember initial-condition ensembles.
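Concretely, the superimposition amounts to adding each model's unforced residual to one common forced trajectory. A schematic version, with synthetic stand-ins for both the forced series and the residuals:

```python
import numpy as np

# stand-in forced global-mean trajectory (an ensemble-mean-like series)
years = np.arange(2006, 2101)
forced = 0.04 * (years - 2006)  # hypothetical 0.4 degC/decade forced warming

# stand-in unforced residuals, one row per CMIP5 model (or model surrogate)
rng = np.random.default_rng(1)
residuals = 0.12 * rng.standard_normal((8, years.size))

# superimpose each model's unforced component on the common forced component;
# the trend spread across rows then estimates uncertainty from unforced
# variability, analogous to a multimember initial-condition ensemble
pseudo_ensemble = forced + residuals
```

Computing trend histograms from `pseudo_ensemble` (as in the previous subsection) mimics an initial-condition ensemble at a small fraction of the computational cost.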
Both the SMME and MCPR methods generate joint probability distributions of temperature and precipitation that originate from a prescribed PDF of global mean temperature. These are joint PDFs because, for each realization, we source the temperature and precipitation forced and unforced components from the same GCM. In contrast to the equal-weighted CMIP5 ensemble, which also generates joint estimates, the SMME and MCPR projections are consistent with probabilistic global mean temperature projections. The particular global mean temperature projections used consider a distribution of model parameters that is consistent with both historical observations and the IPCC’s consensus on equilibrium climate sensitivity and, thus, allow sampling of low-probability outcomes that are outside the range of GCM ensembles. Accordingly, the results of the SMME and MCPR methods are well suited for use in probabilistic risk analyses and are particularly ripe for integration with sector-specific impact models and damage functions (e.g., Deschênes and Greenstone 2011; Auffhammer and Aroonruengsawat 2011; Houser et al. 2015), including those jointly dependent on temperature and precipitation (e.g., Schlenker and Roberts 2009). Probabilistic projections facilitate impact estimates that incorporate physical climate projection uncertainty, which may be especially useful for decision-making under uncertain conditions. Furthermore, the decomposition of projection variance illustrates the importance of including unforced variability in estimates of future climate change. Applying impact functions that are based solely on forced changes would omit the primary driver of annual temperature uncertainty through the middle of the twenty-first century (Fig. 8).
The SMME and MCPR approaches span the range of possibilities regarding the correlation between GCM projections of forced changes and GCM projections of unforced change. The SMME approach assumes that these are perfectly correlated: the projected forced pattern from a given model is always used with the unforced residuals from the same model. The MCPR approach, by contrast, assumes that these are fully decoupled, which is unlikely to be true. Feedbacks between the two components are possible; for instance, the external forcing may affect the properties of background variability, such as its variance. For temperature, we find strong positive correlation between the forced change and background variability at many locations and in many seasons, but for precipitation we find fewer cases of positive correlation (see Fig. A12 in the online supplement). Positive correlation between components (preserved in the SMME method) could widen the probability distributions relative to assuming independence (as in the MCPR method); nevertheless, despite the differences between the two approaches, the tables indicate few instances in which the distributions of 20-yr average local temperature and precipitation projections substantially deviate from one another.
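The correlation diagnostic can be sketched as a per-location Pearson correlation across models between each model's forced change and the spread of its unforced residual. The function name and the per-model values below are hypothetical:

```python
import numpy as np

def component_correlation(forced_change, resid_std):
    """Pearson correlation across models between the local forced change
    and the standard deviation of each model's unforced residual."""
    return np.corrcoef(forced_change, resid_std)[0, 1]

# hypothetical per-model values at one location and season: models that
# warm more locally also show larger background variability
forced_change = np.array([2.1, 2.8, 3.4, 4.0, 4.6])   # deg C
resid_std     = np.array([0.30, 0.33, 0.38, 0.41, 0.47])  # deg C
r = component_correlation(forced_change, resid_std)
```

A strongly positive `r`, as in this toy case, is the situation in which SMME's pairing of forced and unforced components from the same model matters most.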
As compared with temperature, regional precipitation estimates exhibit a wider range of outcomes in both the direction and magnitude of changes. This is likely due to disagreement in the response to anthropogenic forcing over the United States across GCMs (Fig. A4 in the online supplement). While future model development efforts should address these disagreements, current approaches that may narrow the range of outcomes include alternative model-weighting schemes, such as model weights that are based in part on historical precipitation performance rather than on projected global mean temperature. Another example of projection disagreement is the late-twenty-first-century median CONUS temperature anomaly (RCP 8.5), in which the MCPR projection is ~0.5°C cooler than the CMIP5 and SMME projections (Fig. 5). This difference may be due to the MCPR method selecting a greater number of models that have a cooler average forced temperature pattern over the CONUS.
Both the SMME and (especially) MCPR methods rely upon pattern scaling, the limitations of which have been extensively summarized by Tebaldi and Arblaster (2014). These limitations should be kept in mind when interpreting these results; in particular, forced patterns represent long-term averages of climate parameters and may omit nonlinear effects, such as climate feedbacks, that could alter rates of warming. Likewise, pattern scaling is intended for scenarios with continuously increasing forcing. For strong mitigation scenarios in which forcing increases and then decreases (e.g., RCP 2.6), separate patterns for each phase of the pathway may be more appropriate. In the MCPR approach, the scaling up of some of the coolest GCMs likely pushes the boundaries of pattern scaling and may be inappropriate; the same applies to scaling down the warmest GCMs. In contrast, SMME selectively scales GCMs on the basis of their global mean temperature, which can in turn lead to a bias in model selection. In the SMME approach, only the warmest CMIP5 models are scaled upward to represent the tail of the PDF of global mean temperature. This makes the high-end projections vulnerable to the behavior of these models (e.g., MIROC-ESM-CHEM and GFDL CM3) and biases the results toward these models’ patterns of local change because they are sampled more often. This is in contrast to 1) the low end of the distribution, where uncertainty is represented by a more diverse set of models, and 2) the MCPR approach, which considers all GCM patterns throughout the probability distribution. An exercise in which the models in the upper tail of the distribution are changed is presented in Appendix A of the online supplement.
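For reference, the basic pattern-scaling decomposition (regressing a local series on global-mean temperature, with the residual serving as the unforced component) can be sketched as follows; the function and variable names are ours, not from the paper:

```python
import numpy as np

def pattern_scale(local, gmt):
    """Split a local annual-mean series into a 'forced' component that is
    linear in global-mean temperature (GMT) and an 'unforced' residual.
    The regression slope is the local scaling pattern (degC per degC GMT).
    """
    A = np.column_stack([gmt, np.ones_like(gmt)])  # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, local, rcond=None)
    forced = A @ coef
    return forced, local - forced, coef[0]
```

Scaling the fitted pattern by an alternative global-mean trajectory (e.g., one drawn from a probabilistic SCM ensemble) then produces a surrogate forced projection, which is where the extrapolation concerns discussed above arise.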
It is important to stress that these results are conditional upon one particular PDF of global mean temperature change. These same methods for probabilizing the CMIP5 projections can, however, be employed with any PDF of global temperature change. Moreover, in the presence of deep uncertainty, it might be appropriate to apply more than one probability distribution using methods that rely on multiple priors (e.g., Heal and Millner 2014); both the SMME and MCPR methods could be implemented in such a framework. Finally, extreme temperature pathways above the 95th percentile from MAGICC are not robust or reliable.
Some climate risks may be less amenable to probabilistic analysis that is based on PDFs like those produced here and may instead require scenario-based, “possibilistic” analysis (e.g., Whiteman et al. 2013). These include risks arising from feedbacks that might amplify global mean temperature increase but are not captured in the SCM, such as omitted carbon-cycle feedbacks that include the release of methane from permafrost or hydrates (Archer 2007). They also include risks arising from factors affecting local projections that are poorly captured in GCMs, such as midlatitude extremes that may be influenced by the failure of models to capture the pace of Arctic sea ice loss (Francis and Vavrus 2012).
While projections from GCM ensembles like those produced by CMIP5 characterize the likely (17th–83rd percentile) range of temperature and precipitation change, they undersample extreme behavior, which may be critical for effective risk management. In this study, we present two alternative approaches for generating time series of joint probabilistic projections of temperature and precipitation that include tail risk. Projections from both probabilistic methods and an equal-weighted GCM ensemble are available online and are summarized in the text and the online supplemental appendixes for both multiple lead times and U.S. subregions.
The CMIP5 models substantially underestimate the 95th-percentile projections from the probabilistic methods. We find that by the end of the twenty-first century there is a 5% chance that annual CONUS temperature change could be as high as ~8°C over 1981–2010 levels—roughly 1°C warmer than the hottest CMIP5 model projection (RCP 8.5). We also find that there is a 5% chance that the average American could experience nearly 4 months of the year in which daily maximum temperature is 30°C or warmer. Strong CO2 emissions mitigation can greatly reduce these risks, however. Under RCP 2.6, we project that increases in CONUS temperature will very likely (90% probability) remain at or under 2.7°C by the end of the century and that, with the same probability, the number of extremely warm days experienced by the average American will remain below ~40 days yr−1.
Decomposing GCM output into forced and unforced components of climate change through pattern scaling can provide records that are useful for uncertainty quantification. We find that uncertainties associated with local temperature projections through 2050 are almost entirely due to unforced variability, with a small fraction arising from uncertainty in the forced component of climate change. By the end of the twenty-first century, uncertainty associated with CO2 emissions dominates both at global and local scales.
We thank M. Oppenheimer for helpful discussion and two anonymous reviewers for their comments. DMR and REK were supported by the Risky Business Project and by the Climate Impact Lab through the University of Chicago 1896 Fund. We acknowledge the World Climate Research Programme’s Working Group on Coupled Modeling, which is responsible for CMIP, and we thank the climate modeling groups (listed in Table A2 of the online supplemental material) for producing and making available their model output. For CMIP the U.S. Department of Energy’s Program for Climate Model Diagnosis and Intercomparison provides coordinating support and led development of software infrastructure in partnership with the Global Organization for Earth System Science Portals.
Supplemental information related to this paper is available at the Journals Online website: http://dx.doi.org/10.1175/JAMC-D-15-0302.s1.
Current affiliation: Woodrow Wilson School of Public and International Affairs, Princeton University, Princeton, New Jersey.