1. Introduction
Understanding future precipitation changes in a warming world is critical to empower communities to make informed decisions around adaptation or climate-related policy. Precipitation provides drinking water, is relied on for agriculture, and is used in many sectors of industry, so changes in water availability need to be understood to make the most of this limited resource. Droughts cause severe strain on people and ecosystems. Storms and extreme rainfall events also cause flooding and destruction. Worldwide, flooding affects more people than any other natural disaster (Wallemacq and House 2018).
Unfortunately, despite the importance of precipitation for daily life, future changes in precipitation are much less certain than changes in temperature (Collins et al. 2013; Tebaldi et al. 2011). In this study we look at low levels of global warming, in particular 1.5° and 2°C, which are relevant to the Paris Agreement and associated policy decisions. A challenge at these levels of warming is that the signal of precipitation change can be difficult to distinguish from the noise, because the changes are often small relative to internal variability (Hawkins and Sutton 2011) and require larger ensemble sizes to detect than temperature trends (Deser et al. 2012). There are nonlinear effects in the climate system and differences between the transient and equilibrium climate responses, so changes based on higher levels of warming cannot simply be used to estimate impacts for 1.5° and 2°C (Good et al. 2016; Mitchell et al. 2016). Furthermore, precipitation is tightly connected to atmospheric and ocean dynamics and its changes are seasonally dependent, so interpreting changes in precipitation and their impacts requires careful analysis.
The most common approach when investigating future changes of precipitation is to use general circulation models (GCMs) that dynamically simulate the physics of the atmosphere and ocean. Different GCMs use varying representations of the physics, so model intercomparison projects (MIPs) are frequently used to provide a range of different possible futures. The MIPs used in this study (also referred to as modeling activities) are phases 5 and 6 of the Coupled Model Intercomparison Project [CMIP5 (Taylor et al. 2012) and CMIP6 (Eyring et al. 2016; O’Neill et al. 2016)], the Half a Degree Additional Warming, Prognosis and Projected Impacts project (HAPPI) (Mitchell et al. 2017), the 2018 U.K. Climate Projections (UKCP18) (Murphy et al. 2019), and the High-End Climate Impacts and Extremes project (HELIX) (Wyser et al. 2017). MIPs provide a common experimental protocol under which multiple modeling groups run simulations to produce multimodel ensembles of climate projections. The use of MIPs has been a successful approach, and Fig. 1 shows that around half of the impact studies in the Intergovernmental Panel on Climate Change (IPCC) Special Report on Global Warming of 1.5°C (IPCC 2018) result directly from one of these MIPs.
In producing its assessment reports, the IPCC strives to compile information across all of the available literature. However, it relies heavily on the latest model intercomparison project to determine the likelihood of changes in climate. For example, in the IPCC Fifth Assessment Report (AR5), the CMIP5 results were compared with those of the previous activity (CMIP3) to see how they differ. However, the keynote plots in the IPCC “Atlas of Global and Regional Climate Projections” were solely from the CMIP5 ensemble. In the coming years, there will be a strong focus on analyzing the latest results from CMIP6, which will contribute to the IPCC Sixth Assessment Report (AR6). CMIP6 samples a broad range of current model diversity with generally higher model complexity than CMIP5, so there are many benefits to using this new resource. However, single MIPs such as CMIP5 can underestimate the possible range of future climate change (Deser et al. 2020). At the same time, GCMs have a range of climate sensitivities to greenhouse gas forcing (Sherwood et al. 2014), and CMIP6 is known to have a large proportion of high-climate-sensitivity models (Zelinka et al. 2020), which may overestimate the upper bound of warming (Tokarska et al. 2020). So, especially in regions with low confidence in precipitation change, it could be counterproductive to disregard the huge resource of previous climate model results and focus on CMIP6 alone.
Within each MIP, a common experimental design is used. However, different experimental designs can lead to differing impacts of 1.5°C warming, related to factors such as the rate of global warming and the aerosol forcing relative to greenhouse gas forcing (Seneviratne et al. 2018; King et al. 2018). The large CMIP5 and CMIP6 activities use a number of different emissions scenarios, so they do include a measure of scenario uncertainty. However, there are other uncertainties relating to experimental design, such as the use of high-resolution cloud- or convection-resolving models compared to models that parameterize these processes, or the inclusion of carbon-cycle feedbacks compared to prescribed greenhouse gas forcing. The differences in climate response between transient and equilibrium climates are also difficult to diagnose using traditional scenario-based MIPs, which produces another source of experimental design uncertainty that is relevant to policy decisions. Our study aims to take the comprehensive approach of analyzing results from MIPs that use different modeling approaches. Here we examine uncertainty that is due not just to different emission pathways in a single MIP, but also to differing experimental setups in different MIPs.
There is a risk that relying on a single MIP may result in overconfidence in climate projections by missing some uncertainty due to experimental design. In addition, considering different emissions pathways at lower levels of warming can give different precipitation changes (Mitchell et al. 2016). On the other hand, comparisons between CMIP3 and CMIP5 high emissions pathways show consistent changes in seasonal precipitation (Knutti and Sedláček 2013), which increases the confidence in those results. Hence determining agreement in precipitation projections can enhance (where they agree) or reduce (where they disagree) our confidence in the individual projections.
In Fig. 1, only a very small proportion of studies considered a combination of approaches to obtain multiple lines of evidence about future changes. Combining large multimodel ensembles of simulations with differing experimental design and skill at representing the current climate is not straightforward. We note that it is not always clear that improved model skill for the present day will result in improved future projections (Knutti et al. 2010). However, there is ongoing work with regard to weighting simulations depending on their representation of relevant climate phenomena or relation to other simulations (e.g., Sanderson et al. 2017a; Merrifield et al. 2020; Brunner et al. 2020). This has the potential to constrain the likely range of future projections, for example by down-weighting high climate sensitivity models that give poor performance over the historical period.
This study focuses on the agreement across multiple modeling activities of estimates of precipitation change at specific levels of global warming (e.g., 1.5° and 2°C). We compare changes in yearly mean precipitation and the yearly maximum of daily precipitation (“extreme precipitation”). We use averages over land of updated reference regions created for the IPCC AR6 (Iturbide et al. 2020; see Fig. S1 in the online supplemental material) to investigate different regional signals. Time slices of transient simulations are used to examine specific levels of global warming. We consider each of the MIPs used in this study as providing plausible representations of future climate and do not weight any one higher than the others. This is reasonable given their individual use in different analyses of projected precipitation change.
We first show the agreement in sign of significant changes to 1.5° and 2°C warming across the five climate modeling activities. This approach identifies regions where modeling activities agree in a significant change, and regions in which the change is more uncertain. The significance is determined from the 5%–95% confidence intervals of the “central estimates” calculated for each MIP. The central estimate is calculated by combining the model estimates within each MIP, taking into account the model spread and sampling uncertainty for each model. A combined central estimate for results across the MIPs is also calculated.
In addition to showing the combined changes and whether the changes are significant, we also consider uncertainty in each of the modeling activities’ results and the combined central estimate. The magnitude of uncertainty bounds and the extent of overlap between uncertainty estimates is explored. Furthermore, to dig deeper into uncertainty due to experimental design, we undertake comparisons between changes calculated for different experimental designs or scenarios. This is done using two individual models that each have large ensembles of simulations, as well as by comparing different scenarios within the CMIP5 and CMIP6 activities.
This analysis illustrates the potential of combining the agreement across different modeling activities with a more detailed examination of experimental design using single-model large ensembles. This approach provides a fuller picture of the so-called method uncertainty in these climate modeling activities. This is something that is difficult to quantify but is essential to address, especially in regions where the changes are not as clear as a single modeling activity would indicate.
2. Materials and methods
Methods for analyzing results from GCM simulations are presented below. Information about the specific climate model datasets is given in the appendix.
a. Climate indices and regions
For this analysis, we focus on two precipitation indices. We use the annual mean precipitation (referred to as “mean precipitation”) and the yearly maximum of daily precipitation (referred to as “extreme precipitation”). The mean precipitation is used to indicate whether there is a change in the total amount of precipitation over a region. The extreme precipitation index is used to indicate whether there will be a change in the magnitude of precipitation in heavy rainfall events or storms. When looking at impacts in specific sectors and at local scales, indices that capture seasonality are also very useful, but we chose these two indices as they are widely applicable on a global scale.
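As an illustration, the two indices can be computed from daily model output along the following lines (a minimal sketch using the xarray library; the file name, variable name, and unit conversion are assumptions, not the study's actual processing code):

```python
import xarray as xr

# Hypothetical daily precipitation file; CMIP-style output stores "pr" in kg m-2 s-1.
pr = xr.open_dataset("pr_day_model_historical.nc")["pr"] * 86400.0  # convert to mm/day

# Annual mean precipitation ("mean precipitation" index).
pr_mean = pr.resample(time="1YS").mean()

# Annual maximum of daily precipitation ("extreme precipitation" index).
pr_max = pr.resample(time="1YS").max()
```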
When calculating the changes in precipitation between different specific warming levels, we focus on percentage changes, to show the changes relative to the model climatology. This gives a normalized metric of change, reflecting that a mean change of, for example, 0.2 mm day−1 in a low-rainfall area is likely to have a larger impact than the same change in a region with very high rainfall. The use of relative changes does mean that, in the presence of model biases, the same absolute change in precipitation will appear as different percentage changes. In addition, in areas of very low precipitation, showing relative (percentage) changes may overemphasize small changes in precipitation. To support these analyses, we additionally show results for absolute changes (mm day−1) as online supplemental material.
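For a single model and region, the relative and absolute changes between time slices reduce to the following (a minimal sketch with hypothetical values; the pooling of years is described in section 2c):

```python
import numpy as np

# Hypothetical regional-mean values (mm/day) pooled over the current-climate (0.9°C)
# and 1.5°C time slices for one model.
current = np.array([2.1, 1.9, 2.3, 2.0, 2.2])
warmer = np.array([2.3, 2.2, 2.4, 2.1, 2.5])

abs_change = warmer.mean() - current.mean()        # mm/day, as in the supplemental figures
pct_change = 100.0 * abs_change / current.mean()   # % of the model's own climatology
```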
For analysis of changes, region definitions were used as per Iturbide et al. (2020). These regions were developed as an update to regions used in the IPCC AR5 and the IPCC SREX report, using smaller regions in some parts of the world to achieve better climatic consistency within each region. A map of these regions is shown in Fig. S1 in the online supplemental material, labeled with the acronyms used for each region. The precipitation indices were first averaged over these regions before calculation of changes. Note that in the regions analyzed here, averages were calculated over land points only.
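The regional, land-only averaging could be performed as sketched below. This assumes the regionmask Python package (version 0.6 or later), which distributes the Iturbide et al. (2020) regions, coordinates named lat/lon, and a model land-fraction file (sftlf); the file names are hypothetical, and the study's actual workflow may differ:

```python
import numpy as np
import xarray as xr
import regionmask  # assumed to provide the AR6 regions of Iturbide et al. (2020)

# Hypothetical annual-mean precipitation index on the model grid.
pr_mean = xr.open_dataset("pr_annual_mean.nc")["pr"]

# 3-D boolean mask (region x lat x lon) for the AR6 land reference regions.
region_mask = regionmask.defined_regions.ar6.land.mask_3D(pr_mean)

# Restrict to land points using the model land fraction (sftlf, %), weight by cos(lat).
land = xr.open_dataset("sftlf_model.nc")["sftlf"] > 50.0
weights = np.cos(np.deg2rad(pr_mean.lat)) * land

# Land-only, area-weighted average of the index within each region.
regional_means = pr_mean.weighted(region_mask * weights).mean(dim=("lat", "lon"))
```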
b. Extracting 1.5° and 2°C time slices
Transient GCM experiments are designed around simulations of the historical period that then continue into the future under scenarios representing different emissions pathways. From these simulations, we can determine the climate state when each scenario reaches different levels of global warming. In this study, we use the common approach of selecting time slices (King et al. 2017; James et al. 2017). This approach does have the limitation that the climate in a transient simulation can differ from that of a simulation stabilized at the same level of warming, owing to effects that lag behind the warming of the atmosphere (e.g., ocean circulation and sea level rise) (Manabe et al. 1991; Held et al. 2010). The alternative is to run targeted simulations that stabilize at each specific level of warming, but this has only been done in a few cases (e.g., Sanderson et al. 2017b), so using time slices of transient simulations remains a widely used method.
First, a baseline is chosen as the start of the historical period (e.g., 1861–1900) to calculate the preindustrial reference temperature. Then 21-yr time slices are chosen as the first period for which the global mean temperature, averaged over the time slice, reaches the specific warming levels of 1.5° and 2°C relative to the baseline. For current climate, time slices for the warming level of 0.9°C are used to match observed warming to 2010. This is done, rather than taking a fixed time period, to keep the warming between the current and 1.5°C time slices consistent, thereby accounting for the variation in climate sensitivities between models. We note that this will inevitably result in different aerosol forcings between models in each of the current, 1.5°, and 2°C warmer worlds. As the historical simulations are not necessarily long enough to capture our current climate period (in CMIP5 they finish in 2005), they are extended by future scenario simulations where necessary. When more than one future scenario was available, the highest-emission scenario was used to extend the historical simulation for the current climate period. This prevents low-climate-sensitivity models (which reach 0.9°C later) from having current climate time slices as far into the future scenarios as would be the case using low-emission scenarios. Note that the current climate time slices are referred to as “Hist” in some figures.
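A minimal sketch of the time-slice selection for a single simulation is given below; the helper function and its inputs (an annual global-mean temperature series and the corresponding calendar years) are hypothetical, and the MIP-specific variations are described in the appendix:

```python
import numpy as np

def first_time_slice(gmst, years, level, baseline=(1861, 1900), width=21):
    """Return (start_year, end_year) of the first `width`-yr window whose mean
    global temperature anomaly relative to the baseline period reaches `level` (°C)."""
    base = gmst[(years >= baseline[0]) & (years <= baseline[1])].mean()
    anomaly = gmst - base
    for i in range(anomaly.size - width + 1):
        if anomaly[i:i + width].mean() >= level:
            return int(years[i]), int(years[i + width - 1])
    return None  # warming level not reached in this simulation

# e.g., slice_15 = first_time_slice(gmst_annual, calendar_years, 1.5)
```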
For CMIP5 and CMIP6, simulations from all future scenarios available are included in the analysis to maximize the number of samples. The exception to this is the results for section 3c, where the changes calculated using low- and high-warming scenarios were compared.
In this study we aim to keep the methodology of extracting specific levels of warming as consistent as possible. However, different experimental designs do mean that the time slices need to be calculated in different ways in some cases. These differences are described in the appendix for each dataset where relevant.
c. Statistical analysis
To estimate the change in a particular variable between two time slices, all of the years in each time slice for each model are pooled together. Then the ensemble mean response is determined based on all years of data for that particular model. The uncertainty range in the mean response is determined by randomly resampling each distribution with replacement 1000 times and calculating the mean response from each sample. The 5th–95th percentile range of the samples then gives the sampling uncertainty in the mean change.
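The resampling described above amounts to a simple bootstrap of the difference in means; a minimal sketch (assuming the pooled annual values for one model are available as 1-D arrays) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_change_with_ci(current_years, warmer_years, n_boot=1000):
    """Bootstrap the mean change between two pooled samples of annual values
    and return the change with its 5th-95th percentile sampling uncertainty."""
    change = warmer_years.mean() - current_years.mean()
    samples = np.empty(n_boot)
    for k in range(n_boot):
        c = rng.choice(current_years, size=current_years.size, replace=True)
        w = rng.choice(warmer_years, size=warmer_years.size, replace=True)
        samples[k] = w.mean() - c.mean()
    lo, hi = np.percentile(samples, [5, 95])
    return change, lo, hi
```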
When determining the significance of multimodel changes, for example in the IPCC report, it is common practice to use significance tests to determine whether changes are distinguishable from natural variability alongside thresholds for the proportion of models agreeing on the sign of the change (e.g., Tebaldi et al. 2011). However, these types of approaches do not provide a confidence interval around the multimodel change, making it difficult to combine uncertainty estimates of different multimodel datasets together.
Here, to combine each of the model estimates into a multimodel summary or so-called central estimate, we use the random-effects meta-analysis method (Cochran 1937; DerSimonian and Laird 1986). This methodology is commonly used in clinical studies to combine central estimates and uncertainty ranges from different studies, and it was applied to climate models in Uhe et al. (2019). Such a statistical approach takes into account both the sampling uncertainty from the random resampling (s_i) and the model spread (σ), which is taken as the standard deviation of the central estimates. From these quantities, a combined central estimate of change and an estimate of the uncertainty in that value are derived.
This calculation of central estimates is applied to combine different model estimates for each of the MIPs, and also finally to combine the central estimates of each MIP into an overall combined central estimate. The changes are referred to as statistically significant if the 5%–95% confidence interval does not include zero.
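A minimal sketch of a random-effects combination in the spirit of DerSimonian and Laird (1986) is shown below; it assumes each model's (or MIP's) sampling uncertainty has been converted to a standard error, and the exact treatment of the bootstrap intervals in the study may differ:

```python
import numpy as np

def random_effects_combine(estimates, std_errors):
    """Combine central estimates with within-estimate standard errors into a
    random-effects central estimate, its 5%-95% confidence interval, and a
    flag for statistical significance (interval excluding zero)."""
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(std_errors, dtype=float) ** 2         # within-estimate variances
    w = 1.0 / v                                          # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                   # heterogeneity statistic
    k = y.size
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                              # random-effects weights
    combined = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    ci = (combined - 1.645 * se, combined + 1.645 * se)  # 5%-95% interval
    significant = not (ci[0] <= 0.0 <= ci[1])
    return combined, ci, significant
```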
3. Results
a. Regional changes and agreement
To evaluate the confidence in large-scale patterns of precipitation changes, we use agreement between climate modeling activities. Figure 2 shows the agreement of changes between current climate and 1.5°C or 2°C, across our five MIPs, for mean and extreme precipitation. Agreement here is represented by the number of modeling activities that show a significant change (i.e., the 5%–95% confidence interval not including zero).
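The agreement count and the hatching criterion used in Fig. 2 reduce to a few lines; the values below are hypothetical and for a single region only:

```python
import numpy as np

# Hypothetical per-MIP central estimates and 5%-95% confidence bounds (% change).
central = np.array([1.8, 1.1, 2.5, -0.6, 1.1])
ci_low = np.array([0.5, -0.2, 1.0, -1.5, 0.2])
ci_high = np.array([3.1, 2.4, 4.0, 0.3, 2.0])

significant = (ci_low > 0) | (ci_high < 0)        # interval excludes zero
n_significant = int(significant.sum())            # agreement count mapped in Fig. 2
conflicting = significant.any() and np.unique(np.sign(central[significant])).size > 1
```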
In Fig. 2, regions are marked with hatching where there are conflicting but significant changes from two different MIPs. Encouragingly, this shows that there are only a few regions for mean precipitation (northern Central America, the Sahara, southern East Africa, and southern South America) where two different modeling activities have significant changes with opposite signs, between current climate and 1.5°C. For the changes to 2°C, this is reduced to just southern South America.
CMIP6 is the latest MIP, using current state-of-the-art climate models, and will underpin most of the conclusions described in the IPCC AR6. For this reason, we highlight regions where CMIP6 does not agree with the majority of the other modeling activities on the significance of the changes. In Fig. 2, thick outlines indicate where CMIP6 gives a different sign or significance of change than three of the other four modeling activities. This identifies vulnerable regions, such as some parts of South America or Africa, where using information from CMIP6 alone may misrepresent our confidence in the precipitation changes at 1.5°C of global warming. We note that this does not indicate that CMIP6 gives a significant change of opposite sign to the other MIPs; rather, CMIP6 may give a significant change where most other MIPs show only insignificant changes, or vice versa. However, this is still an important point because it is relevant to the confidence statements produced by the IPCC (or other major reports) that may be considered by decision makers with regard to climate change planning.
Figure 3 shows the percentage changes in mean and extreme precipitation, from the combined central estimate of the five modeling activities. To highlight the confident changes, regions where the combined central estimate gives a significant change are marked with a bold border in Fig. 3. We additionally include the same changes, but calculated in millimeters per day, in Fig. S2 in the online supplemental material. For breakdown by modeling activity, Figs. S3 and S4 in the online supplemental material show the changes and the significance of the central estimates for each MIP, in percentage change and mm day−1 respectively.
From Figs. 2 and 3, we see that the precipitation changes in North America and Eurasia show the strongest agreement, especially at the lower warming level of 1.5°C. For changes in the Southern Hemisphere and some equatorial regions, there is often less agreement. Hence, in these regions, the use of a single modeling activity (as most studies have done) risks creating false confidence in the changes.
Changes in extreme precipitation show a large amount of agreement. At 2°C warming, the majority of modeling activities show confident changes in almost all regions (except the Sahara and Caribbean regions). This higher confidence in extreme precipitation has been reported previously (Allen and Ingram 2002; Fischer et al. 2014; Pendergrass et al. 2015). It arises because thermodynamics dominates extreme precipitation changes, whereas mean precipitation is more strongly influenced by dynamical (i.e., circulation) changes, which are less certain and show more disagreement between models. We also note that there are increases in extreme precipitation in regions that show drying in the mean precipitation. This increase in extreme precipitation could be part of the source of uncertainty in mean precipitation drying, because extreme precipitation contributes different fractions of the total precipitation in different models.
The level of agreement between the different modeling activities in the precipitation changes is also strongly connected to the strength of the changes. Figure S5 in the online supplemental material shows the signal-to-noise ratio for each of the modeling activities, where the noise represents the magnitude of the 5%–95% confidence intervals. Here we see that the areas with the highest agreement also have the strongest signal-to-noise ratio. A useful benchmark against which to measure the magnitude of changes is the internal variability of the system, and Fig. S6 in the online supplemental material shows normalized changes, representing the size of the change relative to the variability simulated by each model. This highlights that at these small levels of global warming, many of the changes are smaller than the year-to-year variability; however, they can be detected confidently by using the large number of samples in these MIPs.
In addition to the agreement in the sign of the precipitation changes, it is relevant to understand whether the uncertainty ranges in changes using each modeling activity overlap. For this, we consider the changes in mean precipitation between current climate and 1.5°C warming. Figure 4 shows the amount of overlap between the confidence interval for each MIP and the confidence intervals calculated for the combined central estimate of the other MIPs. In nearly all regions there is some overlap between the modeling activities, so it is rare for the central estimates of each modeling activity to completely disagree. We note that in Fig. 4, a value of 100% does not necessarily indicate perfect agreement. Instead, it can reflect a larger uncertainty range in the changes for a given MIP that encompasses the combined central estimate for the other MIPs. Part of this may be due to the nature of the combined central estimate, which can have a smaller uncertainty range if the models are in agreement, reflecting the greater number of samples included. Figures S7–S9 in the online supplemental material show similar results for extreme precipitation and 2°C warming.
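The overlap metric can be sketched as below; the normalization (expressing the overlap as a fraction of the combined interval of the other MIPs, so that a wide single-MIP interval encompassing the combined interval scores 100%) is our assumption based on the description of Fig. 4:

```python
def interval_overlap_pct(mip_ci, combined_ci):
    """Overlap between one MIP's 5%-95% interval and the combined interval of the
    other MIPs, as a percentage of the combined interval's width."""
    lo = max(mip_ci[0], combined_ci[0])
    hi = min(mip_ci[1], combined_ci[1])
    overlap = max(0.0, hi - lo)
    return 100.0 * overlap / (combined_ci[1] - combined_ci[0])

# e.g., interval_overlap_pct((-0.5, 3.0), (0.2, 1.8)) -> 100.0
```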
We highlight that the HAPPI activity shows more regions where the central estimate disagrees with the other modeling activities. This may be partly because the large initial-condition ensembles within HAPPI result in smaller uncertainty bounds while not including uncertainty in the ocean and sea ice responses, hence giving overconfident estimates. HAPPI and HELIX also exhibit a tendency to give different results in some northern regions, which may indicate an influence from the prescribed sea ice used in their atmosphere-only simulations. Looking at Fig. 4 and Figs. S7–S9, there is no activity that agrees with the combined result from the other activities in all cases. This finding strongly supports the benefit of considering a range of modeling activities.
b. Partitioning of uncertainty
When considering the confidence of a particular model result, understanding the source of uncertainties can be highly illustrative. We consider three types of uncertainty: sampling uncertainty, intermodel uncertainty, and experimental design uncertainty (the last of which is considered in detail in section 3c).
We consider first sampling uncertainty within a single model projection, calculated as per section 2c. This uncertainty is related to the internal variability in the climate system and the number of years of simulation included in the sample. To reduce the uncertainty in a single model response, modeling centers generate ensembles of simulations, usually produced by initial condition or physics parameter perturbations. We also consider the uncertainty in the central estimates for each MIP. We note that the central estimate uncertainty is not an independent quantity but is calculated on the basis of the confidence intervals of each model, as well as the spread of model changes. We finally consider the combined central estimate uncertainty.
Figure 5 shows these different quantities of uncertainty in the combined projections to 1.5° and 2°C. Four regions are shown as illustrative examples. With regard to sampling uncertainty (i.e., single model uncertainty), HAPPI, which uses large ensembles, has a much smaller sampling uncertainty than CMIP5 and CMIP6, which mostly have fewer than three historical simulations per model (see Tables S1–S3 in the online supplemental material for ensemble sizes). HAPPI simulations also use atmosphere-only models, forced by a single set of prescribed sea surface temperatures, so HAPPI may represent a smaller range of possible futures compared to the full spread of coupled ocean–atmosphere models.
In Fig. 5, the combined central estimate uncertainty is at the lower end of the single MIP central estimate uncertainties. This finding is a result of the construction of the central estimate “narrowing in” on the most plausible response as more samples are available. We note though that this is a purely statistical approach to determining the uncertainty range. In terms of ability to model the climate system, outlier models may be just as plausible, despite lying outside our central estimate uncertainty. Other things that could be considered are model interdependencies (e.g., different models sharing code or components) (Knutti et al. 2013).
We also note that in a commonly pictured view of model uncertainty (Hawkins and Sutton 2009), the model uncertainty in a given variable increases over simulated future times. This increasing spread is partly because different models have different climate sensitivities and therefore warm at different rates. However, because we are examining model projections at specific levels of warming, any first-order differences that are due to climate sensitivity will not be included in our uncertainty estimates. Lehner et al. (2020) showed that model uncertainty for global mean precipitation also increases with global warming, with small differences between CMIP5 and CMIP6, but here we look at uncertainty for a few specific regions. In Fig. 5, we show that while the central estimate uncertainty does generally increase, there are cases where it stays constant or decreases as global warming increases, such as the HAPPI projection of mean precipitation over the Mediterranean or the UKCP18 projections of extreme precipitation over western central Europe. Where the single model (sampling) uncertainty does not show substantial changes, we expect the changes in central estimate uncertainty to relate to model uncertainty. In other regions shown here, the uncertainty is similar or increases as warming rises from 1.5° to 2°C, but this highlights that the use of specific levels of warming can constrain the uncertainty.
c. Differences in experimental design and scenarios
In addition to the uncertainty at the model or MIP level, there is uncertainty due to the experimental design of each modeling activity. The previous section considered the uncertainty across the MIPs; however, this is not the same as the experimental design uncertainty. Because each of the MIPs uses different models (and different generations of models), it is not possible to formally attribute the multi-MIP spread directly to the experimental design. However, the experimental design uncertainty can be related to the choice of scenario and forcing datasets used to run the future projections. The experimental design uncertainties may also involve more structural differences, for example the use of atmosphere-only compared to coupled ocean–atmosphere models, or the choice of a dynamic carbon cycle driven by prescribed emissions rather than prescribed greenhouse gas concentrations.
To isolate the influence of experimental design on the future projections, one approach is to use single-model large ensembles. Where these large ensembles have produced simulations using multiple modeling protocols, we can compare their responses at specific levels of warming. For this analysis we have used the CanESM2 large ensemble (Kirchmeier-Young et al. 2017) using the RCP8.5 scenario from CMIP5 and compared it with the CanAM4 (the atmospheric component of the CanESM2 model) simulations produced using the HAPPI scenarios. Second, we have compared the CESM large ensemble (Kay et al. 2015) using the RCP8.5 scenario from CMIP5 with the CESM low-warming simulations (LowWarm) using emissions pathways designed to stabilize at 1.5° or 2°C (Sanderson et al. 2017b).
Figure 6 shows the comparison between the experimental designs over different regions. Differences for CanESM2 are shown in the upper panel and differences for CESM are shown in the lower panel. The differences shown are for percentage changes in mean precipitation between current climate and 1.5°C, comparing the two experimental designs. Regions that are thickly outlined are where the significance or sign of the change is different between experimental designs. For both models, there is a clear difference over the Americas where the stabilized scenario (HAPPI or LowWarm) becomes wetter relative to the transient RCP8.5 simulations. Similar differences are seen over Asia, although with less consistency. An opposite difference is seen over the North and East African regions, and parts of Australia.
Two factors causing a difference between the stabilized and transient simulations are the differences in non–greenhouse gas forcings such as anthropogenic aerosols, and the differences in the land–sea contrast driven by the land warming faster than the ocean. Anthropogenic aerosols are projected to be significantly reduced by the end of the twenty-first century, which is reflected in the stabilized scenarios. The transient simulations, however, may pass the 1.5°C temperature threshold before the mid-twenty-first century and so will have significantly higher modeled aerosol loads. This may be reflected in the relatively strong differences over East Asia in Fig. 6, particularly for CESM. We note that models with different representations of aerosols will give differing changes, which may be a source of model uncertainty in the multimodel analysis for areas of high aerosol forcing.
We investigate spatial patterns of changes over the oceans in Fig. S10 in the online supplemental material, which is as per Fig. 6 but instead showing model grid cells rather than regional averages. There are strong positive precipitation anomalies on the Pacific equator indicating differences in the Pacific intertropical convergence zone between stabilized and transient simulations. This could be related to differences in the north–south warming contrast between the experiments. Also, in Fig. S10 there is a pattern of wetting over the Atlantic Ocean and drying in the north of Africa in the stabilized experiments relative to RCP8.5. This may be due to the land–sea contrast from the Sahara region warming much faster than the Atlantic Ocean in the transient simulations. In the stabilized experiments, the Atlantic Ocean warming may catch up, causing this difference.
We additionally look into the differences between low- and high-warming scenarios for the CMIP5 and CMIP6 ensembles in Fig. 7. These do show some regions where the significance of the change differs between scenarios. Differences here are important when considering the implications of following a low-emissions pathway, and in these thick-outlined regions (covering large parts of America and Africa) the different scenarios should be evaluated carefully and separately. The differences here are smaller than the single-model differences in Fig. 6, probably because of the differing responses of models within CMIP5 and CMIP6. There are also only a few regions that show notable changes that are consistent between CMIP5 and CMIP6 (e.g., parts of Central America, central Africa, and New Zealand). Other regions have small differences or are not consistent between CMIP5 and CMIP6.
The smaller difference in these scenarios for CMIP5 and CMIP6 in many regions may be attributable to the low warming amount of 1.5°C. The CMIP models are not in equilibrium by the time they reach 1.5°C of global warming, even for the low-warming scenarios, so the comparison in Fig. 7 does not clearly represent an equilibrium versus transient climate in the same way as in Fig. 6. In addition, the differing model responses and the small number of ensemble members make it difficult to identify any signal due to scenario differences for this analysis. Again, this shows the value of the single-model large ensembles used above.
4. Discussion and conclusions
Uncertainty arising from differences between climate modeling activities is often ignored in climate change studies and reports. As these studies form the basis for climate change policy, accounting for this “method uncertainty” is essential for reliable confidence statements about precipitation change.
This article presents a statistical method to combine projected estimates of change from multiple model intercomparison projects. This involves producing a 5%–95% confidence interval, which is used to determine whether a change is statistically significant. This approach has the advantage that the uncertainty range is determined from the sampling uncertainty of each model and the spread across the different model changes, and it does not rely on arbitrary thresholds such as the percentage of models that agree. We argue that using such a method, and evaluating the agreement between model intercomparison projects and the combined central estimate from a range of different projects, gives a quantification of the method uncertainty.
This study shows the agreement in precipitation changes between five different modeling activities. For mean precipitation, just over half of the regions have a significant change in the majority of modeling activities for changes to 2°C. In contrast, for increases in extreme precipitation there are significant changes for the majority of the MIPs almost everywhere by 2°C warming. With regard to the magnitude of possible changes, we also show that there is no single modeling activity that captures the full range of changes estimated by the other MIPs in all cases.
We note that drying is less confidently predicted than wetting. Drying in mean precipitation can occur while extreme precipitation is increasing, which may obscure some of the signal. Another consideration is that the region definitions themselves may not enable identification of drying on smaller spatial scales. The nature of precipitation as a positive quantity also sets an upper bound on the possible amount of drying, particularly in already dry regions, which may cause wetting changes to dominate drying in larger regional averages. It is also possible that the location of the drying regions differs slightly between models, so that calculating a multimodel mean results in a loss of signal (e.g., Knutti et al. 2010). Nonetheless, model spread and disagreement across modeling activities need to be taken into account when evaluating risks associated with these changes. More detailed, seasonal-level analysis of these regions would also supplement these findings.
Furthermore, it is necessary to understand the sources of uncertainty in each of the modeling activities, and the method they use to determine future changes in climate. The CMIP5 and CMIP6 projects provide a large structural sample by including many coupled ocean–atmosphere models but have limited numbers of simulations per model. The HAPPI project contains a range of models and has large ensembles to reduce the sampling uncertainty, but only one representation of possible sea surface temperature change. UKCP18 is dominated by a single model, but one that is from the latest generation of models and is higher resolution than most models in the other MIPs, potentially capturing phenomena not resolved by coarser GCMs. Last, HELIX contains two high-resolution atmospheric models, and spans a range of possible sea surface temperature trends estimated from different CMIP5 models. These factors contribute to different effective degrees of freedom and reliability of each ensemble (e.g., Yokohata et al. 2013), resulting in different estimates of uncertainty and ranges of possible future changes.
To help identify the most likely future changes, increasing the number of models gives a better sampling of the possible climate responses. In this method, including more samples in the central estimate reduces the uncertainty by narrowing in on the forced change (where models agree). However, this does not necessarily remove the possibility of the true changes lying outside our confidence intervals where there are outlier models. Unless there are physical reasons to exclude a particular outlying model, its projections should still be considered plausible. We note that the multimodel “central estimate” changes represent the mean change in the metrics considered and do not span the full model spread including outliers. For purposes of risk assessment, worst-case projections based on the full probability distribution of projections (e.g., Sutton 2019; Quinn et al. 2013) should be used in addition to the central estimate. These can take into account changes in variability and the likelihood of particular extreme events occurring, which is important for decision making. We note that combining projections of extremes from atmosphere-only and coupled ocean–atmosphere model activities could be more problematic, as the SST-forced simulations exhibit a smaller range of variability owing to sampling a smaller range of possible climate states (Fischer et al. 2018). Therefore, a multi-MIP analysis of extreme weather events may benefit from including a method of correcting variability (e.g., Bellprat et al. 2019) or from restricting the analysis to similar model configurations (e.g., coupled models only).
In this study, we chose a method to produce the multimodel central estimates that does not account for model skill. Models have different biases and skill in representing historical climate change. SST forced atmospheric models for example generally have lower biases than coupled models (He and Soden 2016), and model developers are constantly working to improve their model’s performance, which may result in differences between generations of models. Because of this, it may be desirable to weight models, for example on their representation of different aspects of current climate (Sanderson et al. 2017a; Shiogama et al. 2011; Knutti 2010). Including model skill in the analysis could give greater (or lower) weighting to outlying results from models that are better (or worse) at representing a specific phenomenon. The approach of considering all models to be equal is a limitation of our method, and exploring this further will add to the conclusions of this study.
In our analyses, we consider the projections of each MIP equally plausible when combining their estimates. In reality, the projections of specific MIPs are not equal and will have strengths and weaknesses. However, because it is common practice in the scientific literature to base conclusions on a single MIP, we combine these separate estimates without giving one higher consideration than the others. Separate from the issue of how realistic the projections are, there are various interdependencies between the MIPs. These can be due to including models with commonalities (e.g., different generations of the same model or different models with shared components) (Knutti et al. 2013). In addition, the HAPPI and HELIX projects use SST projections based on output from CMIP5, and UKCP18 also includes some results from CMIP5. When combining results from different MIPs, adding independent data sources should increase the confidence of our projections. However, the presence of common information could narrow the uncertainty range in an unrealistic way by treating data with similar origins as independent sources. As such, the combined central estimate should be used to complement an evaluation of the different MIPs rather than replace such an analysis. A future refinement of the methodology used here could take into account factors such as the interdependence of the MIPs, the skill of models within the MIPs, and the ability of each MIP to sample a wide range of plausible future states. We expect that such weighting of MIPs would modify the overall confidence ranges produced by this analysis; however, the details of this weighting are beyond the scope of this work.
Another limitation of combining results from different modeling activities is that the results may be harder to interpret. The combined results do not have the same specificity about the experimental design as do results that, for example, reflect the trajectory of a single future scenario. The combined central estimates presented here reflect possible changes to 1.5° and 2°C, but if there are differences important for policy reasons such as between transient and stabilized climates (e.g., Zappa et al. 2020; King et al. 2020), this may necessitate considering a smaller number of simulations that are relevant to the specific question at hand.
Use of single-model large ensembles also has the potential to disentangle the uncertainty due to differences in model responses and experimental design. In Fig. 6, we use two large ensembles to show differences in precipitation response between transient and stabilized climate scenarios. As more of these ensembles become available (e.g., Deser et al. 2020) they will be a valuable tool for comparing results across MIPs with consistent model structures.
This study emphasizes that analyzing precipitation changes using a single MIP does not fully take advantage of previous modeling work. The IPCC AR6 is likely to focus on results from CMIP6 at the expense of previous activities; however, this may overestimate the confidence in precipitation changes. Furthermore, in some cases, using CMIP6 on its own gives different changes compared to other methods used here. Combining information from different modeling activities will improve our understanding of confidence in the changes and where the uncertainty lies, and such an approach should be adopted when formulating climate policy.
Acknowledgments
We acknowledge the World Climate Research Programme, which, through its Working Group on Coupled Modelling, coordinated and promoted CMIP5 and CMIP6. We thank the climate modeling groups for producing and making available their model output, the Earth System Grid Federation (ESGF) for archiving the data and providing access, and the multiple funding agencies who support CMIP5, CMIP6, and ESGF.
This research used science gateway resources of the National Energy Research Scientific Computing Center, which is a U.S. DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract DE-AC02-05CH11231. The work of author Betts and the production of the HELIX simulations were supported by the European Union Seventh Framework Programme FP7/2007–2013 under Grant Agreement 603864 (HELIX: “High-End cLimate Impacts and eXtremes”; https://www.helixclimate.eu) and the U.K. BEIS/Defra Met Office Hadley Centre Climate Programme (GA01101). Author Bates is supported by a Royal Society Wolfson Research Merit Award. Author Huntingford acknowledges the Natural Environment Research Council National Capability award to the Centre for Ecology and Hydrology. Author King is funded by the Australian Research Council (DE180100638). Author Sanderson is funded by the French National Research Agency, project number ANR-17-MPGA-0016. Author Shiogama was supported by the Integrated Research Program for Advancing Climate Models (JPMXD0717935457) and the Climate Change Adaptation Research Program of NIES.
Data availability statement
Climate simulations used in this study are freely accessible to the public, with the exception of the HELIX data, which are available by request to Richard Betts.
APPENDIX
Climate Model Datasets
a. CMIP5
Phase 5 of the Coupled Model Intercomparison Project (CMIP5) (Taylor et al. 2012) is the modeling effort used as the basis for the IPCC AR5. It involved a large number (>30) of different climate models, and in this study we use the historical simulations and future scenarios following the representative concentration pathways (RCPs) specified in the CMIP5 protocol. The models included in this study and the number of ensemble members used for each level of global warming are given in Table S1 in the online supplemental material.
b. CMIP6
Phase 6 of the Coupled Model Intercomparison Project (CMIP6) (Eyring et al. 2016) is designed to inform the IPCC Sixth Assessment Report. At the time of writing, new simulations are still being added to the CMIP6 archive; therefore, estimates of change using this dataset may change as additional models are included. In this study we use the historical simulations and future scenarios following the Shared Socioeconomic Pathways (SSPs) from the ScenarioMIP activity (O’Neill et al. 2016). The models included in this study and the number of ensemble members used for each level of global warming are given in Table S2 in the online supplemental material.
c. HAPPI
Simulations run for the Half a Degree Additional Warming, Prognosis and Projected Impacts project (HAPPI; Mitchell et al. 2017) are 10-yr atmosphere-only climate simulations, forced by sea surface temperatures (SSTs), sea ice concentration (SIC), and greenhouse gas concentrations. The present-day period used in HAPPI is 2006–15, and it uses observed SSTs from the OSTIA observational dataset (Donlon et al. 2012). SSTs from CMIP5 model output are used to estimate the future scenarios corresponding to 1.5° and 2°C global warming. These simulations are targeted to simulate 1.5° and 2°C warming and so do not require calculation of time slices.
Large ensembles were produced by running simulations with different initial condition perturbations. The models included in this study and the number of ensemble members used for each level of global warming are given in Table S3 in the online supplemental material.
d. UKCP18
The 2018 U.K. Climate Projections (UKCP18) global 60-km product (Murphy et al. 2019) was used. This consists of a perturbed-physics ensemble of 15 HadGEM3-GC3.05 simulations supplemented by 13 CMIP5 projections, each from a different model. These simulations follow the RCP8.5 protocol, and time slices for specific levels of warming were extracted using the same method as for CMIP5 and CMIP6. This dataset was developed to make use of the higher resolution and more complex physics of HadGEM3-GC3.05 relative to the models available in current MIPs.
e. HELIX
High-End Climate Impacts and Extremes (HELIX) was a major research program funded by the European Commission to assess the impacts of climate change at different levels of global warming. It included the production of climate projections using high-resolution global atmospheric models. Two models were used: EC-EARTH3-HR, with a resolution nominally corresponding to 40 km, and the HadGEM3-A Global Atmosphere (GA) 3.0 model (Betts et al. 2018) at a resolution of 60 km. These models were each forced by SSTs and sea ice from six different CMIP5 models, plus an additional earlier model (HadCM3LC) for EC-EARTH3-HR only. This allows the atmospheric models to sample a range of different ocean responses.
The simulations were run from the historical period to 2100 using the RCP8.5 scenario. See Wyser et al. (2017) for details of these simulations. Time slices were chosen for the 1.5° and 2°C specific warming levels as specified in the HELIX methodology. Because specific warming levels of less than 1.5°C were not defined in the HELIX methodology, we used 2000–20 as the current climate time slice.
f. CESM large-ensemble and low-warming simulations
A large ensemble (CESM-LE) of historical and RCP8.5 simulations following the CMIP5 protocol has been computed with the Community Earth System Model (CESM) (Kay et al. 2015). In addition, targeted low-warming simulations with the same model (LowWarm) (Sanderson et al. 2017b) were run for 2006–2100. These simulations use tailored emissions pathways to achieve a stabilized climate at 1.5° or 2°C by 2100. The LowWarm simulations branch from a subset (11) of the CESM-LE historical simulations and so can be considered continuous simulations from 1920 to 2100.
For calculating the warming since preindustrial, we note that one of the historical simulations starts at 1850 but the rest start at 1920, so a base period of 1920–40 was used to calculate the warming since preindustrial for each simulation. To keep consistency with other datasets, the warming between 1861–1900 and 1920–40 from the longer simulation was added to the warming amount relative to the 1920–40 base period.
Comparing the CESM-LE and LowWarm simulations allows a quantification of the difference caused by the experimental design for a given model structure.
g. CanESM2 large ensembles
The CanESM2 model (Arora et al. 2011) also has a large ensemble of coupled model simulations. These were created by branching from the CMIP5 historical simulations at 1950, with different simulations produced by using different random number seed values in the cloud parameterization (Kirchmeier-Young et al. 2017). Historical simulations were run from 1950 to 2005 and then continued using RCP8.5 forcing from 2006 to 2100. To determine the global mean warming since preindustrial conditions for the CanESM2 large ensemble, these simulations were extended back to 1861 using the corresponding CMIP5 simulations.
The atmospheric component of the CanESM2 model was also used in the HAPPI project. This allows an estimate of the influence of the experimental design differences between HAPPI and CMIP5, although this also includes the difference between a coupled atmosphere–ocean model and an atmosphere-only model.
REFERENCES
Allen, M. R., and W. J. Ingram, 2002: Constraints on future changes in climate and the hydrologic cycle. Nature, 419, 228–232, https://doi.org/10.1038/nature01092.
Arora, V. K., and Coauthors, 2011: Carbon emission limits required to satisfy future representative concentration pathways of greenhouse gases. Geophys. Res. Lett., 38, L05805, https://doi.org/10.1029/2010GL046270.
Bellprat, O., V. Guemas, F. Doblas-Reyes, and M. G. Donat, 2019: Towards reliable extreme weather and climate event attribution. Nat. Commun., 10, 1732, https://doi.org/10.1038/s41467-019-09729-2.
Betts, R. A., and Coauthors, 2018: Changes in climate extremes, fresh water availability and vulnerability to food insecurity projected at 1.5°C and 2°C global warming with a higher-resolution global climate model. Philos. Trans. Roy. Soc., 376A, 20160452, https://doi.org/10.1098/rsta.2016.0452.
Brunner, L., A. G. Pendergrass, F. Lehner, A. L. Merrifield, R. Lorenz, and R. Knutti, 2020: Reduced global warming from CMIP6 projections when weighting models by performance and independence. Earth Syst. Dyn., 11, 995–1012, https://doi.org/10.5194/esd-11-995-2020.
Cochran, W. G., 1937: Problems arising in the analysis of a series of similar experiments. Suppl. J. Roy. Stat. Soc., 4, 102–118, https://doi.org/10.2307/2984123.
Collins, M., and Coauthors, 2013: Long-term climate change: Projections, commitments and irreversibility. Climate Change 2013: The Physical Science Basis, T. F. Stocker et al., Eds., Cambridge University Press, 1029–1136.
DerSimonian, R., and N. Laird, 1986: Meta-analysis in clinical trials. Control. Clin. Trials, 7, 177–188, https://doi.org/10.1016/0197-2456(86)90046-2.
Deser, C., A. Phillips, V. Bourdette, and H. Teng, 2012: Uncertainty in climate change projections: The role of internal variability. Climate Dyn., 38, 527–546, https://doi.org/10.1007/s00382-010-0977-x.
Deser, C., and Coauthors, 2020: Insights from Earth system model initial-condition large ensembles and future prospects. Nat. Climate Change, 10, 791, https://doi.org/10.1038/s41558-020-0854-5.
Donlon, C., M. Martin, J. Stark, J. Roberts-Jones, E. Fiedler, and W. Wimmer, 2012: The Operational Sea Surface Temperature and Sea Ice Analysis (OSTIA) system. Remote Sens. Environ., 116, 140–158, https://doi.org/10.1016/j.rse.2010.10.017.
Eyring, V., S. Bony, G. A. Meehl, C. A. Senior, B. Stevens, R. J. Stouffer, and K. E. Taylor, 2016: Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization. Geosci. Model Dev., 9, 1937–1958, https://doi.org/10.5194/gmd-9-1937-2016.
Fischer, E. M., J. Sedláček, E. Hawkins, and R. Knutti, 2014: Models agree on forced response pattern of precipitation and temperature extremes. Geophys. Res. Lett., 41, 8554–8562, https://doi.org/10.1002/2014GL062018.
Fischer, E. M., U. Beyerle, C. F. Schleussner, A. D. King, and R. Knutti, 2018: Biased estimates of changes in climate extremes from prescribed SST simulations. Geophys. Res. Lett., 45, 8500–8509, https://doi.org/10.1029/2018GL079176.
Good, P., B. B. Booth, R. Chadwick, E. Hawkins, A. Jonko, and J. A. Lowe, 2016: Large differences in regional precipitation change between a first and second 2 K of global warming. Nat. Commun., 7, 13667, https://doi.org/10.1038/ncomms13667.
Hawkins, E., and R. Sutton, 2009: The potential to narrow uncertainty in regional climate predictions. Bull. Amer. Meteor. Soc., 90, 1095–1108, https://doi.org/10.1175/2009BAMS2607.1.
Hawkins, E., and R. Sutton, 2011: The potential to narrow uncertainty in projections of regional precipitation change. Climate Dyn., 37, 407–418, https://doi.org/10.1007/s00382-010-0810-6.
He, J., and B. Soden, 2016: The impact of SST biases on projections of anthropogenic climate change: A greater role for atmosphere-only models? Geophys. Res. Lett., 43, 7745–7750, https://doi.org/10.1002/2016GL069803.
Held, I. M., M. Winton, K. Takahashi, T. Delworth, F. Zeng, and G. K. Vallis, 2010: Probing the fast and slow components of global warming by returning abruptly to preindustrial forcing. J. Climate, 23, 2418–2427, https://doi.org/10.1175/2009JCLI3466.1.
IPCC, 2018: Impacts of 1.5°C global warming on natural and human systems. Global Warming of 1.5°C: An IPCC Special Report on the Impacts of Global Warming of 1.5°C above Pre-industrial Levels and Related Global Greenhouse Gas Emission Pathways, in the Context of Strengthening the Global Response to the Threat of Climate Change, Sustainable Development, and Efforts to Eradicate Poverty. V. Masson-Delmotte et al., Eds., IPCC, 175–311.
Iturbide, M., and Coauthors, 2020: An update of IPCC climate reference regions for subcontinental analysis of climate model data: Definition and aggregated datasets. Earth Syst. Sci. Data, 12, 2959–2970, https://doi.org/10.5194/essd-12-2959-2020.
James, R., R. Washington, C.-F. Schleussner, J. Rogelj, and D. Conway, 2017: Characterizing half-a-degree difference: A review of methods for identifying regional climate responses to global warming targets. Wiley Interdiscip. Rev.: Climate Change, 8, e457, https://doi.org/10.1002/wcc.457.
Kay, J. E., and Coauthors, 2015: The Community Earth System Model (CESM) large ensemble project: A community resource for studying climate change in the presence of internal climate variability. Bull. Amer. Meteor. Soc., 96, 1333–1349, https://doi.org/10.1175/BAMS-D-13-00255.1.
King, A. D., D. J. Karoly, and B. J. Henley, 2017: Australian climate extremes at 1.5°C and 2°C of global warming. Nat. Climate Change, 7, 412–416, https://doi.org/10.1038/nclimate3296.
King, A. D., R. Knutti, P. Uhe, D. M. Mitchell, S. C. Lewis, J. M. Arblaster, and N. Freychet, 2018: On the linearity of local and regional temperature changes from 1.5°C to 2°C of global warming. J. Climate, 31, 7495–7514, https://doi.org/10.1175/JCLI-D-17-0649.1.
King, A. D., T. P. Lane, B. J. Henley, and J. R. Brown, 2020: Global and regional impacts differ between transient and equilibrium warmer worlds. Nat. Climate Change, 10, 42–47, https://doi.org/10.1038/s41558-019-0658-7.
Kirchmeier-Young, M. C., F. W. Zwiers, and N. P. Gillett, 2017: Attribution of extreme events in Arctic sea ice extent. J. Climate, 30, 553–571, https://doi.org/10.1175/JCLI-D-16-0412.1.
Knutti, R., 2010: The end of model democracy? Climatic Change, 102, 395–404, https://doi.org/10.1007/s10584-010-9800-2.
Knutti, R., and J. Sedláček, 2013: Robustness and uncertainties in the new CMIP5 climate model projections. Nat. Climate Change, 3, 369–373, https://doi.org/10.1038/nclimate1716.
Knutti, R., R. Furrer, C. Tebaldi, J. Cermak, and G. A. Meehl, 2010: Challenges in combining projections from multiple climate models. J. Climate, 23, 2739–2758, https://doi.org/10.1175/2009JCLI3361.1.
Knutti, R., D. Masson, and A. Gettelman, 2013: Climate model genealogy: Generation CMIP5 and how we got there. Geophys. Res. Lett., 40, 1194–1199, https://doi.org/10.1002/grl.50256.
Lehner, F., C. Deser, N. Maher, J. Marotzke, E. M. Fischer, L. Brunner, R. Knutti, and E. Hawkins, 2020: Partitioning climate projection uncertainty with multiple large ensembles and CMIP5/6. Earth Syst. Dyn., 11, 491–508, https://doi.org/10.5194/esd-11-491-2020.
Manabe, S., R. J. Stouffer, M. J. Spelman, and K. Bryan, 1991: Transient responses of a coupled ocean–atmosphere model to gradual changes of atmospheric CO2. Part I: Annual mean response. J. Climate, 4, 785–818, https://doi.org/10.1175/1520-0442(1991)004<0785:TROACO>2.0.CO;2.
Merrifield, A. L., L. Brunner, R. Lorenz, I. Medhaug, and R. Knutti, 2020: An investigation of weighting schemes suitable for incorporating large ensembles into multi-model ensembles. Earth Syst. Dyn., 11, 807–834, https://doi.org/10.5194/esd-11-807-2020.
Mitchell, D., R. James, P. M. Forster, R. A. Betts, H. Shiogama, and M. Allen, 2016: Realizing the impacts of a 1.5°C warmer world. Nat. Climate Change, 6, 735–737, https://doi.org/10.1038/nclimate3055.
Mitchell, D., and Coauthors, 2017: Half a degree additional warming, prognosis and projected impacts (HAPPI): Background and experimental design. Geosci. Model Dev., 10, 571–583, https://doi.org/10.5194/gmd-10-571-2017.
Murphy, J., and Coauthors, 2019: UKCP18 land projections: Science report. Met Office Tech. Rep., 191 pp., https://www.metoffice.gov.uk/pub/data/weather/uk/ukcp18/science-reports/UKCP18-Land-report.pdf.
O’Neill, B. C., and Coauthors, 2016: The Scenario Model Intercomparison Project (ScenarioMIP) for CMIP6. Geosci. Model Dev., 9, 3461–3482, https://doi.org/10.5194/gmd-9-3461-2016.
Pendergrass, A. G., F. Lehner, B. M. Sanderson, and Y. Xu, 2015: Does extreme precipitation intensity depend on the emissions scenario? Geophys. Res. Lett., 42, 8767–8774, https://doi.org/10.1002/2015GL065854.
Quinn, N., P. D. Bates, and M. Siddall, 2013: The contribution to future flood risk in the Severn Estuary from extreme sea level rise due to ice sheet mass loss. J. Geophys. Res. Oceans, 118, 5887–5898, https://doi.org/10.1002/jgrc.20412.
Sanderson, B. M., M. Wehner, and R. Knutti, 2017a: Skill and independence weighting for multi-model assessments. Geosci. Model Dev., 10, 2379–2395, https://doi.org/10.5194/gmd-10-2379-2017.
Sanderson, B. M., and Coauthors, 2017b: Community climate simulations to assess avoided impacts in 1.5 and 2°C futures. Earth Syst. Dyn., 8, 827–847, https://doi.org/10.5194/esd-8-827-2017.
Seneviratne, S., and Coauthors, 2018: The many possible climates from the Paris Agreement’s aim of 1.5°C warming. Nature, 558, 41–49, https://doi.org/10.1038/s41586-018-0181-4.
Sherwood, S. C., S. Bony, and J.-L. Dufresne, 2014: Spread in model climate sensitivity traced to atmospheric convective mixing. Nature, 505, 37–42, https://doi.org/10.1038/nature12829.
Shiogama, H., S. Emori, N. Hanasaki, M. Abe, Y. Masutomi, K. Takahashi, and T. Nozawa, 2011: Observational constraints indicate risk of drying in the Amazon basin. Nat. Commun., 2, 253, https://doi.org/10.1038/ncomms1252.
Sutton, R. T., 2019: Climate science needs to take risk assessment much more seriously. Bull. Amer. Meteor. Soc., 100, 1637–1642, https://doi.org/10.1175/BAMS-D-18-0280.1.
Taylor, K. E., R. J. Stouffer, and G. A. Meehl, 2012: An overview of CMIP5 and the experiment design. Bull. Amer. Meteor. Soc., 93, 485–498, https://doi.org/10.1175/BAMS-D-11-00094.1.
Tebaldi, C., J. M. Arblaster, and R. Knutti, 2011: Mapping model agreement on future climate projections. Geophys. Res. Lett., 38, L23701, https://doi.org/10.1029/2011GL049863.
Tokarska, K. B., M. B. Stolpe, S. Sippel, E. M. Fischer, C. J. Smith, F. Lehner, and R. Knutti, 2020: Past warming trend constrains future warming in CMIP6 models. Sci. Adv., 6, eaaz9549, https://doi.org/10.1126/sciadv.aaz9549.
Uhe, P., D. Mitchell, P. Bates, C. Sampson, A. Smith, and A. Islam, 2019: Enhanced flood risk with 1.5°C global warming in the Ganges-Brahmaputra-Meghna basin. Environ. Res. Lett., 14, 074031, https://doi.org/10.1088/1748-9326/ab10ee.
Wallemacq, P., and R. House, 2018: Economic losses, poverty & disasters: 1998–2017. Centre for Research on the Epidemiology of Disasters (CRED) and United Nations Office for Disaster Risk Reduction (UNISDR) Tech. Rep., 31 pp., https://www.preventionweb.net/go/61119.
Wyser, K., G. Strandberg, J. Caesar, and L. Gohar, 2017: Documentation of changes in climate variability and extremes simulated by the HELIX AGCMs at the 3 SWLs and comparison to changes in equivalent SST/SIC low-resolution CMIP5 projections. HELIX, https://helixclimate.eu/working-packages/high-resolution-timeslices-and-regional-downscaling-wp3.
Yokohata, T., and Coauthors, 2013: Reliability and importance of structural diversity of climate model ensembles. Climate Dyn., 41, 2745–2763, https://doi.org/10.1007/s00382-013-1733-9.
Zappa, G., P. Ceppi, and T. G. Shepherd, 2020: Time-evolving sea-surface warming patterns modulate the climate change response of subtropical precipitation over land. Proc. Natl. Acad. Sci. USA, 117, 4539–4545, https://doi.org/10.1073/pnas.1911015117.
Zelinka, M. D., T. A. Myers, D. T. McCoy, S. Po-Chedley, P. M. Caldwell, P. Ceppi, S. A. Klein, and K. E. Taylor, 2020: Causes of higher climate sensitivity in CMIP6 models. Geophys. Res. Lett., 47, e2019GL085782, https://doi.org/10.1029/2019GL085782.