The Colorado River basin (CRB) supplies water to approximately 40 million people and is essential to hydropower generation, agriculture, and industry. In this study, a monthly water balance model is used to compute hydroclimatic water balance components (i.e., potential evapotranspiration, actual evapotranspiration, and runoff) for the period 1901–2014 across the entire CRB. The time series of monthly runoff is aggregated to compute water-year runoff and then used to identify drought periods in the basin. For the 1901–2014 period, eight basinwide drought periods were identified. The driest drought period spanned years 1901–04, whereas the longest drought period occurred during 1943–56. The eight droughts were primarily driven by winter precipitation deficits rather than warm temperature anomalies. In addition, an analysis of prehistoric drought for the CRB—computed using tree-ring-based reconstructions of the Palmer drought severity index—indicates that during some past centuries drought frequency was higher than during the twentieth century and that some centuries experienced droughts that were much longer than those during the twentieth century. More frequent or longer droughts than those that occurred during the twentieth century, combined with continued warming associated with climate change, may lead to substantial future water deficits in the CRB.
The Colorado River basin (CRB) encompasses approximately 637 000 km2 and includes parts of seven states in the western United States (i.e., Arizona, California, Colorado, New Mexico, Nevada, Utah, and Wyoming) and northern Mexico (U.S. Bureau of Reclamation 2012). The CRB is one of the most important water supplies in the western United States and provides water for approximately 40 million people, hydropower generation, and over 16 000 km2 of agriculture (Evenson et al. 2018). Notably, almost 90% of the runoff in the CRB is generated in the upper Colorado River basin (UCRB) above the gauge at Lees Ferry (Figure 1).
Managing CRB water supplies during persistent drought (i.e., multiyear drought) is a substantial challenge for water managers. Persistent drought creates a balancing act between water for human use and water needed to maintain the health of ecosystems. Recently, Woodhouse et al. (2016) examined the influence of precipitation, temperature, and antecedent soil moisture on UCRB water year (1 October–30 September of the following year) streamflow over the past 100 years. Results indicated that cool season precipitation explained most of the variability in water-year streamflow; however, spring/summer temperatures and, to a lesser extent, antecedent fall soil moisture, also appeared to have substantial effects on UCRB streamflow under certain conditions. For example, recent droughts have been amplified by increased temperature, which exacerbated the effects of relatively modest precipitation deficits (Woodhouse et al. 2016).
In another recent study, Udall and Overpeck (2017) examined the effect of temperature on UCRB streamflow during the twentieth and twenty-first centuries, as well as in future projections from climate models. Findings indicated that between 2000 and 2014, annual average UCRB streamflow was 19% below the twentieth-century (i.e., 1906–99) mean, making this period the worst 15-yr drought on record. They attribute at least one-third of the decrease in UCRB streamflow during this drought to increased temperature. Similarly, McCabe et al. (2017) reported that, since the late 1980s, increases in temperature have caused a substantial reduction in UCRB runoff efficiency (the ratio of streamflow to precipitation). These temperature-induced flow reductions are the largest documented since record keeping began. McCabe et al. (2017) also indicated that the increases in UCRB temperature over the past three decades have resulted in a mean decrease in UCRB streamflow of 1306 million cubic meters per year (about 7% of mean annual streamflow).
Xiao et al. (2018) performed several experiments using the Variable Infiltration Capacity hydrologic model to examine the cause of UCRB runoff declines during 1916–2014. Xiao et al. (2018) concluded that over half of the long-term decrease in runoff was associated with increases in temperature. Hoerling et al. (2019) used climate models to examine the decline in UCRB streamflow and concluded that long-term declines in precipitation were the primary cause of the declines in UCRB runoff. However, they also reported that increases in temperature accounted for about one-third of the century-long decline in UCRB runoff.
Most recently, Milly and Dunne (2020) examined the effects of warming on UCRB streamflow and reported that annual mean UCRB streamflow has been decreasing by 9.3% per degree Celsius of warming because of increased evapotranspiration, primarily driven by snow loss and a subsequent reduction in surface albedo. Milly and Dunne (2020) further stated that projected precipitation increases likely will not be sufficient to mitigate the robust, thermodynamically induced drying and that there is an increasing risk of severe water shortages in the CRB.
Although there is not complete agreement on the relative magnitudes of the effects of precipitation and temperature on long-term declines in CRB streamflow, all studies indicate that warming temperature is playing a progressively important role in reductions of runoff during drought. Water supply in the CRB is increasingly stressed by warming temperatures that have decreased winter snowpack (Mote et al. 2018), resulted in earlier snowmelt timing (Stewart et al. 2004), and increased evaporation losses in the basin (McCabe et al. 2017; Udall and Overpeck 2017; Milly and Dunne 2020). Furthermore, reconstructions of past climate and streamflow in the CRB indicate that the past century was one of the wettest centuries of the past two millennia, and the more “normal” condition for the basin is drier than the past century. Warming-induced streamflow declines coupled with a potential natural shift to drier conditions with long persistent droughts would result in severely decreased water supply in the CRB (Gray et al. 2004; Meko et al. 2007; Woodhouse et al. 2010). In addition to changes in climate, the basin surface water resources are under increased stress due to consumptive use, which is now approximately equivalent to the twentieth-century mean annual flow (U.S. Bureau of Reclamation 2012; Lall et al. 2018). The combined effect of increased water use, increases in temperature related to global warming, and potential natural shifts to drier conditions suggests a high likelihood of future water supply shortages in the CRB.
To help water managers better manage future water supply in the CRB, it is important to examine CRB drought from many scientific perspectives. Several previous studies have focused on drought primarily in terms of variability in water supply (i.e., streamflow) in the UCRB only, as that is the source region of most CRB water. In this study, rather than focus on the UCRB or lower Colorado River basin (LCRB) separately, we examine drought in the CRB from a basinwide perspective. We also examine CRB drought from a climatic (or atmospheric) supply and demand perspective of relative dryness, in addition to the more traditional assessment of Colorado River flow variability. For this analysis, we are interested in relative dryness stemming from negative precipitation anomalies and/or positive anomalies of climatic (or atmospheric) water demand (i.e., potential evapotranspiration) across the entire CRB. An examination of CRB drought from this point of view has importance not only for understanding natural water supply deficits, but also has implications for understanding changing drought intensity and consequently the potential effects on ecosystem health. The primary objectives of this study are to 1) identify drought events in the CRB during the instrumental period (i.e., 1901–2014), 2) determine the climatic causes of CRB drought events (i.e., precipitation and temperature anomalies), and 3) place the drought events identified for the instrumental period in the context of the past 1700 years.
2. Data and methods
2.1. Monthly temperature and precipitation data
Monthly temperature and precipitation data were obtained from the Parameter–Elevation Regressions on Independent Slopes Model (PRISM) (Oregon State University PRISM Climate Group; http://www.prism.oregonstate.edu) for the period 1895–2014. These data are provided at a 4-km resolution for grid cells across the conterminous United States (CONUS) and were aggregated to 146 U.S. Geological Survey eight-digit hydrologic units (HUs) in the CRB (Figure 1). The monthly temperature and precipitation data are used as inputs to a monthly water balance model to estimate time series of monthly runoff.
Because of the sparseness of meteorological data before about 1950, concern has been raised about the use of PRISM monthly temperature and precipitation data for the early part of the twentieth century (Gibson et al. 2002; McAfee et al. 2018). However, time series of the monthly PRISM temperature and precipitation data have been used in several previous analyses of runoff for sites across the CONUS, including the CRB (McCabe and Wolock 2011; Gangopadhyay et al. 2015). The use of the monthly PRISM temperature and precipitation data in these previous analyses indicated that the PRISM data were useful to simulate runoff for periods starting as far back as the beginning of the twentieth century.
2.2. The water balance model
For this analysis, variability in runoff is used as an indicator of relative wetness and dryness across the CRB. Runoff is the climate-driven water supply to an area that is in excess of the evaporative demand (or climatic demand) for water. Runoff in this study was simulated from temperature and precipitation data for the 146 HUs in the CRB using a monthly water balance model. Simulated runoff from the water balance model is used in the analysis rather than measured runoff because 1) complete records of measured runoff for the CRB are limited in both time and space, 2) the water balance model provides a suite of water balance components (e.g., potential evapotranspiration, actual evapotranspiration, snow water equivalent, soil moisture storage, and runoff) that can be interpreted in terms of hydroclimatic factors, many of which are not quantified at many locations or for long time periods, and 3) the water balance model provides estimates of natural flow (or runoff), whereas measured values may be substantially altered by human influences (e.g., dams, diversions, consumptive water use). Details describing the water balance model, its limitations, and goodness-of-fit measures can be found in McCabe and Wolock (2011).
Soil-moisture storage capacity for each HU was computed using the available water-capacity values from the State Soil Geographic Database (STATSGO) dataset and assuming a 1-m rooting depth [STATSGO soil data are available online (https://www.nrcs.usda.gov/wps/portal/nrcs/site/national/home/)].
Monthly temperature and precipitation for the 146 HUs in the CRB were used as inputs to the water balance model and monthly runoff was computed for each HU. Although the PRISM temperature and precipitation data record begins in 1895, the first few years (i.e., 1895–1900) of the water balance model simulations are not analyzed so that the effects of prescribed initial conditions are minimized in the analyses. Thus, for the analyses presented in this study, water balance model simulations for the period 1901–2014 are used. The water balance model is initialized with soil moisture storage at capacity, snow storage equal to zero, and surplus water equal to zero. Additionally, we use a standard version of the water balance model (McCabe and Wolock 2011) and do not calibrate the model to measured runoff data. Monthly water balance estimates of potential evapotranspiration, actual evapotranspiration, and runoff for the PRISM 4-km grid cells for the CONUS are available in a U.S. Geological Survey data release (Wolock and McCabe 2018).
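The water balance model actually used is documented in McCabe and Wolock (2011). As a rough illustration of the class of model involved, the sketch below implements a heavily simplified Thornthwaite-style monthly accounting with a Hamon-type potential evapotranspiration term. All thresholds and coefficients here (the snow/rain temperature limits, the melt rate, the 150-mm default capacity, routing all surplus to runoff in one month) are illustrative assumptions, not the values or structure of the study's model.

```python
import math

def hamon_pet(temp_c, day_length_hours=12.0, days=30):
    """Hamon-type potential evapotranspiration (mm/month); zero below freezing."""
    if temp_c <= 0.0:
        return 0.0
    esat = 6.108 * math.exp(17.27 * temp_c / (temp_c + 237.3))   # hPa
    rho_sat = 216.7 * esat / (temp_c + 273.3)                    # g m^-3
    return 0.1651 * (day_length_hours / 12.0) * rho_sat * days

def step(temp_c, precip_mm, soil, snow,
         capacity=150.0, t_snow=-1.0, t_rain=3.0, melt_rate=10.0):
    """Advance the balance one month; returns (runoff, soil, snow) in mm."""
    # Partition precipitation into snow and rain by monthly temperature.
    if temp_c <= t_snow:
        snow_frac = 1.0
    elif temp_c >= t_rain:
        snow_frac = 0.0
    else:
        snow_frac = (t_rain - temp_c) / (t_rain - t_snow)
    snow += precip_mm * snow_frac
    rain = precip_mm * (1.0 - snow_frac)
    # Temperature-index snowmelt (melt_rate mm per degree C, an assumption).
    melt = min(snow, max(0.0, temp_c) * melt_rate)
    snow -= melt
    # Meet evaporative demand from rain + melt, then from soil storage.
    supply = rain + melt
    aet = min(hamon_pet(temp_c), supply + soil)
    soil = soil + supply - aet
    # Storage above capacity becomes runoff (the actual model routes surplus).
    runoff = max(0.0, soil - capacity)
    soil = min(soil, capacity)
    return runoff, soil, snow
```

Initializing with soil storage at capacity and zero snow mirrors the initialization described above; the first simulated years would likewise be discarded to reduce sensitivity to these prescribed initial conditions.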
2.3. Water balance model verification
The water balance model has been evaluated and applied in many previous studies (McCabe and Wolock 2008, 2011). These studies include uses of the monthly water balance model for sites across the United States and the globe with a range of climatic and physiographic conditions. Nonetheless, we performed a separate verification of the water balance model for this study. For this verification, we compared monthly runoff estimated by the water balance model for the UCRB, the LCRB, and the entire CRB with measured monthly runoff. [The monthly measured runoff data for the 146 HUs in the CRB were obtained online (http://waterwatch.usgs.gov).] Owing to limited runoff observations prior to the early 1950s, this verification was carried out for the years 1951–2014.
The simulated runoff represents natural runoff resulting from climatic variability, whereas the measured runoff includes climatic variability as well as anthropogenic effects on runoff such as the effects of dams, reservoir operations, and consumptive water use. Correlations between the monthly simulated runoff and monthly measured runoff for water years 1951–2014 for the UCRB, LCRB, and CRB ranged from 0.86 [significant at a 99% confidence level (p < 0.01)] to 0.92 (p < 0.01) (Figure 2). The biases (mean monthly simulated runoff minus mean monthly measured runoff) ranged from −24 to 14 mm (Figure 2). However, negative bias in the UCRB and positive bias in the LCRB offset each other so that bias for the entire CRB is near zero (Figure 2c). Although there are biases in the water balance estimates of runoff, correlations between the simulated runoff and measured runoff indicate that the water balance model reasonably simulates the temporal variability of measured runoff.
Differences between measured and simulated runoff are generally less than expected, given that simulated runoff does not include anthropogenic influences on measured runoff (Figure 2). This result may be due to the magnitudes of anthropogenic effects on measured runoff relative to the climate-driven effects of evapotranspiration. For example, in the CRB, mean annual actual evapotranspiration is approximately 183 000 million cubic meters (mcm) (estimated using the water balance model), whereas mean annual water use is approximately 20 000 mcm (water use was estimated from values presented by Maupin et al. 2018); thus actual evapotranspiration is more than 9 times larger than water use in the CRB. As a result, water balance estimates of runoff (natural runoff) are closer to measured runoff than might be expected because anthropogenic effects, such as water use, on runoff are small compared to the natural evaporative demand.
2.4. Additional data
Drought periods in the CRB identified for the instrumental period (i.e., 1901–2014) were compared with drought periods in previous centuries based on summer (June–August) Palmer drought severity index (PDSI) values for 292 grid points in the CRB that were reconstructed from tree rings (Cook et al. 2010). Summer PDSI reconstructions for the period 300–2005 were obtained using the Cook et al. (2010) Living Blended Drought Atlas (LBDA) (https://www.ncdc.noaa.gov/paleo-search/study/19119). Note that the PDSI values for these time series are computed from measured data after 1978. The PDSI is computed from monthly temperature and precipitation data to estimate relative dryness or wetness. It is a standardized index that ranges from −10 (dry) to +10 (wet). Because of the substantial monthly autocorrelation in the PDSI, and the tree-growth response to seasonal climate in the CRB, this measure largely reflects cool season moisture variability in this region (St. George et al. 2010). Water-year runoff is not the same metric as summer PDSI, but because both are largely a reflection of cool season precipitation, the reconstructed CRB PDSI may provide some insights on the twentieth- and twenty-first-century droughts in a long-term context.
PDSI data for 300–2005 were used because all 292 grid points in the CRB from the Cook et al. (2010) dataset have data for this period of time. It should be noted that the uncertainty of reconstructed PDSI values increases with the age of the chronology (i.e., the further back in time), in part, due to fewer tree-ring chronologies on which to base the PDSI estimates.
2.5. Defining drought
The first step in identifying droughts in the CRB was to standardize the time series of water-year runoff (in millimeters) for each of the 146 HUs so that HUs with higher water-year runoff would not overly influence the identification of basinwide dry years and, subsequently, basinwide drought events. The standardization was accomplished by converting the time series of water-year runoff to percentiles. After converting the time series of water-year runoff for the 146 HUs to percentiles, a mean time series of percentiles for all 146 HUs was computed to generate a mean time series for the CRB (1901–2014). This time series of mean CRB percentiles of water-year runoff was used to identify drought periods. The 292 PDSI time series for the CRB also were converted to percentiles and then averaged to compute a time series of mean PDSI percentiles for the CRB for the period 300–2005.
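As a sketch of this standardization, each HU's water-year runoff series can be converted to rank-based percentiles and the HU series averaged into one basinwide series. The data below are synthetic, and the plotting-position convention (Weibull) is an assumption; the study does not specify which percentile formula was used.

```python
import numpy as np

def to_percentiles(series):
    """Rank-based percentiles (0-100) via the Weibull plotting position."""
    series = np.asarray(series, dtype=float)
    ranks = series.argsort().argsort() + 1      # rank 1 = smallest value
    return 100.0 * ranks / (len(series) + 1)

def basin_mean_percentiles(runoff_by_hu):
    """runoff_by_hu: (n_hu, n_years) array; returns the (n_years,) mean series."""
    pct = np.vstack([to_percentiles(row) for row in runoff_by_hu])
    return pct.mean(axis=0)

# Synthetic stand-in: 146 HUs x 114 water years (1901-2014).
rng = np.random.default_rng(0)
runoff = rng.gamma(2.0, 50.0, size=(146, 114))
mean_pct = basin_mean_percentiles(runoff)
```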
Time series of mean CRB runoff percentiles and mean CRB PDSI percentiles were used as indicators of basinwide CRB dryness and wetness. To evaluate the reliability of these mean time series to represent the variability of CRB runoff and CRB PDSI, the percent variance explained by the first component (PC1) from principal components analyses of the 146 CRB runoff percentile time series and of the 292 CRB PDSI percentile time series for the period 1901–2005 were computed. The 1901–2005 period was used because this period is common to both the CRB runoff data and the CRB PDSI data. The variance explained by PC1 was computed because PC1 generally represents the mean time series for the data examined. The explained variance for PC1 for the CRB runoff percentile time series was ~50%, and the explained variance for PC1 for the CRB PDSI percentile time series was ~78%. Additionally, the component loadings for PC1 of the 146 CRB runoff percentile time series had a median value of 0.74 (p < 0.01) and 25th- and 75th-percentile values of 0.64 (p < 0.01) and 0.78 (p < 0.01), respectively. For PC1 of the 292 CRB PDSI percentile time series, the median loading was 0.91 (p < 0.01) with a 25th percentile of 0.85 (p < 0.01) and a 75th percentile of 0.94 (p < 0.01).
The large percentages of explained variance by PC1 for the CRB runoff and CRB PDSI data suggest that mean time series of CRB runoff percentiles and CRB PDSI percentiles provide a reliable representation of temporal variability of basinwide runoff and PDSI. The Pearson correlation between PC1 of the 146 CRB water-year runoff percentile time series and the mean time series of water-year CRB runoff percentiles is 0.997 (p < 0.01), and the Pearson correlation between PC1 of the 292 CRB PDSI percentile time series and the mean time series of CRB PDSI percentiles is 1.000 (p < 0.01).
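The PC1 check described above can be sketched as follows: compute the leading eigenvector of the covariance among series, the fraction of variance it explains, and the correlation of its score series with the simple mean series. Synthetic data with a shared signal stand in for the 146 HU percentile series; the specific noise level is an assumption.

```python
import numpy as np

def pc1_check(data):
    """data: (n_series, n_years). Return (explained fraction, |r| with mean)."""
    X = data - data.mean(axis=1, keepdims=True)   # center each series in time
    cov = X @ X.T / X.shape[1]                    # covariance among series
    vals, vecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
    explained = vals[-1] / vals.sum()
    pc1_scores = vecs[:, -1] @ X                  # PC1 score time series
    r = np.corrcoef(pc1_scores, data.mean(axis=0))[0, 1]
    return explained, abs(r)                      # a PC's sign is arbitrary

# Synthetic stand-in: 40 series sharing one basinwide signal, 105 years.
rng = np.random.default_rng(1)
common = rng.normal(size=105)
data = common + 0.7 * rng.normal(size=(40, 105))
explained, r = pc1_check(data)
```

With a strong shared signal, PC1 explains most of the variance and its scores track the mean series almost exactly, which is the behavior the correlations of 0.997 and 1.000 above reflect.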
There are many ways to quantify drought, and there is no one “best” or “correct” way to define it. The definition of drought often is influenced by the application of the drought information. The definition we use here resulted, in part, from discussions with the U.S. Bureau of Reclamation and is similar to one of the drought definitions that the U.S. Bureau of Reclamation will use in the SECURE Water 2021 report to the U.S. Congress.
In this study, a drought event was defined according to the following “rules”:
To initiate a drought, one year of below median water-year runoff is required.
There must be at least two consecutive years with below median runoff within a drought period.
There can be one intervening year with above median runoff.
To end a drought, two or more years of above median runoff are required.
A drought period must be at least three years in length.
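These rules can be written down directly; because they leave a few edge cases open (e.g., how a dry spell that runs into the end of the record is closed out), the sketch below is one possible interpretation rather than the exact algorithm used in the study. Input is a basinwide series of water-year runoff percentiles, so the median corresponds to a value of 50.

```python
def find_droughts(percentiles, median=50.0):
    """Return (start_index, end_index) pairs for qualifying drought events."""
    below = [p < median for p in percentiles]
    events, i, n = [], 0, len(below)
    while i < n:
        if not below[i]:
            i += 1
            continue
        start = end = i                  # one below-median year starts a candidate
        j = i + 1
        while j < n:
            if below[j]:
                end = j                  # extend the drought
                j += 1
            elif j + 1 < n and below[j + 1]:
                end = j + 1              # one intervening above-median year allowed
                j += 2
            else:
                break                    # two above-median years end the drought
        # Require >=2 consecutive below-median years and total length >=3.
        has_two_consec = any(below[k] and below[k + 1] for k in range(start, end))
        if end - start + 1 >= 3 and has_two_consec:
            events.append((start, end))
        i = end + 1
    return events
```

For example, the percentile series [60, 40, 40, 55, 40, 70, ...] yields one event spanning indices 1–4: the single above-median year (55) does not end the drought, but the two above-median years that follow do.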
Drought events were characterized by duration and severity (severity is computed as the mean runoff percentile over a drought period). In addition, mean water-year CRB temperature, precipitation, and runoff departures (in millimeters) were computed for each drought.
3. Results and discussion
3.1. Drought events
Using the drought definition described above, eight drought events in the CRB were identified for 1901–2014 (Figure 3). Three of these eight droughts correspond with some of the most well-known droughts in the United States: 1928–36 (the 1930s Dust Bowl), 1943–56 (the 1950s drought), and 2000–09 (the turn-of-the-century drought). The longest of the eight droughts was from 1943 through 1956 (14 years), and the shortest drought was from 1989 through 1991 (3 years) (Figure 3, Table 1). Although only 4 years in length, the coolest, driest, and most severe of the eight drought events (based on precipitation and runoff departures) spanned water years 1901–04 (Figure 3, Table 1). The warmest drought event, and one of the least severe in terms of mean precipitation and runoff departures, occurred from 2000 to 2009 (Figure 3, Table 1). Spearman’s rank correlation between the precipitation and runoff departures for the years covered by the eight droughts (Figure 3, Table 1) is 0.73 (p = 0.041), whereas the rank correlation between temperature and runoff departures for the eight droughts is only 0.02 (p = 0.955). These correlations indicate that precipitation deficits have been the principal climatic driver of the eight droughts (defined by mean percentiles of water-year runoff for the CRB) within the historic observational record.
To examine the magnitude and direction of monthly precipitation and temperature departures during the eight drought events, we computed mean monthly precipitation and temperature departures for each drought. The mean monthly precipitation and temperature departures (12 mean monthly values for each variable) for the eight droughts provided us with a total of 96 mean monthly temperature and precipitation departures to assess. Results of this analysis indicate negative mean monthly precipitation departures for most months during each of the droughts (Figure 4). Of the 96 mean monthly precipitation departures for the eight droughts, 72 (75%) indicate a negative precipitation departure. In contrast, mean monthly temperature departures were both above and below the long-term monthly means (Figure 4), with half of the 96 mean monthly temperature departures being positive. However, for the 2000–09 drought event, temperature departures were positive for all months (Figure 4p). This result is consistent with the findings of Woodhouse et al. (2016), Udall and Overpeck (2017), McCabe et al. (2017), and Milly and Dunne (2020) who suggest that recent drought conditions in the UCRB are associated with increases in temperature.
Figure 5 illustrates the spatial patterns of mean percentiles of water-year runoff for each of the eight CRB drought events. The 1901–04 drought event (Figure 5a) was characterized by mean runoff percentiles below 30 for most of the CRB. For the other seven CRB drought events, the spatial patterns of drought conditions (i.e., low percentiles of mean CRB water-year runoff) are variable. For example, for the 1943–56 and 2000–09 drought events, the most widespread and driest conditions are primarily focused in the LCRB (Figures 5d,h), whereas for the 1959–64 and 1989–91 drought events, the driest conditions appear to have been largely focused in the UCRB (Figures 5e,g). For the 1928–36 drought, the lowest percentiles of runoff appear to have been on the west side of both the LCRB and UCRB (Figure 5c).
3.2. Independent effects of temperature and precipitation on drought
Given that temperature has increased in the UCRB (Udall and Overpeck 2017; McCabe et al. 2017; Milly and Dunne 2020), one of the common questions from water managers concerns the magnitude of the effects of temperature on runoff and drought in the UCRB. To examine the independent effects of temperature and precipitation on runoff and drought in the CRB, we performed experiments with the water balance model. We ran the water balance model for the CRB in three ways: 1) a simulation using varying monthly temperature and precipitation (the complete model), 2) a simulation using varying monthly precipitation and the mean monthly climatology of temperature (the Pvar model, in which the same 12 mean monthly temperature values are used for each year), and 3) a simulation using varying monthly temperature and the mean monthly climatology of precipitation (the Tvar model, in which the same 12 mean monthly precipitation values are used for each year). Comparing output from the complete, Pvar, and Tvar models allows the examination of the individual effects of temperature and precipitation on runoff. For each version of the model (i.e., complete, Pvar, and Tvar) we computed the mean water-year runoff departure from the long-term mean of water-year runoff for each drought period (Figure 6).
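The three model configurations differ only in their inputs, so they can be generated from a single pair of (years × 12 months) arrays. The sketch below builds the input pairs for the complete, Pvar, and Tvar runs; the function name, array layout, and synthetic data are illustrative assumptions, and the water balance model itself is not shown.

```python
import numpy as np

def climatology(monthly):
    """Mean 12-month cycle over all years; monthly has shape (n_years, 12)."""
    return np.asarray(monthly, dtype=float).mean(axis=0)

def experiment_inputs(temp, precip):
    """Build (temperature, precipitation) input pairs for the three runs."""
    temp = np.asarray(temp, dtype=float)
    precip = np.asarray(precip, dtype=float)
    t_clim = np.tile(climatology(temp), (temp.shape[0], 1))
    p_clim = np.tile(climatology(precip), (precip.shape[0], 1))
    return {
        "complete": (temp, precip),  # both inputs vary year to year
        "Pvar": (t_clim, precip),    # precipitation varies, temperature fixed
        "Tvar": (temp, p_clim),      # temperature varies, precipitation fixed
    }

# Synthetic stand-in: 5 years x 12 months of basin inputs.
rng = np.random.default_rng(2)
temp = rng.normal(10.0, 5.0, size=(5, 12))
precip = rng.gamma(2.0, 20.0, size=(5, 12))
runs = experiment_inputs(temp, precip)
```

Running the same model on all three input pairs and differencing the resulting water-year runoff departures isolates the contribution of each climate variable.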
Results from the water balance experiments indicate that runoff departures simulated by the Pvar model are similar in magnitude to those simulated by the complete model for all drought events (Figures 6a,b). In contrast, the runoff departures for the Tvar model are generally small and much smaller in absolute magnitude than are the runoff departures for the complete and Pvar models. This comparison suggests that the runoff departures for the droughts (simulated by the complete model) were largely driven by precipitation departures. Additionally, some of the runoff departures simulated using the Tvar model are positive, likely as a result of cooler than average temperature for these drought events. The largest negative runoff departure for the Tvar model was for the recent 2000–09 drought, the warmest of the eight drought events (Figure 6c, Table 1). This result is consistent with results from previous studies (e.g., Udall and Overpeck 2017; McCabe et al. 2017) and suggests the possibility that temperature has increased to a point in the CRB that the negative effects of temperature on runoff (and drought) will become more apparent during future droughts.
3.3. CRB drought frequency, duration, and magnitude since 300 CE
Drought during the instrumental period (for this study 1901–2014) does not necessarily represent the long-term (i.e., multiple centuries) variability of climate and possible drought occurrence in the CRB. To more fully address CRB climate variability and drought occurrence, we examined drought in the CRB since the year 300 using tree-ring reconstructions of summer PDSI values for the CRB (Cook et al. 2010). We use PDSI data for 292 grid points that are located within the CRB. The time series of PDSI values were converted to percentiles just as was done with the water-year runoff data, and the 292 time series of PDSI percentiles were averaged to compute a mean time series of PDSI percentiles for the CRB. The time series of mean PDSI percentiles for the CRB was used to identify drought events just as was done with the time series of mean runoff percentiles (as explained earlier).
Although it would have been more straightforward to compare simulated runoff with a streamflow reconstruction for the Colorado River, the vast majority of those reconstructions focus on the UCRB. Studies that address flow in the LCRB have reconstructed flow for tributaries such as the Salt, Verde, and Gila Rivers (Smith and Stockton 1981; Meko and Graybill 1995) rather than for the main stem of the Colorado. Reconstructed PDSI is available for the basin as a whole, however. Percentiles of runoff and PDSI are reasonable metrics of water availability, both tending to be greater than 50 when precipitation exceeds potential evapotranspiration, and both are largely a reflection of cool season precipitation. Figure 7 illustrates a comparison of time series of mean percentiles of CRB water-year runoff and mean percentiles of CRB PDSI for 1901–2005 (the period common to both datasets). The Pearson correlation is 0.79 (p < 0.01) and indicates that runoff and PDSI strongly covary for the CRB. Additionally, because the PDSI time series are based on tree-ring reconstructions up through 1978, and then on measured data after 1978, we computed correlations between runoff and PDSI for the 1901–78 and the 1979–2005 periods separately. The Pearson correlation for 1901–78 is 0.79 (p < 0.01), and for 1979–2005 the correlation is 0.95 (p < 0.01). These correlations indicate that the variability of summer PDSI, whether reconstructed from tree rings or computed from measured data, can serve as a proxy for the variability of water-year CRB runoff, providing some insights on the twentieth- and twenty-first-century droughts in a long-term context.
The long-term (300–2005) time series of mean percentiles of CRB PDSI indicates several periods of persistent dry conditions (multiple contiguous years with a PDSI percentile below 50) (Figure 8). To assess how similar the records of twentieth-century drought are in the runoff and PDSI data, droughts were identified based on the rules described earlier, but using only the years common to both series (1901–2005). Drought events identified using the runoff and PDSI data are illustrated for this common time period in Figure 9. Both datasets resulted in the identification of eight drought events during 1901–2005. Because the two time series used to identify drought events are different, there are some differences in the occurrence, length, and severity of the respective eight drought events; however, there is more similarity between the drought events determined using the two datasets than there are differences (Figure 9).
The droughts determined using the runoff data include a total of 54 years during 1901–2005, whereas the droughts determined using the PDSI data include a total of 40 years. Additionally, 67% of the drought years determined using the runoff data are in common with drought years determined using the PDSI data, and 90% of the drought years determined using the PDSI data are in common with the drought years determined using the runoff data. The PDSI data appear useful to identify the occurrence and severity of drought events that are largely similar to those identified using the runoff data. However, for the 1901–2005 period, the droughts defined using PDSI generally appear to be shorter in duration than are the droughts defined using runoff. The tendency for droughts determined using PDSI to be shorter than droughts determined using runoff indicates that drought events determined using PDSI are conservative estimates of drought durations. Drought frequencies determined using PDSI, however, are generally similar to drought frequencies determined using runoff, or possibly biased to higher frequencies, because of the greater variability of PDSI relative to runoff (the standard deviations of percentiles of PDSI and runoff for the 1901–2005 period are 26 and 20, respectively).
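The overlap percentages quoted above reduce to set arithmetic on drought years. A minimal sketch follows; the years in the usage example are placeholders, not the actual drought years from Figure 9.

```python
def overlap_fractions(years_a, years_b):
    """Fractions of A's drought years also in B, and of B's also in A."""
    a, b = set(years_a), set(years_b)
    shared = len(a & b)
    return shared / len(a), shared / len(b)

# Placeholder example: 4 runoff-defined drought years, 3 PDSI-defined.
frac_a_in_b, frac_b_in_a = overlap_fractions(
    [1901, 1902, 1903, 1904], [1903, 1904, 1905])
```

Note the two fractions are asymmetric whenever the two sets differ in size, which is why 67% of runoff drought years overlap PDSI drought years while 90% of PDSI drought years overlap runoff drought years.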
Based on the multicentury PDSI time series for the period 300–1900, 108 drought events were identified (Figure 10). The length of the 108 drought events ranged from 3 to 28 years, with a median drought length of 6 years, and 25th- and 75th-percentile lengths of 5 and 9 years, respectively. The longest drought occurred during 1273–1300, which corresponds with the period during which the Ancestral Puebloans migrated out of the Colorado Plateau region (Benson et al. 2007). For comparison, drought duration computed using PDSI data for 1901–2005 ranged from 3 to 11 years. The maximum drought duration for the 1901–2005 period (11 years) is longer than the 75th percentile of drought durations during the 300–1900 period (9 years) but is much shorter than the longest drought (i.e., 28 years) indicated for 300–1900. Thus, there were some droughts during the 300–1900 period that were longer than the longest drought during the 1901–2005 period, but the longest drought during the 1901–2005 period was comparable to some of the longest droughts (75th-percentile droughts) during the 300–1900 period.
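The duration summary above (median and interquartile lengths) follows directly from a list of drought events. A minimal sketch, using synthetic (start, end) pairs rather than the 108 reconstructed events:

```python
import numpy as np

def duration_stats(events):
    """events: (start_year, end_year) pairs; return (median, p25, p75) lengths."""
    lengths = np.array([end - start + 1 for start, end in events], dtype=float)
    return (float(np.median(lengths)),
            float(np.percentile(lengths, 25)),
            float(np.percentile(lengths, 75)))

# Synthetic events with lengths 3, 5, and 7 years.
stats = duration_stats([(300, 302), (410, 414), (520, 526)])
```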
Drought severity (the mean percentile value of water-year PDSI during a drought event) for the drought events during 300–1900 ranged from 7 (the driest drought) to 49, with a median percentile of 34 and 25th and 75th percentiles of 28 and 38, respectively (Figures 9 and 10). In contrast, for 1901–2005 the drought events identified using PDSI ranged from 20 (driest drought) to 40 (Figure 10). Thus, the driest drought during 1901–2005 had a mean percentile (i.e., 20) that was lower (drier) than the 25th percentile of the droughts during 300–1900 (i.e., 28) but not as dry as the driest drought during 300–1900 (i.e., a mean percentile of 7). Additionally, the least severe drought during 1901–2005 had a mean percentile (i.e., 40) that was higher (slightly wetter) than the 75th percentile of drought events during 300–1900 (i.e., 38).
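The duration and severity statistics above follow from treating a drought event as a run of consecutive dry water years and its severity as the mean percentile over the run. The following is a minimal sketch of that bookkeeping, assuming an event must span at least 3 consecutive years below a percentile threshold; the 3-year minimum matches the shortest events reported above, but the threshold value of 50 is an assumption, as the exact cutoff is not stated in this section:

```python
def drought_events(percentiles, threshold=50, min_length=3):
    """Identify drought events as runs of at least `min_length` consecutive
    years whose water-year percentile is below `threshold` (an assumed cutoff).
    Returns a list of (start_index, duration, severity) tuples, where
    severity is the mean percentile over the event."""
    events, run_start = [], None
    for i, p in enumerate(percentiles):
        if p < threshold:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_length:
                run = percentiles[run_start:i]
                events.append((run_start, i - run_start, sum(run) / len(run)))
            run_start = None
    # Close out a run that extends to the end of the record.
    if run_start is not None and len(percentiles) - run_start >= min_length:
        run = percentiles[run_start:]
        events.append((run_start, len(percentiles) - run_start, sum(run) / len(run)))
    return events

# Toy series: a 4-year dry run (percentiles 20-35) inside wetter years.
series = [60, 70, 20, 25, 30, 35, 80, 55]
print(drought_events(series))  # -> [(2, 4, 27.5)]
```

Sorting the resulting durations and severities then yields the medians and quartiles reported for the 300–1900 and 1901–2005 periods.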
Overall, the CRB drought events during 1901–2005 had similar durations and severity to most of the drought events during 300–1900. However, the multicentury record (i.e., 300–1900) indicates that there have been CRB droughts of much longer duration and greater severity (drier) than those that occurred during the twentieth century and the beginning of the twenty-first century.
We also summarized the frequency, mean duration, and mean severity of drought events by century from 300 to 1999 using the reconstructed PDSI data (Figure 11). For these 17 centuries, drought frequency varied from a minimum of five drought events (sixth century) to a maximum of nine events (seventh and sixteenth centuries), with a median of six droughts per century (Figure 11a). Thus, the six droughts during the most recent complete 100-yr period (i.e., the twentieth century; circled dots in Figure 11) match the median drought frequency per century over the past 17 centuries.
Drought duration ranged from a minimum mean length of 5 years (for the sixteenth and twentieth centuries) to a maximum mean length of 10 years (twelfth century), with a median of 8 years (Figure 11b). The mean drought length for the twentieth century was 5 years, which is equal to the minimum mean length for the past 17 centuries.
Drought severity ranged from a minimum mean severity (most severe drought) of 23 (ninth century) to a maximum mean severity (least severe drought) of 36 (fourth century), with a median severity of 31 (Figure 11c). The mean drought severity for the twentieth century (i.e., 32) is just slightly higher (less severe) than the long-term median severity for the past 17 centuries.
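The per-century summaries plotted in Figure 11 amount to grouping drought events by century and averaging their durations and severities. A sketch of that grouping, under the assumption that each event is assigned to the century in which it begins (the paper does not state how events spanning a century boundary are handled):

```python
from collections import defaultdict

def summarize_by_century(events, years):
    """Group drought events by the century of their start year and report
    frequency, mean duration, and mean severity per century.  `events` is a
    list of (start_index, duration, severity) tuples and `years` maps each
    index to a calendar year.  Assigning an event to its starting century
    is an assumption about the summarization method."""
    buckets = defaultdict(list)
    for start, duration, severity in events:
        century = years[start] // 100  # e.g., 1273 -> 12 (the 1200s)
        buckets[century].append((duration, severity))
    summary = {}
    for century, rows in sorted(buckets.items()):
        n = len(rows)
        summary[century] = {
            "frequency": n,
            "mean_duration": sum(d for d, _ in rows) / n,
            "mean_severity": sum(s for _, s in rows) / n,
        }
    return summary

# Toy example: two events starting in the 1200s and one in the 1300s.
years = list(range(1200, 1400))
events = [(0, 10, 25.0), (50, 6, 35.0), (120, 4, 30.0)]
print(summarize_by_century(events, years))
```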
Water-year CRB runoff estimated using a water balance model for 1901–2014 was used to identify basinwide drought events. We identified eight drought events for the CRB, including the 1930s Dust Bowl drought, the 1950s drought, and the 2000–09 turn-of-the-century drought. The driest drought occurred during 1901–04, when drought conditions were prevalent across almost the entire CRB, whereas the longest drought occurred during 1943–56 and was similarly widespread but less severe on average. An examination of the eight drought events indicated that negative precipitation departures have been the primary driver of CRB droughts. However, the 2000–09 drought had the highest temperature departure of the eight events, and temperature clearly played an important role in reducing runoff during the turn-of-the-century drought. The greater effect of temperature on this most recent drought, compared with earlier droughts, may indicate that temperature is beginning to exert an increasingly important influence on CRB drought as the climate continues to warm.
Using reconstructions of PDSI for the CRB, we also examined the frequency, mean duration, and mean severity of drought events for each century from 300 through 1999. For the most part, the drought events during the instrumental period had durations and severities similar to those identified for the preinstrumental period. This result suggests that meteorological data for drought events during the instrumental period can be used to develop experiments that examine the effects of drought on water supply given current or projected water use and water management. However, the drought events identified for the preinstrumental period indicate that there have been centuries with a higher frequency of droughts, as well as individual droughts of longer duration, than those that occurred during the instrumental period. A recurrence of more frequent or longer droughts, such as those identified for past centuries, in combination with continued warming associated with climate change, likely will have a substantial effect on surface water resources and soil moisture conditions in the CRB.
We thank Jesse Dickinson (U.S. Geological Survey, Arizona) and anonymous reviewers for comments that helped to improve the paper. Research support was provided through the U.S. Geological Survey Water and Land Resources Mission Areas, and the Southwest Climate Adaptation Science Center. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. government.
Data availability statement: Monthly data from the Parameter–Elevation Regressions on Independent Slopes Model (PRISM) are available from the PRISM Climate Group, Oregon State University, and the summer Palmer drought severity index reconstructions from the Cook et al. (2010) Living Blended Drought Atlas (LBDA) are available online (https://www.ncdc.noaa.gov/paleo-search/study/19119).