Drought is common and costly, inflicting billions of dollars of damage annually across the United States. Early warning information, such as climate forecasts and hydrologic data, can help decision makers prepare for drought and reduce drought impacts. But what information do drought managers actually use and need?

A survey of state drought managers1 in the 19 Western Governors' Association (WGA) states investigated drought concerns and impacts, the use and value of drought plans, and the types of early warning information that could inform decisions and reduce drought damages.2 This article provides results and insights from the survey, with recommendations for improving information and drought preparedness.3

Overall, managers are highly concerned about droughts and expect them to become more frequent and severe. Across the West, estimated drought damages ranged from millions to billions of dollars per year per state. All managers said that better early warning information could help reduce drought costs, with an average estimated reduction of 33%.4 This confluence of widespread droughts, a high level of concern, substantial economic damages, and the potential to reduce impacts points to the importance of effective and usable early warning information.

DROUGHT PLANS, INDICATORS, AND TRIGGERS.

Drought plans, a component of drought preparedness, typically include indicators and triggers. Indicators are variables used to define and characterize drought conditions.5 Triggers are specific values of indicators linked to the timing of management responses.6 Indicators can address questions such as, “How do we know it's a drought?” and “How severe is it?”; triggers can address the question, “When do we take action?”
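As an illustration only, and not drawn from any state plan, the sketch below shows one way an indicator and its triggers might be encoded, following the elements a trigger should specify (indicator value or drought level, time period, spatial scale, and whether drought is progressing or receding); all names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class Trigger:
    drought_level: str        # e.g., "severe"
    invoke_value: float       # indicator value that invokes the level (drought progressing)
    revoke_value: float       # indicator value that revokes the level (drought receding)
    time_period_months: int   # accumulation period the indicator refers to
    region: str               # spatial scale, e.g., a climate division

@dataclass
class Indicator:
    name: str                 # e.g., "SPI" or "SWSI"
    triggers: list[Trigger]

# Hypothetical example: a 3-month SPI trigger that invokes "severe" drought at -1.5
# and revokes it once the index recovers to -1.0 for the same region.
spi = Indicator("SPI", [Trigger("severe", -1.5, -1.0, 3, "Climate Division 2")])
print(spi)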

All states but one had developed a drought plan. But 16 of these 18 states did not use the indicators in their plans, describing them as “useless” and “essentially ignored.” Managers elaborated: “We're just not following this”; “We got some emergency funding [to develop a plan], so we held a workshop, but we haven't really used the plan”; “How did we determine the indicators? It was a guess”; “We borrowed them from another state”; “We have no way of knowing if our indicators and triggers are any good.”

Most states selected their indicators randomly, “out of a hat,” without knowing whether they “worked.” Once selected, indicators suffered from neglect: a lack of relevance and credibility led to a lack of use and evaluation, which in turn perpetuated the uncertainty about whether the indicators were effective.7

If indicators were actually implemented as specified in plans, they would provide confusing, if not conflicting, guidance. Plans typically listed multiple indicators, but the levels used to define drought were not statistically consistent. For instance, a “severe drought” probability is 6.7% for the Standardized Precipitation Index (SPI), 14% on average for the Surface Water Supply Index (SWSI), and 10% on average for the Palmer Drought Severity Index (PDSI),8 and even individual indicators, such as the latter two, are not statistically consistent across different time periods and locations. While different indicators would naturally register different values during a drought, the inconsistency lies in how each one defines a given level of severity. Also, most plans did not specify when a drought level would be invoked or revoked, or, with multiple indicators, whether one, a majority, or all of them must reach that level. More generally, indicators were difficult for managers to decipher and understand in terms of “drought on the ground.” As one manager said, “What does a −1.5 index value really mean?”
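To make the mismatch concrete, consider a small worked example. The SPI is standardized to follow a normal distribution, so the climatological probability of “severe drought” (commonly SPI ≤ −1.5) follows directly from the normal distribution, whereas thresholds for other indices carry no such fixed probability and must be estimated empirically for each location and season. The minimal sketch below simply computes that probability; it is illustrative only.

from math import erf, sqrt

def standard_normal_cdf(x: float) -> float:
    """Cumulative probability of a standard normal variable, Phi(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# SPI values are standardized, so the chance of falling at or below -1.5 is fixed:
p_spi_severe = standard_normal_cdf(-1.5)
print(f"P(SPI <= -1.5) = {p_spi_severe:.3f}")  # ~0.067, the 6.7% figure cited above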

Though drought means different things to different people, resulting in innumerable indicators and information needs, managers were remarkably consistent in describing desirable attributes of indicator information. Managers wanted to see a range of indicators, separated but all in one place, in comparable and consistent terms, scalable across different time periods and regions, and relative to historic drought conditions, from which they could “pick and choose” to assess, compare, and communicate drought conditions.

One approach that managers supported was percentiles (“great idea”; “you know, that would be really handy”), which would offer statistical consistency among different indicators and their different temporal and spatial scales, historical context, and clarity.9 Managers could also define their own levels of drought severity, based on percentiles, rather than using arbitrary index values. Percentile-based indicators would need to be distinct, rather than preweighted or combined, so that decision makers could see each one and tailor them to their specific needs.
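A minimal sketch of the percentile idea, using made-up data: express the current indicator value as a percentile of the historical record for the same period and region, so that different indicators become directly comparable and managers can define their own severity levels (e.g., “severe” below the 5th percentile). The function and data below are hypothetical.

import numpy as np

def to_percentile(current_value: float, historical_values: np.ndarray) -> float:
    """Percentage of historical values at or below the current value (0-100)."""
    return 100.0 * float(np.mean(historical_values <= current_value))

# Made-up example: a current 3-month precipitation total of 45 mm compared with
# 50 years of historical totals for the same 3-month window and region.
history = np.random.default_rng(0).gamma(shape=4.0, scale=20.0, size=50)
print(f"{to_percentile(45.0, history):.0f}th percentile of the historical record")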

The U.S. Drought Monitor evoked ambivalence among managers. While all were familiar with the product, 10 did not use it (“they should remove [our state] from the map”), six were mixed (“it's the only thing out there” but “it does not reflect the reality of drought”), and three relied heavily on it (“it does a great job”). Managers noted strengths, limitations, and recommendations. Strengths include “timely, done every week” and “attempts to integrate a number of factors into a convenient format.” Limitations include “lacks understanding about drought on the ground”; “the loudest voice gets the most attention”; and “not developed for the county level, but used for county-by-county drought determination.” Recommendations include “more people involved in the Drought Monitor; problem is, they don't quite get it right; need more voices . . . throughout the state”; “more regional and local information”; and “indicators disaggregated—and to a finer scale.”

Aside from indicators, plans nonetheless had value: “For detailing people involved”; “Mostly for agency coordination; who they need to contact”; “For having a system in place, rather than trying to create it on the fly”; “Great compilation of information—who does what—who has responsibility and functions.” But in the end, most plans were not regularly used, tested, or revised: “The plan is in dire need of updating; we do not use”; “Operational, no”; “Just a reporting mechanism”; “Used, then ignored.” One manager offered an exception: “We're continually updating, making notes in the margin during a drought.” Another manager invoked a familiar maxim: “Plans are worthless, but planning is everything.”10

HARD OR SOFT TRIGGERS.

Of the 16 state plans with indicators, 8 had explicit triggers, and only 1 state used its triggers. Managers nonetheless had experience with and opinions about “hard triggers” (definite numerical thresholds for drought levels and associated actions) versus “soft triggers” (more subjective and nuanced assessments of drought). Managers saw merits in both approaches, especially a combined approach. As they explained:

“Numerical is critical—but on the ground expertise is also important”; “Hard numbers initiate action, and the action is to begin a dialogue”; “Hard to get arms around drought—it has to be informed by empirical information—then massaged”; “Can't use hard and fast indicators, drought is local”; “Hard triggers can justify actions, but your public can hold you more accountable than you'd like; why you did or did not take actions [to declare a drought].”

With either approach, managers needed “political cover” for drought decisions. Hard triggers offer a quantitative and justifiable basis for decision making, but numbers may not reflect the reality of drought. Soft triggers offer flexibility without being tied to numbers, but could make it more difficult to explain drought assessments or declarations.
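One reading of the combined approach managers describe—a hard number that initiates action, where the action is to begin a dialogue—is sketched below with hypothetical names and thresholds; it is not drawn from any state plan.

def evaluate_trigger(indicator_percentile: float, hard_threshold: float = 10.0) -> str:
    """A hard numeric trigger starts a dialogue; the declaration itself remains a judgment call."""
    if indicator_percentile <= hard_threshold:
        return "convene drought task force; review field reports before any declaration"
    return "continue routine monitoring"

# Hypothetical example: an indicator at the 8th percentile crosses the hard trigger,
# which initiates review rather than an automatic declaration.
print(evaluate_trigger(8.0))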

On that point, drought declarations can be “both desired and feared,” depending on the perspective. “Municipalities hate a drought declaration, [it] affects revenues, water sales; nurseries hate a drought declaration, people aren't buying plants”; “A drought declaration may be based on crop losses—but people hear ‘drought’ and they don't come for white water rafting because of concerns about streamflow.” On the other hand, “declarations help those who seek federal assistance.”

LINKS BETWEEN STATE PLANS AND LOCAL PLANS.

In theory, one might imagine connections between state and local plans. In practice, state plans and local plans are often “completely unrelated.” Of the 19 states, 14 had local plans, but only one had substantive coordination between the state and local plans. One reason is that local plans are typically water conservation plans or water use restriction plans rather than drought plans per se. Another reason is that states feel they are not in a position to tell local utilities when to impose water use restrictions. “Local declaration could feed into state evaluation. But state declaration would not trump a local level plan.” A third reason is the sheer number of local utilities and the magnitude of coordination and communication. “We have 6,000 [local] plans—[a] huge problem—can't keep track of them all.”

States emphasized the importance of local capacity building. “We meet once a year with county committees, help them develop their own plans, especially those more vulnerable, [and] help local communities take ownership.” A major concern is “How do we assist small systems and rural communities? In our state, 95% of the drought impacts happen to 5% of the population.”

For understanding impacts, managers stressed the value of “field intelligence” and talking with local experts and stakeholders across the state. “We put a lot of weight on data from people on the ground”; “I rely on the network, I make a lot of phone calls, get a handle on what's on the ground”; “People are the most important [aspect of drought]. They are also the most ignored.”

EARLY WARNING INFORMATION.

If better early warning information can help reduce drought impacts, what information is needed? Managers offered a range of needs, with common themes.

Historical analogs.

“What we need is a statistical analysis of drought events as we're trending into a drought event [to] measure against historic drought”; “People say worst drought in memory, be able to quantify that”; “Is it going to be a short-term or longer drought?”; “How is this year shaping up—is there a year in the past that looks similar?” Analogs can provide not only historical comparisons, but also a forecasting capability by comparing the current year to previous years that had similar conditions.
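A minimal sketch of that analog search, using made-up data: rank past years by how closely their conditions-to-date (here, standardized snowpack, precipitation, and streamflow anomalies) match the current year, and use the nearest years as historical context and a rough guide to how the season might unfold. The function, years, and values are hypothetical.

import numpy as np

def closest_analog_years(current: np.ndarray, past: dict[int, np.ndarray], k: int = 3) -> list[int]:
    """Return the k past years whose condition vectors are nearest (Euclidean distance) to the current year."""
    distances = {year: float(np.linalg.norm(current - vec)) for year, vec in past.items()}
    return sorted(distances, key=distances.get)[:k]

# Made-up standardized [snowpack, precipitation, streamflow] anomalies for three past years:
past_years = {1977: np.array([-2.1, -1.8, -1.9]),
              2002: np.array([-1.6, -1.2, -1.4]),
              1983: np.array([1.9, 1.5, 2.0])}
current_year = np.array([-1.7, -1.5, -1.6])
print(closest_analog_years(current_year, past_years, k=2))  # the two most similar dry years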

Monitoring data.

“Our job is to be out there in front, with good monitoring, so we can mitigate impacts—but we need more data points”; “Need better monitoring data—precipitation, soil moisture”; “More Snotel sites, more stream gauging stations, more soil moisture data.” Monitoring data can also serve as forecasts, both in terms of the indicator itself (e.g., snowpack can be a harbinger of future runoff) and for developing trajectories of future conditions based on current states.
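A minimal sketch of such a trajectory, using made-up numbers: start from the observed accumulation to date and append each historical year's remainder-of-season amounts to produce a range of plausible end-of-season totals. The function and values are hypothetical and illustrative only.

import numpy as np

def projected_totals(observed_to_date: float, historical_remainders: np.ndarray) -> np.ndarray:
    """Plausible end-of-season totals: current accumulation plus each historical remainder."""
    return observed_to_date + historical_remainders

# Made-up numbers: 180 mm of precipitation observed so far this water year, with
# remainder-of-season amounts taken from five past years.
remainders = np.array([60.0, 95.0, 40.0, 120.0, 75.0])
totals = projected_totals(180.0, remainders)
print(np.percentile(totals, [10, 50, 90]))  # spread of plausible season totals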

Forecasts.

Managers emphasized that “we need predictive information, especially seasonal to interannual predictions” but “we don't use forecasts that are out there.” Reasons for nonuse varied, including lack of clarity, relevance, trust, or accuracy in forecasts: “Very confusing—I don't talk meteorologist talk”; “Forecasts are high altitude”; “Need greater level of trust [in forecasts] so it could be translated into . . . on-the-ground changes”; “Wish we could have more confidence [in the forecasts]—to ring the bell louder.”

Communication.

Managers need “information that's intuitive enough to operate without a manual”; “data—not just maps—the data underlying the maps”; “drought reports on same time schedule”; “finer resolution—pinpoint problems within the area”; “information presented in a way that makes sense to the public.”

NIDIS: USE, VALUE, AND NEED.

According to the National Integrated Drought Information System (NIDIS) Act of 2006, a primary goal of NIDIS is to provide an “effective drought early warning system . . . to make usable, reliable, and timely drought forecasts and assessments . . . to engender better informed and more timely decisions thereby leading to reduced impacts and costs.” NIDIS resources and programs include the U.S. Drought Portal; regional pilots for drought early warning information systems; working groups with stakeholders such as Engaging Preparedness Communities; drought monitoring and forecasting tools; and integrated research, education, and outreach activities to improve drought preparedness and awareness.

Managers affirmed the importance of NIDIS, noting that it provides “[a] needed federal role to improve drought preparedness”; “[a] centralized source of drought information”; “[an] authoritative, trusted resource”; “one-stop shopping”; and “[a] regional and national view of drought.” Managers also emphasized the need for ongoing support of NIDIS to fully develop and operationalize its benefits: “NIDIS has great potential to provide what decision makers need, but it needs time to develop trust and a user community.”

Pointing to the use and value of early warning information, managers described how NIDIS information could help them make decisions to reduce costs. As examples: “I'd love to know in advance whether it's going to be a wet or dry winter. Then [I] would know how much to release from reservoirs. [I] would also move feed from water-rich to water-poor areas, so cattle don't go without feed. Savings, could reduce millions in costs.” “If we had good early warning information, [we] could drill wells, plant different crops, get funding. I need it six months ahead of the time that discernible impacts show. Could reduce billions of dollars in costs.”

Managers believed NIDIS could draw on and bring together existing state capabilities for mutual benefit. “States like to see what other states are doing.” “NIDIS needs to hook into states' resources.” “We've already pulled information from everywhere—people throughout the state really use our website.” Some managers also believed NIDIS could help close the gap between drought researchers and decision makers. “We [decision makers] need to find a way to get more involved in how they [researchers] are developing the kind of information they're developing”; the end result is “a lot more valuable.”

Another advantage of NIDIS, according to managers, is that it can help with “messaging to the public” and “promoting awareness and support for drought preparedness.” As they elaborated: “What frustrates me is that interest in drought planning and mitigation wanes so easily—the public memory is so short—a little wet weather, it goes away. But public attitude drives what we do, and funding”; “Benefits of [drought information]—it keeps drought in people's minds, and the media”; “Surprising how far-reaching drought is, how many people contact me”; “When it rains—it just delays when the next drought comes.”

CONCLUSIONS AND RECOMMENDATIONS.

Drought inflicts damage on every state in the West. Based on this survey, drought managers attest that they need better monitoring and forecast information to prepare for drought more effectively and reduce impacts. But “better” does not necessarily mean more. First, though nearly all states had drought plans and ample access to indicator data, virtually none used the indicators specified in their plans. States need guidance and methods to develop and select useful indicators, to test them, and to evaluate and improve their effectiveness. Second, in the quest to provide early warning information, the expertise and experience of drought managers should be tapped; it is important to understand the information they need to reduce impacts, and to develop it with them, which ultimately can promote its operational use and economic value. Third, state drought plans can link indicators with actions to reduce drought impacts. But plans without planning can become just paper. Plans need to be part of a broader and ongoing process so that early warning information can be tested and trusted, lessons from droughts can be incorporated and institutionalized, and preparedness can fulfill its potential.

ACKNOWLEDGMENTS

I thank the 19 state drought managers for valuable contributions to this study, and collaborators including Dan Cayan, Kelly Redmond, Ed Miles, Melissa Finucane, Brad Udall, Gregg Garfin, Dan White, Michael Hayes, James Verdin, and Roger Pulwarty. I also thank Kelly Redmond, Michael Hayes, Dan Cayan, Jeanine Jones, Amy Davis, and two referees for their very helpful reviews. This study received support from the Joint Institute for the Study of the Atmosphere and Ocean (JISAO) under NOAA Cooperative Agreement No. NA17RJ1232, Contribution No. 14795, and from the California-Nevada Applications Program (CNAP), National Oceanic and Atmospheric Administration grant NA11OAR4310150.

FOR FURTHER READING

National Climatic Data Center, 2013: Billion-Dollar Weather/Climate Disasters. [Available online at www.ncdc.noaa.gov/billions.]

National Integrated Drought Information System, 2013: U.S. Drought Portal: “What is NIDIS?”

National Integrated Drought Information System, 2006: National Integrated Drought Information System Act of 2006. Public Law 109-430, 15 U.S.C. 311; 15 U.S.C. 313d. [Available online at www.gpo.gov/fdsys/pkg/PLAW-109publ430/pdf/PLAW-109publ430.pdf.]

Steinemann, A., and L. Cavalcanti, 2006: Developing multiple indicators and triggers for drought plans. J. Water Resour. Plan. Manage., 132, 164–174, doi:10.1061/(ASCE)0733-9496(2006)132:3(164).

Steinemann, A., M. Hayes, and L. Cavalcanti, 2005: Drought indicators and triggers. Drought and Water Crises: Science, Technology, and Management Issues, D. Wilhite, Ed., CRC Press, 71–92.

Footnotes

1This article uses the term “state drought managers” to refer to individuals with responsibilities for statewide decisions regarding drought.

2The survey included telephone interviews with the designated drought manager in each state and analysis of state drought plans. The interview instrument contained 50 structured and semistructured questions. Interviews lasted approximately one hour each, and resulted collectively in more than 2,000 pieces of data and interview quotations, which were coded, analyzed, and validated, using standard qualitative research methods. Interviews were conducted during November 2011–January 2012. Managers were experienced, with a median of 20 years in their current or related position. The survey instrument can be obtained by contacting the author.

3Direct statements from individual managers are provided in quotation marks.

4These assessments were in response to the questions: “What would you estimate to be the costs of drought in a typical drought year?” and “If you had better early warning information, what percentage of those costs could be reduced?”

5Indicators are typically based on meteorological and hydrological variables, such as precipitation and streamflow, but can be based on any variable that influences drought, such as economics and regulations, or that relates to drought impacts, such as extent of fallowed land.

6Triggers should specify the indicator value or drought level, time period, spatial scale, and whether for drought progressing or receding (see Steinemann et al. 2005).

7While criteria for evaluating indicator effectiveness may vary, a general test is whether the indicator provides decision-making value, such as early warning and sound guidance for reducing drought impacts.

8Among the 16 plans, the most common indicators were the PDSI, SPI, and SWSI (in 14, 12, and 8 plans, respectively).

9Percentiles can be calculated in different ways, although not all indicator data are amenable to percentile calculations.

10Eisenhower, Dwight D., from a speech to the National Defense Executive Reserve Conference in Washington, D.C. (14 November 1957).