Search Results

Showing items 11–20 of 34 for Author or Editor: Andrew W. Wood
Konstantinos M. Andreadis, Elizabeth A. Clark, Andrew W. Wood, Alan F. Hamlet, and Dennis P. Lettenmaier

Abstract

Droughts can be characterized by their severity, frequency and duration, and areal extent. Depth–area–duration analysis, widely used to characterize precipitation extremes, provides a basis for the evaluation of drought severity when storm depth is replaced by an appropriate measure of drought severity. Gridded precipitation and temperature data were used to force a physically based macroscale hydrologic model at 1/2° spatial resolution over the continental United States, and to construct a drought history from 1920 to 2003 based on the model-simulated soil moisture and runoff. A clustering algorithm was used to identify individual drought events and their spatial extent from monthly summaries of the simulated data. A series of severity–area–duration (SAD) curves were constructed to relate the area of each drought to its severity. An envelope of the most severe drought events in terms of their SAD characteristics was then constructed. The results show that (a) the droughts of the 1930s and 1950s were the most severe of the twentieth century for large areas; (b) the early 2000s drought in the western United States is among the most severe in the period of record, especially for small areas and short durations; (c) the most severe agricultural droughts were also among the most severe hydrologic droughts; however, the early 2000s western U.S. drought occupies a larger portion of the hydrologic drought envelope curve than does its agricultural companion; and (d) runoff tends to recover in response to precipitation more quickly than soil moisture, so the severity of hydrologic drought during the 1930s and 1950s was dampened by short wet spells, while the severity of the early 2000s drought remained high because of the relative absence of these short-term phenomena.
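The core SAD idea — average a drought-severity measure over a growing area and watch severity decline — can be sketched in a few lines. This is an illustrative toy, not the paper's exact metric or clustering algorithm; the deficit values and the `sad_severity` helper are invented.

```python
import numpy as np

def sad_severity(deficit, cells, months):
    """Mean severity for one drought event: the average normalized
    soil-moisture deficit over a set of grid cells and months.
    Hypothetical helper for illustration only."""
    return float(deficit[np.ix_(months, cells)].mean())

# Toy 4-month x 3-cell deficit field (values in [0, 1]) for one
# clustered drought event; cell 2 is only mildly dry.
deficit = np.array([
    [0.8, 0.7, 0.1],
    [0.9, 0.6, 0.2],
    [0.7, 0.5, 0.1],
    [0.4, 0.3, 0.0],
])

# Severity drops as the area grows to include less-dry cells --
# the basic downward shape of a severity-area curve.
s_core = sad_severity(deficit, cells=[0, 1], months=range(4))
s_full = sad_severity(deficit, cells=[0, 1, 2], months=range(4))
```

Repeating this for many durations and areas, then taking the outer envelope across events, yields the SAD envelope curves described in the abstract.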

Full access
Louise Arnal, Andrew W. Wood, Elisabeth Stephens, Hannah L. Cloke, and Florian Pappenberger

Abstract

Seasonal streamflow prediction skill can derive from catchment initial hydrological conditions (IHCs) and from the future seasonal climate forecasts (SCFs) used to produce the hydrological forecasts. Although much effort has gone into producing state-of-the-art seasonal streamflow forecasts from improving IHCs and SCFs, these developments are expensive and time consuming and the forecasting skill is still limited in most parts of the world. Hence, sensitivity analyses are crucial to funnel the resources into useful modeling and forecasting developments. It is in this context that a sensitivity analysis technique, the variational ensemble streamflow prediction assessment (VESPA) approach, was recently introduced. VESPA can be used to quantify the expected improvements in seasonal streamflow forecast skill as a result of realistic improvements in its predictability sources (i.e., the IHCs and the SCFs)—termed “skill elasticity”—and to indicate where efforts should be targeted. The VESPA approach is, however, computationally expensive, relying on multiple hindcasts having varying levels of skill in IHCs and SCFs. This paper presents two approximations of the approach that are computationally inexpensive alternatives. These new methods were tested against the original VESPA results using 30 years of ensemble hindcasts for 18 catchments of the contiguous United States. The results suggest that one of the methods, end point blending, is an effective alternative for estimating the forecast skill elasticities yielded by the VESPA approach. The results also highlight the importance of the choice of verification score for a goal-oriented sensitivity analysis.

Full access
Andrew W. Wood, Tom Hopson, Andy Newman, Levi Brekke, Jeff Arnold, and Martyn Clark

Abstract

Water resources management decisions commonly depend on monthly to seasonal streamflow forecasts, among other kinds of information. The skill of such predictions derives from the ability to estimate a watershed’s initial moisture and energy conditions and to forecast future weather and climate. These sources of predictability are investigated in an idealized (i.e., perfect model) experiment using calibrated hydrologic simulation models for 424 watersheds that span the continental United States. Prior work in this area also followed an ensemble-based strategy for attributing streamflow forecast uncertainty, but focused only on two end points representing zero and perfect information about future forcings and initial conditions. This study extends the prior approach to characterize the influence of varying levels of uncertainty in each area on streamflow prediction uncertainty. The sensitivities enable the calculation of flow forecast skill elasticities (i.e., derivatives) relative to skill in either predictability source, which are used to characterize the regional, seasonal, and predictand variations in flow forecast skill dependencies. The resulting analysis provides insights on the relative benefits of investments toward improving watershed monitoring (through modeling and measurement) versus improved climate forecasting. Among other key findings, the results suggest that climate forecast skill improvements can be amplified in streamflow prediction skill, which means that climate forecasts may have greater benefit for monthly-to-seasonal flow forecasting than is apparent from climate forecast skill considerations alone. The results also underscore the importance of advancing hydrologic modeling, expanding watershed observations, and leveraging data assimilation, all of which help capture initial hydrologic conditions that are often the dominant influence on hydrologic predictions.
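A skill elasticity, as described above, is just the local derivative of streamflow forecast skill with respect to skill in one predictability source. A minimal sketch, assuming invented skill values (the real study varies hindcast information levels for 424 watersheds):

```python
import numpy as np

# Illustrative skill elasticity: the slope of streamflow forecast skill
# with respect to skill in one predictability source (here, climate
# forecasts), estimated by a least-squares fit across experiment levels.
# All numbers below are invented for illustration.
climate_skill = np.array([0.00, 0.25, 0.50, 0.75, 1.00])  # imposed in experiment
flow_skill    = np.array([0.10, 0.28, 0.47, 0.63, 0.82])  # resulting flow skill

slope, intercept = np.polyfit(climate_skill, flow_skill, 1)
# A slope near (or above) 1 would mean climate-forecast skill gains are
# preserved (or amplified) in streamflow forecast skill.
```

Comparing this slope against the analogous slope for initial-condition skill indicates which investment (watershed monitoring vs. climate forecasting) pays off more in a given region and season.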

Full access
Louise Crochemore, Maria-Helena Ramos, Florian Pappenberger, Schalk Jan van Andel, and Andrew W. Wood

Abstract

The use of probabilistic forecasts is necessary to take into account uncertainties and allow for optimal risk-based decisions in streamflow forecasting at monthly to seasonal lead times. Such probabilistic forecasts have long been used by practitioners in the operation of water reservoirs, in water allocation and management, and more recently in drought preparedness activities. Various studies assert the potential value of hydrometeorological forecasting efforts, but few investigate how these forecasts are used in the decision-making process. Role-playing games can help scientists, managers, and decision-makers understand the extremely complex process behind risk-based decisions. In this paper, we present an experiment focusing on the use of probabilistic forecasts to make decisions on reservoir outflows. The setup was a risk-based decision-making game, during which participants acted as water managers. Participants determined monthly reservoir releases based on a sequence of probabilistic inflow forecasts, reservoir volume objectives, and release constraints. After each decision, consequences were evaluated based on the actual inflow. The analysis of 162 game sheets collected after eight applications of the game illustrates the importance of leveraging not only the probabilistic information in the forecasts but also predictions for a range of lead times. Winning strategies tended to gradually empty the reservoir in the months before the peak inflow period to accommodate its volume and avoid overtopping. Twenty percent of the participants managed to do so and finished the management period without having exceeded the maximum reservoir capacity or violating downstream release constraints. The role-playing approach successfully created an open atmosphere to discuss the challenges of using probabilistic forecasts in sequential decision-making.
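The game's monthly bookkeeping — pick a release, realize the inflow, check storage against capacity and the downstream constraint — can be sketched as follows. The numbers and the `step` helper are invented; the actual game's objectives and constraints differ in detail.

```python
# Minimal sketch of one player's sequential decision loop (invented values).
CAPACITY = 100.0     # maximum reservoir storage
MIN_RELEASE = 5.0    # downstream release constraint

def step(storage, release, inflow):
    """One month of reservoir mass balance; returns (new_storage, violations)."""
    violations = []
    if release < MIN_RELEASE:
        violations.append("downstream release too low")
    storage = storage + inflow - release
    if storage > CAPACITY:
        violations.append("overtopping")
        storage = CAPACITY  # excess is spilled
    return storage, violations

# The winning strategy in the abstract: draw the reservoir down ahead of a
# forecast peak-inflow month so the peak can be absorbed without spilling.
s, v1 = step(80.0, release=30.0, inflow=10.0)  # pre-emptive drawdown
s, v2 = step(s, release=10.0, inflow=45.0)     # peak inflow arrives
```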

Full access
Patrick T. W. Bunn, Andrew W. Wood, Andrew J. Newman, Hsin-I Chang, Christopher L. Castro, Martyn P. Clark, and Jeffrey R. Arnold

Abstract

Surface meteorological analyses serve a wide range of research and applications, including forcing inputs for hydrological and ecological models, climate analysis, and resource and emergency management. Quantifying uncertainty in such analyses would extend their utility for probabilistic hydrologic prediction and climate risk applications. With this motivation, we enhance and evaluate an approach for generating ensemble analyses of precipitation and temperature through the fusion of station observations, terrain information, and numerical weather prediction simulations of surface climate fields. In particular, we expand a spatial regression in which static terrain attributes serve as predictors for spatially distributed 1/16° daily surface precipitation and temperature by including forecast outputs from the High-Resolution Rapid Refresh (HRRR) numerical weather prediction model as additional predictors. We demonstrate the approach for a case study domain of California, focusing on the meteorological conditions leading to the 2017 flood and spillway failure event at Lake Oroville. The approach extends the spatial regression capability of the Gridded Meteorological Ensemble Tool (GMET) and also adds cross validation to the uncertainty estimation component, enabling the use of predictive rather than calibration uncertainty. In evaluation against out-of-sample station observations, the HRRR-based predictors alone are found to be skillful for the study setting, leading to overall improvements in the enhanced GMET meteorological analyses. The methodology and associated tool represent a promising method for generating meteorological surface analyses for both research-oriented and operational applications, as well as a general strategy for merging in situ and gridded observations.
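The central move — adding an NWP forecast field as a regression predictor alongside terrain attributes — can be illustrated with ordinary least squares on synthetic data. All data here are invented, and this omits the locally weighted, ensemble, and cross-validation machinery of the actual GMET approach:

```python
import numpy as np

# Sketch of the predictor-augmentation idea: station temperature is
# regressed on elevation alone, then on elevation plus an "HRRR-like"
# model field sampled at the stations. All values are synthetic.
rng = np.random.default_rng(0)
n = 50
elev = rng.uniform(0.0, 3.0, n)                     # station elevation, km
hrrr = 20.0 - 5.5 * elev + rng.normal(0.0, 0.5, n)  # NWP guess of temperature
obs  = 20.0 - 6.0 * elev + rng.normal(0.0, 0.3, n)  # station observations

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

ones = np.ones(n)
rss_terrain = rss(np.column_stack([ones, elev]), obs)
rss_both    = rss(np.column_stack([ones, elev, hrrr]), obs)
# Adding a predictor can never increase in-sample RSS; cross validation
# (as in the enhanced GMET) is what guards against overfitting and gives
# predictive rather than calibration uncertainty.
```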

Full access

Prospects for Advancing Drought Understanding, Monitoring, and Prediction

Eric F. Wood, Siegfried D. Schubert, Andrew W. Wood, Christa D. Peters-Lidard, Kingtse C. Mo, Annarita Mariotti, and Roger S. Pulwarty

Abstract

This paper summarizes and synthesizes the research carried out under the NOAA Drought Task Force (DTF) and submitted in this special collection. The DTF is organized and supported by NOAA’s Climate Program Office with the National Integrated Drought Information System (NIDIS) and involves scientists from across NOAA, academia, and other agencies. The synthesis includes an assessment of successes and remaining challenges in monitoring and prediction capabilities, as well as a perspective of the current understanding of North American drought and key research gaps. Results from the DTF papers indicate that key successes for drought monitoring include the application of modern land surface hydrological models that can be used for objective drought analysis, including extended retrospective forcing datasets to support hydrologic reanalyses, and the expansion of near-real-time satellite-based monitoring and analyses, particularly those describing vegetation and evapotranspiration. In the area of drought prediction, successes highlighted in the papers include the development of the North American Multimodel Ensemble (NMME) suite of seasonal model forecasts, an established basis for the importance of La Niña in drought events over the southern Great Plains, and an appreciation of the role of internal atmospheric variability related to drought events. Despite such progress, there are still important limitations in our ability to predict various aspects of drought, including onset, duration, severity, and recovery. Critical challenges include (i) the development of objective, science-based integration approaches for merging multiple information sources; (ii) long, consistent hydrometeorological records to better characterize drought; and (iii) extending skillful precipitation forecasts beyond a 1-month lead time.

Full access
Tongtiegang Zhao, James C. Bennett, Q. J. Wang, Andrew Schepen, Andrew W. Wood, David E. Robertson, and Maria-Helena Ramos

Abstract

GCMs are used by many national weather services to produce seasonal outlooks of atmospheric and oceanic conditions and fluxes. Postprocessing is often a necessary step before GCM forecasts can be applied in practice. Quantile mapping (QM) is rapidly becoming the method of choice by operational agencies to postprocess raw GCM outputs. The authors investigate whether QM is appropriate for this task. Ensemble forecast postprocessing methods should aim to 1) correct bias, 2) ensure forecasts are reliable in ensemble spread, and 3) guarantee forecasts are at least as skillful as climatology, a property called “coherence.” This study evaluates the effectiveness of QM in achieving these aims by applying it to precipitation forecasts from the POAMA model. It is shown that while QM is highly effective in correcting bias, it cannot ensure reliability in forecast ensemble spread or guarantee coherence. This is because QM ignores the correlation between raw ensemble forecasts and observations. When raw forecasts are not significantly positively correlated with observations, QM tends to produce negatively skillful forecasts. Even when there is significant positive correlation, QM cannot ensure reliability and coherence for postprocessed forecasts. Therefore, QM is not a fully satisfactory method for postprocessing forecasts where the issues of bias, reliability, and coherence pre-exist. Alternative postprocessing methods based on ensemble model output statistics (EMOS) are available that achieve not only unbiased but also reliable and coherent forecasts. This is shown with one such alternative, the Bayesian joint probability modeling approach.
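The abstract's key argument — QM removes bias but, being a monotonic transform that ignores the forecast–observation correlation, cannot create skill — is easy to demonstrate with an empirical quantile-mapping sketch on invented data (this is a generic empirical QM, not the POAMA postprocessing pipeline):

```python
import numpy as np

# Empirical quantile mapping: replace each raw forecast by the observed
# value at the same quantile of the training climatology. All data are
# synthetic; the raw "forecasts" are biased and uncorrelated with obs.
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 50.0, 1000)        # observed precipitation climatology
raw = 0.5 * rng.gamma(2.0, 50.0, 1000)  # biased raw forecasts, no real skill

def quantile_map(x, raw_clim, obs_clim):
    """Map value(s) x through the raw-forecast CDF onto observed quantiles."""
    ranks = np.searchsorted(np.sort(raw_clim), x) / len(raw_clim)
    return np.quantile(obs_clim, np.clip(ranks, 0.0, 1.0))

corrected = quantile_map(raw, raw, obs)

bias_raw = raw.mean() - obs.mean()        # large negative bias
bias_qm = corrected.mean() - obs.mean()   # bias essentially removed
# But QM is monotonic: the corrected forecasts preserve the raw forecasts'
# rank order exactly, so no association with observations is gained.
```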

Full access
Ashley E. Van Beusekom, Lauren E. Hay, Andrew R. Bennett, Young-Don Choi, Martyn P. Clark, Jon L. Goodall, Zhiyu Li, Iman Maghami, Bart Nijssen, and Andrew W. Wood

Abstract

Surface meteorological analyses are an essential input (termed "forcing") for hydrologic modeling. This study investigated the sensitivity of different hydrologic model configurations to temporal variations of seven forcing variables (precipitation rate, air temperature, longwave radiation, specific humidity, shortwave radiation, wind speed, and air pressure). Specifically, the effects of temporally aggregating hourly forcings to daily average forcings were examined. The analysis was based on 14 hydrological outputs from the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model for the 671 Catchment Attributes and Meteorology for Large-Sample Studies (CAMELS) basins across the contiguous United States (CONUS). Results demonstrated that the hydrologic model sensitivity to temporally aggregating the forcing inputs varies across model output variables and model locations. We used Latin hypercube sampling to sample model parameters from eight model configurations, formed by three influential model physics decisions with two options each. Results showed that the choice of model physics can change the relative influence of forcing on model outputs and that the forcing importance may not be dependent on the parameter space. This allows for model output sensitivity to forcing aggregation to be tested prior to parameter calibration. More generally, this work provides a comprehensive analysis of the dependence of modeled outcomes on input forcing behavior, providing insight into the regional variability of forcing variable dominance on modeled outputs across CONUS.
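Why daily averaging of forcings matters at all can be shown with a toy example: any threshold-type (nonlinear) process responds differently to an hourly cycle than to its flat daily mean, even though the mean input is identical. The forcing and the degree-hour melt function below are invented, not SUMMA physics:

```python
import numpy as np

# Two days of a sinusoidal hourly temperature forcing (invented values).
hours = np.arange(48)
temp_hourly = 10.0 + 8.0 * np.sin(2.0 * np.pi * hours / 24.0)  # deg C

# The aggregated forcing: each day's mean, repeated over 24 hours.
daily_mean = temp_hourly.reshape(-1, 24).mean(axis=1)
temp_daily = np.repeat(daily_mean, 24)

def snowmelt(temp, threshold=12.0):
    """Toy degree-hour melt: responds only to hours above a threshold."""
    return float(np.maximum(temp - threshold, 0.0).sum())

melt_hourly = snowmelt(temp_hourly)  # warm afternoons exceed 12 deg C
melt_daily = snowmelt(temp_daily)    # the 10 deg C daily mean never does
```

The daily means of the two forcings are identical, yet the modeled melt differs; which outputs and regions show this sensitivity, and how it interacts with model physics choices, is what the study maps across CONUS.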

Full access
William Ryan Currier, Andrew W. Wood, Naoki Mizukami, Bart Nijssen, Joseph J. Hamman, and Ethan D. Gutmann

Abstract

Vegetation parameters for the Variable Infiltration Capacity (VIC) hydrologic model were recently updated using observations from the Moderate Resolution Imaging Spectroradiometer (MODIS). Previous work showed that these MODIS-based parameters improved VIC evapotranspiration simulations when compared to eddy covariance observations. Due to the importance of evapotranspiration within the Colorado River basin, this study provided a basin-by-basin calibration of VIC soil parameters with updated MODIS-based vegetation parameters to improve streamflow simulations. Interestingly, while both configurations had similar historic streamflow performance, end-of-century hydrologic projections, driven by 29 downscaled global climate models under the RCP8.5 emissions scenario, differed between the two configurations. The calibrated MODIS-based configuration had an ensemble mean that simulated little change in end-of-century annual streamflow volume (+0.4%) at Lees Ferry, Arizona, relative to the historical period (1960–2005). In contrast, the previous VIC configuration, which is used to inform decisions about future water resources in the Colorado River basin, projected an 11.7% decrease in annual streamflow. Both VIC configurations simulated similar amounts of evapotranspiration in the historical period. However, the MODIS-based VIC configuration did not show as much of an increase in evapotranspiration by the end of the century, primarily within the upper basin’s forested areas. Differences in evapotranspiration projections were the result of the MODIS-based vegetation parameters having lower leaf area index values and less forested area compared to previous vegetation estimates used in recent Colorado River basin hydrologic projections. These results highlight the need to accurately characterize vegetation and better constrain climate sensitivities in hydrologic models.

Significance Statement

Understanding systemic changes in annual Colorado River basin flows is critical for managing long-term reservoir levels. Even single-digit percentage decreases have the potential to degrade the region's water supply, hydropower generation, and environmental resources. Hydrologic projections under climate change have largely been based on simulations from the Variable Infiltration Capacity model. Updating the model's vegetation representation with newer satellite information highlighted the sensitivity of the hydrologic projections to that representation, primarily within forested areas. The updated model did not increase evapotranspiration by the end of the century as much as previous simulations did, which increased the mean and ensemble spread of the projected streamflow changes, emphasizing the need to properly characterize the hydrologic model's vegetation parameters and better constrain its climate sensitivity.

Open access
Peter Black, Lee Harrison, Mark Beaubien, Robert Bluth, Roy Woods, Andrew Penny, Robert W. Smith, and James D. Doyle

Abstract

The High-Definition Sounding System (HDSS) is an automated system deploying the expendable digital dropsonde (XDD) designed to measure wind and pressure–temperature–humidity (PTH) profiles, and skin sea surface temperature (SST) within and around tropical cyclones (TCs) and other high-impact weather events needing high sampling density. Three experiments were conducted to validate the XDD.

On two successive days off the California coast, 10 XDDs and 14 Vaisala RD-94s were deployed from the Navy's Center for Interdisciplinary Remotely-Piloted Aircraft Studies (CIRPAS) Twin Otter aircraft over offshore buoys. The Twin Otter made spiral descents from 4 km to 60 m at the same descent rate as the sondes. Differences between successive XDD and RD-94 profiles due to true meteorological variability were on the same order as the profile differences between the spirals, XDDs, and RD-94s. XDD SST measured via infrared microradiometer, referred to as infrared skin SST (SSTir), and surface wind measurements were within 0.5°C and 1.5 m s⁻¹, respectively, of buoy and Twin Otter values.

A NASA DC-8 flight launched six XDDs from 12 km between ex-TC Cosme and the Baja California coast. Repeatability was shown with good agreement between features in successive profiles. XDD SSTir measurements from 18° to 28°C and surface winds agreed well with drifting buoy- and satellite-derived estimates.

Excellent agreement was found between PTH and wind profiles measured by XDDs deployed from a NASA WB-57 at 18-km altitude offshore from the Texas coast and NWS radiosonde profiles from Brownsville and Corpus Christi, Texas. Successful XDD profiles were obtained in the clear and within precipitation over an offshore squall line.

Full access