Search Results
You are looking at 11–20 of 24 items for
- Author or Editor: Florian Pappenberger
Abstract
The use of probabilistic forecasts is necessary to take into account uncertainties and allow for optimal risk-based decisions in streamflow forecasting at monthly to seasonal lead times. Such probabilistic forecasts have long been used by practitioners in the operation of water reservoirs, in water allocation and management, and more recently in drought preparedness activities. Various studies assert the potential value of hydrometeorological forecasting efforts, but few investigate how these forecasts are used in the decision-making process. Role-playing games can help scientists, managers, and decision-makers understand the extremely complex process behind risk-based decisions. In this paper, we present an experiment focusing on the use of probabilistic forecasts to make decisions on reservoir outflows. The setup was a risk-based decision-making game, during which participants acted as water managers. Participants determined monthly reservoir releases based on a sequence of probabilistic inflow forecasts, reservoir volume objectives, and release constraints. After each decision, consequences were evaluated based on the actual inflow. The analysis of 162 game sheets collected after eight applications of the game illustrates the importance of leveraging not only the probabilistic information in the forecasts but also predictions for a range of lead times. Winning strategies tended to gradually empty the reservoir in the months before the peak inflow period to accommodate the peak inflow volume and avoid overtopping. Twenty percent of the participants managed to do so and finished the management period without having exceeded the maximum reservoir capacity or violating downstream release constraints. The role-playing approach successfully created an open atmosphere to discuss the challenges of using probabilistic forecasts in sequential decision-making.
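As a rough illustration of the kind of sequential decision participants faced, the sketch below implements a simple quantile-based release rule in Python. The rule, thresholds, and variable names are illustrative assumptions, not the strategies or game parameters used in the study.

```python
import numpy as np

def monthly_release(storage, ensemble_inflow, capacity, target_volume,
                    max_release, quantile=0.9):
    """Illustrative release rule: release at least enough water so that, even
    under a high-inflow scenario (here the 90th percentile of the ensemble),
    storage stays below capacity, while steering toward a target volume and
    respecting the downstream release constraint."""
    high_inflow = np.quantile(ensemble_inflow, quantile)   # pessimistic inflow
    avoid_overtopping = storage + high_inflow - capacity   # release needed to stay below capacity
    toward_target = storage + np.median(ensemble_inflow) - target_volume
    release = max(avoid_overtopping, toward_target, 0.0)
    return min(release, max_release)                       # downstream constraint

# Hypothetical 50-member probabilistic inflow forecast for the coming month (hm^3)
rng = np.random.default_rng(0)
forecast = rng.gamma(shape=4.0, scale=25.0, size=50)
print(monthly_release(storage=600.0, ensemble_inflow=forecast,
                      capacity=800.0, target_volume=500.0, max_release=150.0))
```

Basing the release on a high quantile of the ensemble makes the decision risk-averse with respect to overtopping, which mirrors the trade-off the game was designed to expose.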
Abstract
A global fire danger rating system driven by atmospheric model forcing has been developed with the aim of providing early warning information to civil protection authorities. The daily predictions of fire danger conditions are based on the U.S. Forest Service National Fire-Danger Rating System (NFDRS), the Canadian Forest Service Fire Weather Index Rating System (FWI), and the Australian McArthur (Mark 5) rating systems. Weather forcings are provided in real time by the European Centre for Medium-Range Weather Forecasts forecasting system at 25-km resolution. The global system’s potential predictability is assessed using reanalysis fields as weather forcings. The Global Fire Emissions Database (GFED4) provides 11 yr of observed burned areas from satellite measurements and is used as a validation dataset. The fire indices implemented are good predictors for highlighting dangerous conditions: high values are correlated with observed fires, and low values correspond to nonobserved events. A more quantitative skill evaluation was performed using the extremal dependency index, a skill score specifically designed for rare events. It revealed that the three indices were more skillful than a random forecast at detecting large fires on a global scale. The performance peaks in the boreal forests, the Mediterranean region, the Amazon rain forests, and Southeast Asia. The skill scores were then aggregated at the country level to reveal which nations could potentially benefit from the system information to aid decision-making and fire control support. Overall, it was found that fire danger modeling based on weather forecasts can provide reasonable predictability over large parts of the global landmass.
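The extremal dependency index mentioned above is, in its commonly used form, a function of the hit rate H and the false-alarm rate F. A minimal Python sketch follows; the contingency-table counts are invented for illustration.

```python
import math

def edi(hits, misses, false_alarms, correct_negatives):
    """Extremal dependence index from a 2x2 contingency table:
    EDI = (ln F - ln H) / (ln F + ln H), with H the hit rate and F the
    false-alarm rate. 1 is a perfect score, 0 indicates no skill over a
    random forecast; H = 0 or F = 0 would need special handling."""
    H = hits / (hits + misses)                              # hit rate
    F = false_alarms / (false_alarms + correct_negatives)   # false-alarm rate
    return (math.log(F) - math.log(H)) / (math.log(F) + math.log(H))

# Invented counts for a rare event (e.g. a large fire at a given location)
print(edi(hits=12, misses=8, false_alarms=30, correct_negatives=950))
```

Unlike many traditional skill scores, the EDI does not degenerate toward zero as events become rarer, which is why it suits large-fire verification.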
Abstract
Flood simulation models and hazard maps are only as good as the underlying data against which they are calibrated and tested. However, extreme flood events are by definition rare, so the observational data of flood inundation extent are limited in both quality and quantity. The relative importance of these observational uncertainties has increased now that computing power and accurate lidar scans make it possible to run high-resolution 2D models to simulate floods in urban areas. However, the value of these simulations is limited by the uncertainty in the true extent of the flood. This paper addresses that challenge by analyzing a point dataset of maximum water extent from a flood event on the River Eden at Carlisle, United Kingdom, in January 2005. The observation dataset is based on a collection of wrack and water marks from two postevent surveys. A smoothing algorithm for identifying, quantifying, and reducing localized inconsistencies in the dataset is proposed and evaluated, with positive results. The proposed smoothing algorithm can be applied to improve the assessment of flood inundation models and the determination of risk zones on the floodplain.
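The paper's smoothing algorithm is not reproduced here; purely as an illustration of the general idea of detecting localized inconsistencies in surveyed marks, the sketch below flags points whose water level departs strongly from that of nearby marks. The neighbourhood radius, tolerance, and function name are hypothetical.

```python
import numpy as np

def flag_inconsistent_marks(x, y, level, radius=100.0, tol=0.5):
    """Illustrative local-consistency check (not the paper's algorithm): flag
    wrack/water marks whose surveyed water level differs from the median level
    of neighbouring marks, within `radius` metres, by more than `tol` metres."""
    x, y, level = map(np.asarray, (x, y, level))
    flags = np.zeros(level.size, dtype=bool)
    for i in range(level.size):
        dist = np.hypot(x - x[i], y - y[i])
        neighbours = (dist < radius) & (dist > 0)
        if neighbours.any():
            flags[i] = abs(level[i] - np.median(level[neighbours])) > tol
    return flags

# Example: the third mark sits ~1 m above its neighbours and gets flagged
print(flag_inconsistent_marks([0, 20, 40, 60], [0, 0, 0, 0],
                              [10.2, 10.3, 11.4, 10.25]))
```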
Abstract
A 10-day globally applicable flood prediction scheme was evaluated using the Ohio River basin as a test site for the period 2003–07. The Variable Infiltration Capacity (VIC) hydrology model was initialized with the European Centre for Medium-Range Weather Forecasts (ECMWF) analysis temperatures and winds, and Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) precipitation up to the day of forecast. In forecast mode, the VIC model was then forced with a calibrated and statistically downscaled ECMWF Ensemble Prediction System (EPS) 10-day ensemble forecast. A parallel setup was used where ECMWF EPS forecasts were interpolated to the spatial scale of the hydrology model. Each set of forecasts was extended by 5 days using monthly mean climatological variables and zero precipitation in order to account for the effects of the initial conditions. The 15-day spatially distributed ensemble runoff forecasts were then routed to four locations in the basin, each with different drainage areas. Surrogates for observed daily runoff and flow were provided by the reference run, specifically a VIC simulation forced with ECMWF analysis fields and TMPA precipitation fields. The hydrologic prediction scheme using the calibrated and downscaled ECMWF EPS forecasts was shown to be more accurate and reliable than interpolated forecasts for both daily distributed runoff forecasts and daily flow forecasts. The initial and antecedent conditions dominated the flow forecasts for lead times shorter than the time of concentration, depending on the flow forecast amounts and the drainage area sizes. The flood prediction scheme had useful skill for the following 10 days at all sites.
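A minimal sketch of the 15-day forcing construction described above, assuming the EPS fields have already been calibrated and downscaled; the array shapes and variable names are assumptions made for illustration.

```python
import numpy as np

def extend_forcing(eps_precip, eps_temp, clim_temp):
    """Illustrative assembly of the 15-day forcing: days 1-10 come from the
    (calibrated, downscaled) EPS ensemble, days 11-15 use the monthly-mean
    climatological temperature and zero precipitation, as described above.
    EPS arrays are assumed to have shape (n_members, 10)."""
    n_members = eps_precip.shape[0]
    precip_15d = np.concatenate([eps_precip, np.zeros((n_members, 5))], axis=1)
    temp_15d = np.concatenate([eps_temp, np.full((n_members, 5), clim_temp)], axis=1)
    return precip_15d, temp_15d

# Synthetic 51-member, 10-day forecasts of precipitation (mm) and temperature (K)
precip, temp = extend_forcing(np.random.rand(51, 10) * 10.0,
                              280.0 + np.random.rand(51, 10) * 5.0,
                              clim_temp=282.0)
print(precip.shape, temp.shape)   # (51, 15) (51, 15)
```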
Abstract
In the last decade operational probabilistic ensemble flood forecasts have become common in supporting decision-making processes leading to risk reduction. Ensemble forecasts can assess uncertainty, but they are limited to the uncertainty in a specific modeling system. Many of the current operational flood prediction systems use a multimodel approach to better represent the uncertainty arising from insufficient model structure. This study presents a multimodel approach to building a global flood prediction system using multiple atmospheric reanalysis datasets for river initial conditions and multiple TIGGE forcing inputs to the ECMWF land surface model. A sensitivity study is carried out to clarify the effect of using archive ensemble meteorological predictions and uncoupled land surface models. The probabilistic discharge forecasts derived from the different atmospheric models are compared with those from the multimodel combination. The potential for further improving forecast skill by bias correction and Bayesian model averaging is examined. The results show that the impact of the different TIGGE input variables in the HTESSEL/Catchment-Based Macroscale Floodplain model (CaMa-Flood) setup is rather limited other than for precipitation. This provides a sufficient basis for evaluation of the multimodel discharge predictions. The results also highlight that the three applied reanalysis datasets have different error characteristics that allow for large potential gains with a multimodel combination. It is shown that large improvements to the forecast performance for all models can be achieved through appropriate statistical postprocessing (bias and spread correction). A simple multimodel combination generally improves the forecasts, while a more advanced combination using Bayesian model averaging provides further benefits.
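As a hedged illustration of the Bayesian model averaging step, the sketch below evaluates a BMA predictive density as a weighted mixture of Gaussians centred on each model's forecast. In practice the weights and spread would be estimated with an EM algorithm over a training period; the values here are invented.

```python
import numpy as np
from scipy.stats import norm

def bma_density(y, forecasts, weights, sigma):
    """Bayesian model averaging predictive density for discharge y: a weighted
    mixture of Gaussians centred on each model's (bias-corrected) forecast.
    In practice, weights and sigma are fitted by EM over a training period."""
    forecasts, weights = np.asarray(forecasts), np.asarray(weights)
    return float(np.sum(weights * norm.pdf(y, loc=forecasts, scale=sigma)))

# Invented discharge forecasts (m^3 s^-1) from three reanalysis-driven runs
forecasts = [820.0, 910.0, 760.0]
weights = [0.5, 0.3, 0.2]                      # must sum to 1
print(bma_density(850.0, forecasts, weights, sigma=60.0))
print(np.dot(weights, forecasts))              # BMA predictive mean
```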
Abstract
The International Grand Global Ensemble (TIGGE) was a major component of The Observing System Research and Predictability Experiment (THORPEX) research program, whose aim is to accelerate improvements in forecasting high-impact weather. By providing ensemble prediction data from leading operational forecast centers, TIGGE has enhanced collaboration between the research and operational meteorological communities and enabled research studies on a wide range of topics.
The paper covers the objective evaluation of the TIGGE data. For a range of forecast parameters, it is shown to be beneficial to combine ensembles from several data providers in a multimodel grand ensemble. Alternative methods to correct systematic errors, including the use of reforecast data, are also discussed.
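As a simple illustration of reforecast-based correction of systematic errors (not necessarily one of the specific methods reviewed in the paper), the sketch below removes a mean bias estimated from past reforecasts and matching observations; the station values are invented.

```python
import numpy as np

def reforecast_bias_correction(forecast, reforecasts, observations):
    """Illustrative systematic-error correction: estimate the mean bias of the
    model from past reforecasts and matching observations (ideally the same
    lead time and season), then subtract it from the current forecast."""
    bias = np.mean(np.asarray(reforecasts) - np.asarray(observations))
    return forecast - bias

# Invented 2-m temperatures (K) for one station and lead time
reforecasts = np.array([281.2, 280.5, 282.0, 281.8, 280.9])
observations = np.array([280.4, 279.9, 281.1, 281.0, 280.2])
print(reforecast_bias_correction(282.3, reforecasts, observations))
```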
TIGGE data have been used for a range of research studies on predictability and dynamical processes. Tropical cyclones are the most destructive weather systems in the world and are a focus of multimodel ensemble research. Their extratropical transition also has a major impact on the skill of midlatitude forecasts. We also review how TIGGE has added to our understanding of the dynamics of extratropical cyclones and storm tracks.
Although TIGGE is a research project, it has proved invaluable for the development of products for future operational forecasting. Examples include the forecasting of tropical cyclone tracks, heavy rainfall, strong winds, and flood prediction through coupling hydrological models to ensembles.
Finally, the paper considers the legacy of TIGGE. We discuss the priorities and key issues in predictability and ensemble forecasting, including the new opportunities of convective-scale ensembles, links with ensemble data assimilation methods, and extension of the range of useful forecast skill.
Abstract
Skillful and timely streamflow forecasts are critically important to water managers and emergency protection services. To provide these forecasts, hydrologists must predict the behavior of complex coupled human–natural systems using incomplete and uncertain information and imperfect models. Moreover, operational predictions often integrate anecdotal information and unmodeled factors. Forecasting agencies face four key challenges: 1) making the most of available data, 2) making accurate predictions using models, 3) turning hydrometeorological forecasts into effective warnings, and 4) administering an operational service. Each challenge presents a variety of research opportunities, including the development of automated quality-control algorithms for the myriad of data used in operational streamflow forecasts, data assimilation, and ensemble forecasting techniques that allow for forecaster input, methods for using human-generated weather forecasts quantitatively, and quantification of human interference in the hydrologic cycle. Furthermore, much can be done to improve the communication of probabilistic forecasts and to design a forecasting paradigm that effectively combines increasingly sophisticated forecasting technology with subjective forecaster expertise. These areas are described in detail to share a real-world perspective and focus for ongoing research endeavors.
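As one hedged example of the automated quality control mentioned among the research opportunities, the sketch below applies a range check and a spike check to a streamflow series; the thresholds are hypothetical and would be site-specific in practice.

```python
import numpy as np

def qc_streamflow(series, qmin=0.0, qmax=5000.0, max_jump=500.0):
    """Illustrative automated quality control for a streamflow series (m^3 s^-1):
    flag values outside a plausible range and sudden jumps between consecutive
    observations. Thresholds are hypothetical and would be site-specific."""
    series = np.asarray(series, dtype=float)
    out_of_range = (series < qmin) | (series > qmax)
    spikes = np.zeros_like(out_of_range)
    spikes[1:] = np.abs(np.diff(series)) > max_jump
    return out_of_range | spikes

print(qc_streamflow([120.0, 125.0, 980.0, 130.0, -5.0]))
```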
Abstract
A key aim of observational campaigns is to sample atmosphere–ocean phenomena to improve understanding of these phenomena, and in turn, numerical weather prediction. In early 2018 and 2019, the Atmospheric River Reconnaissance (AR Recon) campaign released dropsondes and radiosondes into atmospheric rivers (ARs) over the northeast Pacific Ocean to collect unique observations of temperature, winds, and moisture in ARs. These narrow regions of water vapor transport in the atmosphere—like rivers in the sky—can be associated with extreme precipitation and flooding events in the midlatitudes. This study uses the dropsonde observations collected during the AR Recon campaign and the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS) to evaluate forecasts of ARs. Results show that ECMWF IFS forecasts 1) were colder than observations by up to 0.6 K throughout the troposphere; 2) have a dry bias in the lower troposphere, which, along with weaker winds below 950 hPa, resulted in weaker horizontal water vapor fluxes in the 950–1000-hPa layer; and 3) are underdispersive in the water vapor flux, largely because of model representativeness errors associated with dropsondes. Four U.S. West Coast radiosonde sites confirm the IFS cold bias throughout winter. These issues are likely to affect the model’s hydrological cycle and hence precipitation forecasts.
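A minimal sketch of how a forecast-minus-dropsonde bias profile of the kind reported above could be computed, assuming model and observed temperatures have already been paired and interpolated to common pressure levels; the data layout and values are assumptions.

```python
import numpy as np

def mean_bias_profile(model_temp, dropsonde_temp):
    """Illustrative forecast-minus-observation bias profile: model and dropsonde
    temperatures (K) are assumed to be paired and interpolated to common
    pressure levels, with shape (n_soundings, n_levels). Negative values
    indicate a cold bias in the model."""
    diff = np.asarray(model_temp) - np.asarray(dropsonde_temp)
    return np.nanmean(diff, axis=0)   # mean bias at each pressure level

# Synthetic example: 3 soundings on 4 levels with a uniform 0.5-K cold bias
obs = np.array([[290.0, 280.0, 265.0, 250.0],
                [289.0, 279.5, 264.0, 249.0],
                [291.0, 281.0, 266.0, 251.0]])
print(mean_bias_profile(obs - 0.5, obs))
```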
Abstract
Atmospheric River Reconnaissance has held field campaigns during cool seasons since 2016. These campaigns have provided thousands of dropsonde data profiles, which are assimilated into multiple global operational numerical weather prediction models. Data denial experiments, conducted by running a parallel set of forecasts that exclude the dropsonde information, allow testing of the impact of the dropsonde data on model analyses and the subsequent forecasts. Here, we investigate the differences in skill between the control forecasts (with dropsonde data assimilated) and denial forecasts (without dropsonde data assimilated) in terms of both precipitation and integrated vapor transport (IVT) at multiple thresholds. The differences are considered in the times and locations where there is a reasonable expectation of influence of an intensive observation period (IOP). Results for 2019 and 2020 from both the European Centre for Medium-Range Weather Forecasts (ECMWF) model and the National Centers for Environmental Prediction (NCEP) global model show improvements with the added information from the dropsondes. In particular, significant improvements in the control forecast IVT generally occur in both models, especially at higher values. Significant improvements in the control forecast precipitation also generally occur in both models, but the improvements vary depending on the lead time and metrics used.
Significance Statement
Atmospheric River Reconnaissance is a program that uses targeted aircraft flights over the northeast Pacific to take measurements of meteorological fields. These data are then ingested into global weather models with the intent of improving the initial conditions and resulting forecasts along the U.S. West Coast. The impacts of these observations on two global numerical weather models were investigated to determine their influence on the forecasts. The integrated vapor transport, a measure of both wind and humidity, saw significant improvements in both models with the additional observations. Precipitation forecasts were also improved, but with differing results between the two models.
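For reference, integrated vapor transport combines column moisture and wind. The sketch below computes IVT for a single profile using the standard pressure-weighted integral; the profile values are synthetic.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m s^-2)

def column_integral(f, p):
    """Trapezoidal pressure integral; p in Pa, ordered from the surface upward."""
    dp = -np.diff(p)                          # positive layer thicknesses
    return np.sum(0.5 * (f[:-1] + f[1:]) * dp)

def ivt(q, u, v, p):
    """Integrated vapor transport (kg m^-1 s^-1) for one profile: q specific
    humidity (kg/kg), u and v winds (m/s), p pressure (Pa). IVT combines the
    moisture and wind that together define an atmospheric river."""
    return np.hypot(column_integral(q * u, p) / G, column_integral(q * v, p) / G)

# Synthetic moist low-level-jet profile from 1000 to 300 hPa
p = np.array([1000e2, 950e2, 900e2, 850e2, 700e2, 500e2, 300e2])
q = np.array([0.012, 0.011, 0.010, 0.008, 0.004, 0.001, 0.0002])
u = np.array([15.0, 20.0, 22.0, 20.0, 15.0, 10.0, 5.0])
v = np.array([10.0, 15.0, 18.0, 15.0, 10.0, 5.0, 2.0])
print(ivt(q, u, v, p))   # on the order of several hundred kg m^-1 s^-1
```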