Browse — Journal of Hydrometeorology

You are looking at 1–10 of 2,224 items for:

  • Journal of Hydrometeorology
  • User-accessible content
Mengye Chen, Zhi Li, Shang Gao, Xiangyu Luo, Oliver E. J. Wing, Xinyi Shen, Jonathan J. Gourley, Randall L. Kolar, and Yang Hong

Abstract

Because climate change will increase the frequency and intensity of precipitation extremes and coastal flooding, there is a clear need for an integrated hydrologic and hydraulic system that can model hydrologic conditions over long periods and represent the flow dynamics of extreme hydrometeorological events when and where they occur. Such a coupled system provides comprehensive information (flood wave, inundation extent, and depth) about coastal flood events for emergency management and risk minimization. This study presents an integrated hydrologic and hydraulic modeling system, based on the Coupled Routing and Excess Storage (CREST) model and the Australian National University-Geoscience Australia (ANUGA) model, to simulate floods. Forced by near-real-time Multi-Radar Multi-Sensor (MRMS) quantitative precipitation estimates, the system was applied to the 2017 Hurricane Harvey event to simulate streamflow, flood extent, and inundation depth. The results were compared with postevent high-water-mark survey data and the flood extent interpolated from them by the U.S. Geological Survey, Federal Emergency Management Agency flood insurance claims, a satellite-based flood map, and flood maps simulated by the National Water Model (NWM) and the Fathom (LISFLOOD-FP) model. The coupled model captured 87% of all flood insurance claims within the study area, with an overall water-depth error of 0.91 m, which is comparable to the mainstream operational flood models (NWM and Fathom).

Open access
Andrea Camplani, Daniele Casella, Paolo Sanò, and Giulia Panegrossi

Abstract

This paper describes a new Passive Microwave Empirical Cold Surface Classification Algorithm (PESCA) developed for snow-cover detection and characterization using passive microwave satellite measurements. The main goal of PESCA is to support the retrieval of falling snow, since several studies have highlighted the influence of snow-cover radiative properties on the falling-snow passive microwave signature. The method is based on the exploitation of the lower-frequency channels (<90 GHz) common to most microwave radiometers. Its application to the conically scanning Global Precipitation Measurement (GPM) Microwave Imager (GMI) and the cross-track-scanning Advanced Technology Microwave Sounder (ATMS) is described in this paper. PESCA is based on a decision tree developed using an empirical method and verified against the AutoSnow product built from satellite measurements. The algorithm performance appears to be robust for both sensors in dry conditions (total precipitable water < 10 mm) and at mean surface elevations < 2500 m, independent of cloud cover. The algorithm shows very good performance at cold temperatures (2-m temperature below 270 K), with a rapid decrease of detection capability between 270 and 280 K, where 280 K is assumed as the maximum temperature limit for PESCA (overall detection statistics: probability of detection is 0.98 for ATMS and 0.92 for GMI, false alarm ratio is 0.01 for ATMS and 0.08 for GMI, and Heidke skill score is 0.72 for ATMS and 0.69 for GMI). Some inconsistencies found between the snow categories identified with the two radiometers are related to their different viewing geometries, spatial resolutions, and temporal sampling. The spectral signatures of the different snow classes also differ at high frequency (>90 GHz), indicating a potential impact on snowfall retrieval. This method can be applied to other conically scanning and cross-track-scanning radiometers, including the microwave radiometers of the future operational EUMETSAT Polar System Second Generation (EPS-SG) mission.
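The detection statistics reported above (probability of detection, false alarm ratio, and Heidke skill score) all derive from a 2 × 2 contingency table of snow/no-snow classifications. A minimal sketch, using hypothetical counts rather than the paper's data:

```python
def detection_scores(hits, false_alarms, misses, correct_negs):
    """POD, FAR, and Heidke skill score from a 2x2 contingency table."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    # HSS: fractional improvement in accuracy over random chance
    num = 2 * (hits * correct_negs - false_alarms * misses)
    den = ((hits + misses) * (misses + correct_negs)
           + (hits + false_alarms) * (false_alarms + correct_negs))
    hss = num / den
    return pod, far, hss

# hypothetical counts, for illustration only
pod, far, hss = detection_scores(hits=900, false_alarms=50,
                                 misses=100, correct_negs=950)
```

A perfect classifier gives POD = 1, FAR = 0, and HSS = 1, while HSS = 0 indicates no skill beyond chance.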

Open access
Nergui Nanding, Huan Wu, Jing Tao, Viviana Maggioni, Hylke E. Beck, Naijun Zhou, Maoyi Huang, and Zhijun Huang

Abstract

This study characterizes precipitation error propagation through a distributed hydrological model over river basins across the contiguous United States (CONUS) to better understand the relationship between errors in precipitation inputs and in simulated discharge (i.e., the P-Q error relationship). The NLDAS-2 precipitation and its simulated discharge are used as the reference against which TMPA-3B42 V7, TMPA-3B42RT V7, Stage IV, CPC-U, MERRA-2, and MSWEP-2.2 are compared for 1,548 well-gauged river basins. The relative errors in these conventional precipitation products and in their corresponding discharges are analyzed for the period 2002-2013. The results reveal positive linear P-Q error relationships at annual and monthly timescales, with stronger linearity for larger temporal accumulations. Precipitation errors can be doubled in simulated annual accumulated discharge. Moreover, precipitation errors are strongly dampened in basins characterized by temperate and continental climate regimes, particularly for peak discharges, showing highly nonlinear relationships. The radar-based precipitation product consistently shows dampening effects on error propagation through discharge simulations at different accumulation timescales compared to the other precipitation products. Although basin size and topography also influence the P-Q error relationship and the propagation of precipitation errors, their roles depend largely on precipitation product, season, and climate regime.
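The linear P-Q error relationship described above can be summarized as the slope of discharge relative error against precipitation relative error. A minimal sketch with made-up basin values (chosen so that precipitation errors roughly double in the simulated discharge, as the study reports for annual accumulations):

```python
def relative_error(estimate, reference):
    """Relative error of an estimate with respect to a reference value."""
    return (estimate - reference) / reference

# hypothetical annual accumulations (mm) for three basins: reference
# precipitation/discharge vs. a test product and its simulated discharge
p_ref, q_ref = [900.0, 1100.0, 800.0], [300.0, 420.0, 250.0]
p_est, q_est = [990.0, 1045.0, 840.0], [360.0, 378.0, 275.0]

ep = [relative_error(e, r) for e, r in zip(p_est, p_ref)]
eq = [relative_error(e, r) for e, r in zip(q_est, q_ref)]

# least-squares slope through the origin: amplification of P errors in Q
slope = sum(x * y for x, y in zip(ep, eq)) / sum(x * x for x in ep)
```

A slope above 1 indicates error amplification in discharge; a slope below 1 indicates the dampening effect described for the radar-based product.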

Open access
Jefferson S. Wong, Xuebin Zhang, Shervan Gharari, Rajesh R. Shrestha, Howard S. Wheater, and James S. Famiglietti

Abstract

Obtaining reliable water balance estimates remains a major challenge in Canada for large regions with scarce in situ measurements. Various remote sensing products can be used to complement observation-based datasets and provide an estimate of the water balance at river basin or regional scales. This study assesses the water balance using combinations of various remote sensing– and data assimilation–based products and quantifies the nonclosure errors for river basins across Canada, ranging from 90 900 to 1 679 100 km², for the period 2002-2015. A water balance equation combines multiple data sources for each water budget component to estimate the monthly water balance closure: two precipitation products—the global WATCH Forcing Data ERA-Interim (WFDEI) product and the Canadian Precipitation Analysis (CaPA); two evapotranspiration products—MODIS and Global Land surface Evaporation: The Amsterdam Methodology (GLEAM); one source of water storage data—GRACE, from three different processing centers; and observed discharge data from hydrometric stations (HYDAT). The nonclosure error is attributed to the different data products using a constrained Kalman filter. Results show that the combination of CaPA, GLEAM, and the JPL mascon GRACE product tended to outperform other combinations across Canadian river basins. Overall, the error attributions of precipitation, evapotranspiration, water storage change, and runoff were 36.7%, 33.2%, 17.8%, and 12.2%, corresponding to 8.1, 7.9, 4.2, and 1.4 mm month⁻¹, respectively. In particular, the nonclosure error from precipitation dominated in Western Canada, whereas that from evapotranspiration contributed most in the Mackenzie River basin.
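The nonclosure error being attributed above is simply the residual of the monthly water balance. A minimal sketch with hypothetical values (P precipitation, ET evapotranspiration, dS water storage change, Q runoff, all in mm per month):

```python
def closure_residual(p, et, ds, q):
    """Water balance nonclosure (mm/month): the residual P - ET - dS - Q,
    which would be zero if every component were error-free."""
    return p - et - ds - q

# hypothetical monthly values (mm/month), not taken from the study
residual = closure_residual(p=80.0, et=45.0, ds=10.0, q=20.0)
```

A filtering step such as the constrained Kalman filter mentioned above then distributes this residual among the four components according to their assumed error levels.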

Open access
Mohammad Reza Ehsani, Ali Behrangi, Abishek Adhikari, Yang Song, George J. Huffman, Robert F. Adler, David T. Bolvin, and Eric J. Nelkin

Abstract

Precipitation retrieval is a challenging topic, especially at high latitudes (HL), where current precipitation products face ample challenges. This study investigates the potential of the Advanced Very High Resolution Radiometer (AVHRR) for snowfall retrieval in HL using CloudSat radar information and machine learning (ML). With all their known limitations, AVHRR observations should be considered for HL snowfall retrieval because 1) AVHRR data have been continuously collected for about four decades on multiple platforms with global coverage, and similar observations will likely continue in the future; 2) current passive microwave satellite precipitation products have several issues over snow and ice surfaces; and 3) good coincident observations between AVHRR and CloudSat are available for training ML algorithms. Using ML, snowfall rate was retrieved from AVHRR’s brightness temperature and cloud probability, as well as auxiliary information provided by numerical reanalysis. The results indicate that the ML-based retrieval algorithm is capable of detecting and estimating snowfall with statistical scores comparable to or better than those obtained from the Atmospheric Infrared Sounder (AIRS) and two passive microwave sensors contributing to the Global Precipitation Measurement (GPM) mission constellation. The outcomes also suggest that AVHRR-based snowfall retrievals are spatially and temporally reasonable and can be considered a quantitatively useful input to merged precipitation products that require frequent sampling or long-term records.

Open access
Ben S. Pickering, Steven Best, David Dufton, Maryna Lukach, Darren Lyth, and Ryan R. Neely III

Abstract

This study aims to verify the skill of a radar-based surface precipitation type (SPT) product against observations on the ground. Because SPT is not well forecast or observed, it can cause social and economic impacts. Observations from the Met Office’s weather radar network are combined with postprocessed numerical weather prediction (NWP) freezing-level heights in a Boolean logic algorithm to create a 1-km-resolution Cartesian-gridded map of SPT. Here, 5 years of discrete nonprobabilistic outputs of rain, mixed phase, and snow are compared against surface observations made by trained observers, automatic weather stations, and laser disdrometers. The novel skill verification method developed as part of this study employs several tolerances of space and time from the SPT product, indicating the precision of the product for a desired accuracy. In general, the results indicate that the tolerance verification method works well and produces reasonable statistical score ranges grounded in physical constraints. Using this method, we find that the mixed-precipitation class is the least well diagnosed, owing to a negative bias in the input temperature height field that results in rain events frequently being classified as mixed. Snow is captured well by the product, which is entirely reliant upon a postprocessed NWP temperature field, although a single period of anomalously cold temperatures positively skewed snow scores with low-skill events. Furthermore, we conclude that more verification consistency is needed among studies to help identify successful approaches and thus improve SPT forecasts.

Open access
Carlo Montes, Nachiketa Acharya, S. M. Quamrul Hassan, and Timothy J. Krupnik

Abstract

Extreme precipitation events are a serious threat to societal well-being over rainy areas such as Bangladesh. The reliability of studies of extreme events depends on data quality and its spatial and temporal distribution, yet knowledge gaps on these subjects remain in many countries. This work analyzes four satellite-based precipitation products for monitoring intense rainfall events: the Climate Hazards Group Infrared Precipitation with Station Data (CHIRPS), the PERSIANN–Climate Data Record (PERSIANN-CDR), the Integrated Multi-satellitE Retrievals for GPM (IMERG), and the CPC morphing technique (CMORPH). Five indices of intense rainfall were considered for the period 2000–19, with a set of 31 rain gauges used for evaluation. The number and amount of precipitation associated with intense rainfall events are systematically underestimated or overestimated throughout the country. While random errors are higher over the wetter and higher-elevation northeastern and southeastern parts of Bangladesh, biases are more homogeneous. CHIRPS, PERSIANN-CDR, and IMERG perform similarly in capturing total seasonal rainfall, but variability is better represented by CHIRPS and IMERG. Better results were obtained by IMERG, followed by PERSIANN-CDR and CHIRPS, for climatological intensity indices based on percentiles, although all three products exhibited systematic errors. IMERG and CMORPH systematically overestimate the occurrence of intense precipitation events. IMERG showed the best performance in representing events exceeding 20 mm day⁻¹; CMORPH exhibited random and systematic errors strongly associated with a poor representation of interannual variability in seasonal total rainfall. The results suggest that the datasets have different potential uses, and such differences should be considered in future applications regarding extreme rainfall events and risk assessment in Bangladesh.

Open access
Martyn P. Clark, Reza Zolfaghari, Kevin R. Green, Sean Trim, Wouter J. M. Knoben, Andrew Bennett, Bart Nijssen, Andrew Ireson, and Raymond J. Spiteri

Abstract

The intent of this paper is to encourage improved numerical implementation of land models. Our contributions in this paper are twofold. First, we present a unified framework to formulate and implement land model equations. We separate the representation of physical processes from their numerical solution, enabling the use of established robust numerical methods to solve the model equations. Second, we introduce a set of synthetic test cases (the laugh tests) to evaluate the numerical implementation of land models. The test cases include storage and transmission of water in soils, lateral subsurface flow, coupled hydrological and thermodynamic processes in snow, and cryosuction processes in soil. We consider synthetic test cases as “laugh tests” for land models because they provide the most rudimentary test of model capabilities. The laugh tests presented in this paper are all solved with the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model implemented using the Suite of Nonlinear and Differential/Algebraic Equation Solvers (SUNDIALS). The numerical simulations from SUMMA/SUNDIALS are compared against 1) solutions to the synthetic test cases from other models documented in the peer-reviewed literature, 2) analytical solutions, and 3) observations made in laboratory experiments. In all cases, the numerical simulations are similar to the benchmarks, building confidence in the numerical model implementation. We posit that some land models may have difficulty in solving these benchmark problems. Dedicating more effort to solving synthetic test cases is critical in order to build confidence in the numerical implementation of land models.

Open access
H. A. Titley, H. L. Cloke, S. Harrigan, F. Pappenberger, C. Prudhomme, J. C. Robbins, E. M. Stephens, and E. Zsoter

Abstract

Knowledge of the key drivers of the severity of river flooding from tropical cyclones (TCs) is vital for emergency preparedness and disaster risk reduction. This global study examines landfalling TCs in the decade from 2010 to 2019 to identify the characteristics that influence whether a storm poses an increased flood hazard. The highest positive correlations are found between flood severity and the total precipitation associated with the TC. Significant negative correlations are found between flood severity and the translation speed of the TC, indicating that slower-moving storms, which rain over an area for longer, tend to produce more severe flooding. Larger and more intense TCs increase the likelihood of a larger area being affected by severe flooding, but not its duration or magnitude, and the fluvial flood hazard can be severe in all intensity categories of TC, including those of tropical storm strength. Catchment characteristics such as antecedent soil moisture and slope also play a role in modulating flood severity, and severe flooding is more likely when multiple drivers are present. This improved knowledge of the key drivers of fluvial flooding in TCs can inform research priorities for flood early warning, such as an increased focus on translation speed in model evaluation and impact-based forecasting.

Open access
Kamil Mroz, Mario Montopoli, Alessandro Battaglia, Giulia Panegrossi, Pierre Kirstetter, and Luca Baldini

Abstract

Surface snowfall rate estimates from the Global Precipitation Measurement (GPM) mission’s Core Observatory sensors and the CloudSat radar are compared to those from the Multi-Radar Multi-Sensor (MRMS) radar composite product over the continental United States for the period from November 2014 to September 2020. The analysis includes the Dual-Frequency Precipitation Radar (DPR) retrieval and its single-frequency counterparts, the GPM Combined Radar Radiometer Algorithm (CORRA), the CloudSat Snow Profile product (2C-SNOW-PROFILE), and two passive microwave retrievals, i.e., the Goddard Profiling algorithm (GPROF) and the Snow Retrieval Algorithm for GMI (SLALOM). The 2C-SNOW retrieval has the highest Heidke skill score (HSS) for detecting snowfall among the products analyzed. SLALOM ranks second; it outperforms GPROF and the other GPM algorithms, which detect only 30% of the snow events. Because SLALOM is trained with 2C-SNOW, this suggests that the optimal use of the information content in the GMI observations critically depends on the precipitation training dataset. All the retrievals underestimate snowfall rates by a factor of 2 compared to MRMS. Large discrepancies (RMSE of 0.7–1.5 mm h⁻¹) between spaceborne and ground-based snowfall rate estimates are attributed to the complexity of ice scattering properties and to the limitations of the remote sensing systems: the DPR instrument has low sensitivity, while the radiometric measurements are affected by the confounding effects of the background surface emissivity and of the emission from supercooled liquid droplet layers.

Open access