Search Results

You are looking at 1 - 3 of 3 items for

  • Author or Editor: Sanjib Sharma
  • All content
Fengyun Sun, Alfonso Mejia, Sanjib Sharma, Peng Zeng, and Yue Che

Abstract

Because downscaling tools are needed to support climate change mitigation and adaptation practices, ensuring their credibility is of vital importance. Evaluating downscaling results requires a set of effective, nonoverlapping indices that reflect key system attributes, yet this subject remains insufficiently researched. In this study, we propose a diagnostic framework that evaluates the credibility of precipitation downscaling using five attributes: spatial, temporal, trend, extreme, and climate event. A daily variant of the bias-corrected spatial downscaling approach is used to downscale daily precipitation from the GFDL-ESM2G climate model at 148 stations in the Yangtze River basin in China. The results show that the framework is effective in systematically evaluating downscaling performance across the Yangtze River basin in the context of climate change and intensifying climate extremes. The results also indicate that the downscaling approach adopted in this study performs well in correcting spatiotemporal bias, preserving trends, approximating extremes, and characterizing climate events across the basin. The proposed framework can benefit planners and engineers addressing climate change assessment.
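
The core of bias-corrected spatial downscaling (BCSD)-type methods is quantile-mapping bias correction of the climate model output against observations. The sketch below illustrates only that step, under simplifying assumptions (empirical quantiles, synthetic data, no spatial disaggregation to stations); it is not the paper's daily BCSD implementation, and all variable names are illustrative.

```python
# Minimal sketch of empirical quantile-mapping bias correction for daily
# precipitation. Assumption: a simple quantile match between model and
# observed climatologies stands in for the full daily BCSD procedure.
import numpy as np

def quantile_map(model_hist, obs_hist, model_future, n_quantiles=100):
    """Map model values onto the observed distribution via matched quantiles."""
    probs = np.linspace(0.01, 0.99, n_quantiles)
    model_q = np.quantile(model_hist, probs)   # model climatology quantiles
    obs_q = np.quantile(obs_hist, probs)       # observed climatology quantiles
    # Interpolate each future model value to its bias-corrected counterpart
    return np.interp(model_future, model_q, obs_q)

# Toy usage with synthetic daily precipitation (mm/day)
rng = np.random.default_rng(0)
obs = rng.gamma(0.5, 8.0, size=3650)        # "observed" station record
gcm_hist = rng.gamma(0.5, 10.0, size=3650)  # biased model historical run
gcm_fut = rng.gamma(0.5, 11.0, size=3650)   # model future projection
corrected = quantile_map(gcm_hist, obs, gcm_fut)
print(round(gcm_fut.mean(), 2), round(corrected.mean(), 2))
```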

Restricted access
Xingchen Yang, Sanjib Sharma, Ridwan Siddique, Steven J. Greybush, and Alfonso Mejia

Abstract

The potential of Bayesian model averaging (BMA) and heteroscedastic censored logistic regression (HCLR) to postprocess precipitation ensembles is investigated. For this, outputs from the National Oceanic and Atmospheric Administration’s (NOAA’s) National Centers for Environmental Prediction (NCEP) 11-member Global Ensemble Forecast System Reforecast, version 2 (GEFSRv2), dataset are used. The experimental setting comprises 24-h precipitation accumulations and forecast lead times of 24–120 h over the mid-Atlantic region (MAR) of the United States. In contrast with previous postprocessing studies, a wider range of forecasting conditions is considered here when evaluating BMA and HCLR; moreover, the two methods have not previously been compared under a common and consistent experimental setting. To compare and verify the postprocessors, different metrics (e.g., skill scores and reliability diagrams) are used, conditioned upon forecast lead time, precipitation threshold, and season. Overall, HCLR tends to slightly outperform BMA, but the differences between the two postprocessors are not substantial. In the future, an alternative approach could be to combine HCLR with BMA to take advantage of their relative strengths.
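
For orientation, BMA treats the ensemble predictive distribution as a weighted mixture of kernels centered on the member forecasts. The sketch below shows that mixture idea only; published precipitation BMA variants use gamma kernels with a point mass at zero and estimate weights and spread by EM over a training period, whereas here Gaussian kernels, fixed weights, and fixed spread are assumed purely for brevity.

```python
# Hedged sketch of the BMA mixture idea for an ensemble forecast.
# Assumptions: Gaussian kernels, hand-picked weights and spread (normally
# these are estimated from a training period), hypothetical member values.
import numpy as np
from scipy.stats import norm

members = np.array([3.1, 0.0, 5.4, 2.2])    # hypothetical 24-h precip forecasts (mm)
weights = np.array([0.3, 0.2, 0.35, 0.15])  # assumed BMA weights (sum to 1)
sigma = 2.0                                 # assumed common kernel spread

def bma_exceedance_prob(threshold):
    """P(precip > threshold) under the BMA mixture distribution."""
    return float(np.sum(weights * norm.sf(threshold, loc=members, scale=sigma)))

print(bma_exceedance_prob(5.0))  # probability of exceeding 5 mm
```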

Full access
Sanjib Sharma, Ridwan Siddique, Nicholas Balderas, Jose D. Fuentes, Seann Reed, Peter Ahnert, Robert Shedd, Brian Astifan, Reggina Cabrera, Arlene Laing, Mark Klein, and Alfonso Mejia

Abstract

The quality of ensemble precipitation forecasts across the eastern United States is investigated, specifically forecasts from version 2 of the National Centers for Environmental Prediction (NCEP) Global Ensemble Forecast System Reforecast (GEFSRv2) and the Short Range Ensemble Forecast (SREF) system, as well as NCEP’s Weather Prediction Center probabilistic quantitative precipitation forecast (WPC-PQPF) guidance. The forecasts are verified against multisensor precipitation estimates using various metrics conditioned upon seasonality, precipitation threshold, lead time, and spatial aggregation scale. Verification is carried out over the geographic domain of each of the four eastern River Forecast Centers (RFCs) in the United States, considering first 1) all three systems, using a common period of analysis (2012–13) and lead times from 1 to 3 days, and then 2) GEFSRv2 alone, using a longer period (2004–13) and lead times from 1 to 16 days. The verification results indicate that, across the eastern United States, precipitation forecast bias decreases and skill and reliability improve as the spatial aggregation scale increases; however, all the forecasts exhibit some underforecasting bias. Forecast skill is appreciably better in the cool season than in the warm season. The WPC-PQPFs tend to be superior, in terms of correlation coefficient, relative mean error, reliability, and forecast skill scores, to both GEFSRv2 and SREF, but performance varies with RFC and lead time. Based on GEFSRv2, medium-range precipitation forecasts tend to have skill up to approximately day 7 relative to sampled climatology.
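
One way to read "skill relative to sampled climatology" is via a threshold-based Brier skill score, where the reference forecast is the observed event frequency in the sample. The sketch below illustrates that single metric with synthetic data; the paper's verification is far broader (multiple metrics, conditioned on season, lead time, and aggregation scale), and all arrays and thresholds here are placeholders.

```python
# Minimal sketch of a Brier skill score against sampled climatology for a
# precipitation exceedance event. Assumptions: synthetic observations and
# toy forecast probabilities, single threshold, no stratification.
import numpy as np

def brier_score(prob_forecasts, binary_obs):
    return float(np.mean((prob_forecasts - binary_obs) ** 2))

def brier_skill_score(prob_forecasts, obs, threshold):
    """BSS > 0 means the forecast beats the sampled-climatology reference."""
    event = (obs > threshold).astype(float)          # observed exceedances (0/1)
    bs_forecast = brier_score(prob_forecasts, event)
    climatology = event.mean()                       # sampled climatological frequency
    bs_reference = brier_score(np.full_like(event, climatology), event)
    return 1.0 - bs_forecast / bs_reference

rng = np.random.default_rng(1)
obs = rng.gamma(0.4, 6.0, size=500)                  # synthetic 24-h precip obs (mm)
probs = np.clip((obs > 5.0) * 0.7 + rng.uniform(0, 0.3, 500), 0, 1)  # toy forecast probs
print(round(brier_skill_score(probs, obs, threshold=5.0), 3))
```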

Full access