Search Results

You are looking at 1–9 of 9 items for:

  • Model performance/evaluation
  • 12th International Precipitation Conference (IPC12)
  • Journal of Hydrometeorology
  • Refine by Access: All Content
Samantha H. Hartke, Daniel B. Wright, Dalia B. Kirschbaum, Thomas A. Stanley, and Zhe Li

of localized error modeling; regional variability resulting from sampling error (e.g., insufficient record length to sample the extreme tail of the local precipitation distribution) would argue in favor of a regional approach. The consequence of differences between regional and localized CSGD error models is not readily apparent in comparing the CSGD model results (i.e., Fig. 5c), but is best evaluated through the resulting performance of probabilistic LHASA, which is discussed below. More work

Restricted access
Phu Nguyen, Mohammed Ombadi, Vesta Afzali Gorooh, Eric J. Shearer, Mojtaba Sadeghi, Soroosh Sorooshian, Kuolin Hsu, David Bolvin, and Martin F. Ralph

to perform evaluations of precipitation data because of its high spatiotemporal resolution, multidecadal extent, and proven accuracy from manual quality control (Beck et al. 2019). In the present study, Stage IV (hereafter referred to as ST4) is used as a baseline over the contiguous United States (CONUS) against which the performance of PDIR-Now and PERSIANN-CCS is benchmarked. 2) IMERG final run: IMERG is a half-hourly 0.1° × 0.1° precipitation dataset that uses both PMW and IR data. The

Open access
Zhe Li, Daniel B. Wright, Sara Q. Zhang, Dalia B. Kirschbaum, and Samantha H. Hartke

-based characterization to convolving radius, rainfall thresholds, and matching rules are further discussed in section 5. c. Evaluation metrics: We first applied two conventional methods to evaluate the relative performance of IMERG-L and NU-WRF: storm total accumulations and scatterplots of pixel-scale hourly precipitation estimates. For the latter, we quantified the performance in terms of three widely used evaluation metrics, including the relative bias (RB), root-mean-square error (RMSE), and Pearson linear
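The three metrics named in this excerpt (RB, RMSE, and Pearson linear correlation) are standard in precipitation evaluation and can be sketched as below; the function and the toy arrays are illustrative, not taken from the paper:

```python
import numpy as np

def evaluation_metrics(est, ref):
    """Relative bias (RB), RMSE, and Pearson linear correlation between
    estimated and reference precipitation arrays (same shape, mm/h)."""
    est = np.asarray(est, dtype=float)
    ref = np.asarray(ref, dtype=float)
    rb = (est.sum() - ref.sum()) / ref.sum()   # relative bias of total accumulation
    rmse = np.sqrt(np.mean((est - ref) ** 2))  # root-mean-square error
    cc = np.corrcoef(est, ref)[0, 1]           # Pearson linear correlation
    return rb, rmse, cc

# Toy example: estimates that slightly overestimate the reference
rb, rmse, cc = evaluation_metrics([1.1, 2.2, 3.1], [1.0, 2.0, 3.0])
```

A positive RB here flags net overestimation, while the correlation measures linear agreement independently of any multiplicative bias.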

Restricted access
Shruti A. Upadhyaya, Pierre-Emmanuel Kirstetter, Jonathan J. Gourley, and Robert J. Kuligowski

observations and MWCOMB rates in order to detect and quantify precipitation at a spatial scale of ~2 km at nadir and 5-min temporal resolution across the conterminous United States (CONUS) (15 min across North and South America). These algorithmic advances demand assessment of their performance. Evaluating the accuracy of satellite precipitation products has always been one of the primary objectives of the International Precipitation Working Group (IPWG). This is

Free access
Alberto Ortolani, Francesca Caparrini, Samantha Melani, Luca Baldini, and Filippo Giannetti

the present experiments the results suggest that N = 100 is satisfactory, since a higher number (N = 500) leads to similar errors, whereas N = 50 yields poorer estimates. This is encouraging for the feasibility of the method, but further evaluation is needed for application to larger domains with different configurations and real data. Other synthetic experiments have been performed with the same cylindrical storm but with different true and modeled advection velocities, obtaining similar
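The ensemble-size reasoning in this excerpt (pick N where a larger ensemble no longer reduces the error) can be illustrated with a purely synthetic Monte Carlo sketch; the setup below is a stand-in and not the paper's cylindrical-storm experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 5.0  # known synthetic "true" value to be estimated

def ensemble_error(n, trials=200):
    """Mean absolute error of an n-member ensemble mean of noisy
    observations of `truth`, averaged over repeated trials."""
    errs = [abs(np.mean(truth + rng.normal(0.0, 1.0, n)) - truth)
            for _ in range(trials)]
    return float(np.mean(errs))

# Error shrinks roughly like 1/sqrt(N); gains flatten as N grows,
# which is the kind of evidence used to settle on a moderate N.
errs = {n: ensemble_error(n) for n in (50, 100, 500)}
```

Comparing errs[50], errs[100], and errs[500] reproduces the qualitative pattern in the excerpt: a small ensemble is noticeably worse, while a very large one buys little over a moderate one.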

Open access
F. Joseph Turk, Sarah E. Ringerud, Yalei You, Andrea Camplani, Daniele Casella, Giulia Panegrossi, Paolo Sanò, Ardeshir Ebtehaj, Clement Guilloteau, Nobuyuki Utsumi, Catherine Prigent, and Christa Peters-Lidard

imager perspective: Observational analysis and precipitation retrieval evaluation. J. Atmos. Oceanic Technol., 38, 293–311, https://doi.org/10.1175/JTECH-D-20-0064.1. Mitrescu, C., T. L'Ecuyer, J. Haynes, S. Miller, and F. J. Turk, 2010: CloudSat precipitation profiling algorithm-model description. J. Appl. Meteor. Climatol., 49, 991–1003, https://doi.org/10.1175/2009JAMC2181.1. Mroz, K., M. Montopoli

Restricted access
Nobuyuki Utsumi, F. Joseph Turk, Ziad S. Haddad, Pierre-Emmanuel Kirstetter, and Hyungjun Kim

et al. 2007). Interestingly, GPROF, on the other hand, shows overestimation over the same range (0.1–1 mm h−1) except for snow surfaces (Table 1). The reason for the opposite biases of EPC and GPROF in the weak precipitation range requires further investigation. Note that the very light precipitation range (less than 0.1 mm h−1) is not included in the assessment (Table 1). Performance evaluation for such a very light range is more strongly influenced by the precipitation detection skill of
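The range-restricted comparison described in this excerpt (biases assessed over 0.1–1 mm h−1, with lighter rates screened out) can be sketched as follows; the function name, thresholds as defaults, and toy data are illustrative, not from the paper:

```python
import numpy as np

def range_bias(est, ref, lo=0.1, hi=1.0):
    """Mean bias (est - ref) over pixels whose reference rate falls in
    [lo, hi) mm/h. Pixels below `lo` are excluded, mirroring assessments
    that leave out very light precipitation, where results are dominated
    by detection skill rather than rate estimation."""
    est = np.asarray(est, dtype=float)
    ref = np.asarray(ref, dtype=float)
    mask = (ref >= lo) & (ref < hi)
    return float(np.mean(est[mask] - ref[mask]))

# Toy pixels: only the 0.4 and 0.7 mm/h reference rates fall in range,
# so the 0.05 (too light) and 1.5 (too heavy) pixels are ignored.
bias = range_bias([0.5, 0.05, 2.0, 0.8], [0.4, 0.05, 1.5, 0.7])
```

A positive result over this window would correspond to the overestimation the excerpt attributes to GPROF in the 0.1–1 mm h−1 range.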

Open access
Andrea Camplani, Daniele Casella, Paolo Sanò, and Giulia Panegrossi

water within the snowpack strongly enhances absorption at the expense of volume scattering (Rott and Nagler 1995; Amlien 2008). Hewison and English (1999) developed a model representing the microwave emissivity spectra of sea ice and snow cover obtained from airborne measurements. Different behaviors have been observed for different types of snowpack, with an evident decrease in emissivity with increasing frequency (within the MW range) for dry snow, and a high and stable emissivity for fresh

Open access
Chandra Rupa Rajulapati, Simon Michael Papalexiou, Martyn P. Clark, Saman Razavi, Guoqiang Tang, and John W. Pomeroy

model driven has utility in poorly gauged regions. Climate is also associated with data product performance in describing extremes, so the question naturally arises: "which product is best for studying extremes?" The results presented here suggest that MSWEP performs better, yet no single global product works best for all regions and climates. It is therefore still advisable to include multiple products in a study to gain insight into the uncertainties in

Restricted access