Search Results

You are looking at 1–10 of 13 items for:

  • Model performance/evaluation
  • 12th International Precipitation Conference (IPC12)
  • User-accessible content
Giuseppe Mascaro

fixed. For given N_val and N_τ, steps 3 and 4 are repeated 10 times to characterize the sampling variability of the daily gauges. Overall, 10 × 10 × 8 × 10 = 8000 Monte Carlo iterations are performed. The application of the IDF models at ungauged sites (i.e., the N_val validation gauges) involves the spatial interpolation of the model parameters (see Table 1 and section 3b), which can be achieved with several techniques (see, e.g., Watkins et al. 2005). Since evaluating the performance

Restricted access
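The excerpt above notes that applying the IDF models at ungauged validation sites requires spatial interpolation of the model parameters, which can be done with several techniques. Below is a minimal Python sketch of one such technique, inverse-distance weighting; the coordinates, parameter values, and power exponent are illustrative and not taken from the paper.

    import numpy as np

    def idw_interpolate(xy_gauges, values, xy_target, power=2.0):
        """Inverse-distance-weighted estimate of an IDF model parameter at an
        ungauged (validation) site from the calibration gauges."""
        d = np.linalg.norm(xy_gauges - xy_target, axis=1)
        if np.any(d == 0):                     # target coincides with a gauge
            return values[np.argmin(d)]
        w = 1.0 / d**power                     # inverse-distance weights
        return np.sum(w * values) / np.sum(w)

    # Illustrative call: 25 fictitious calibration gauges and one validation site.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0.0, 100.0, size=(25, 2))       # gauge coordinates (km)
    theta = rng.gamma(2.0, 5.0, size=25)             # parameter estimated at each gauge
    print(idw_interpolate(xy, theta, np.array([50.0, 50.0])))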
Abby Stevens, Rebecca Willett, Antonios Mamalakis, Efi Foufoula-Georgiou, Alejandro Tejedor, James T. Randerson, Padhraic Smyth, and Stephen Wright

-LENS simulations are produced on a different spatial grid from that of the SST observations, we interpolated late summer and early fall SSTs from CESM-LENS linearly onto the observation grid. We emphasize that the CESM-LENS outputs are used only to estimate the covariance matrix of the predictors, while the training of our model (estimation of the regression parameters and coefficients) and its performance evaluation (see section 4) are always performed using the observed SST and precipitation series in the

Open access
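The excerpt above describes two distinct steps: linearly regridding CESM-LENS SSTs onto the observation grid, and using the large ensemble only to estimate the covariance matrix of the predictors. The sketch below illustrates that logic with bilinear interpolation and a sample covariance; the grid sizes, number of fields, and random data are placeholders rather than the study's actual configuration.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Coarsened, illustrative grids; the real CESM-LENS and observation grids are finer.
    lat_m = np.linspace(-89.0, 89.0, 90); lon_m = np.linspace(1.0, 359.0, 180)   # "model" grid
    lat_o = np.linspace(-87.0, 87.0, 30); lon_o = np.linspace(3.0, 357.0, 60)    # "obs" grid
    n_fields = 400                                   # stand-in for members x years
    rng = np.random.default_rng(0)
    model_sst = rng.standard_normal((n_fields, lat_m.size, lon_m.size))

    # Bilinear interpolation of every simulated SST field onto the observation grid.
    lo2, la2 = np.meshgrid(lon_o, lat_o)
    pts = np.column_stack([la2.ravel(), lo2.ravel()])
    regridded = np.empty((n_fields, lat_o.size, lon_o.size))
    for k, field in enumerate(model_sst):
        interp = RegularGridInterpolator((lat_m, lon_m), field, method="linear")
        regridded[k] = interp(pts).reshape(lat_o.size, lon_o.size)

    # Predictor covariance estimated from the ensemble alone; a regression model
    # would still be trained and verified on the observed series.
    X = regridded.reshape(n_fields, -1)
    cov = np.cov(X, rowvar=False)                    # (n_gridpoints x n_gridpoints)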
Clément Guilloteau, Antonios Mamalakis, Lawrence Vulis, Phong V. V. Le, Tryphon T. Georgiou, and Efi Foufoula-Georgiou

interpretability. 5. Conclusions The need to understand patterns of variability and change in climate signals for predictive and diagnostic analysis (e.g., regional prediction, untangling the forced signal from internal variability, and diagnosing the performance of climate models) has never been more pressing. Classical PCA is a well-developed mathematical analysis tool that has been used extensively in climate studies. Its extension to the Fourier frequency domain, spectral PCA

Open access
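The excerpt above refers to spectral PCA, the extension of classical PCA to the Fourier frequency domain. The toy sketch below shows one common formulation (eigendecomposition of the cross-spectral density matrix at each frequency, estimated here with Welch cross-spectra); it is not necessarily the estimator used in the paper, and the synthetic five-channel series is invented for illustration.

    import numpy as np
    from scipy.signal import csd

    # Toy multivariate series: 5 channels sharing a coherent 0.1-cycle/step mode.
    rng = np.random.default_rng(1)
    t = np.arange(2048)
    common = np.sin(2 * np.pi * 0.1 * t)
    X = np.stack([a * common + rng.standard_normal(t.size)
                  for a in (1.0, 0.8, 0.5, 0.2, 0.0)])

    # Cross-spectral density matrix S(f), estimated pairwise by Welch's method.
    n_ch = X.shape[0]
    freqs, _ = csd(X[0], X[0], nperseg=256)
    S = np.zeros((freqs.size, n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, S[:, i, j] = csd(X[i], X[j], nperseg=256)

    # Frequency-domain PCA: eigendecompose S(f) at each frequency; the leading
    # eigenvalue shows how much coherent variance each frequency carries.
    eigvals = np.linalg.eigvalsh(S)        # ascending eigenvalues per frequency
    leading = eigvals[:, -1]
    print(freqs[np.argmax(leading)])       # peaks near 0.1 cycles per step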
Samantha H. Hartke, Daniel B. Wright, Dalia B. Kirschbaum, Thomas A. Stanley, and Zhe Li

of localized error modeling; regional variability resulting from sampling error (e.g., insufficient record length to sample the extreme tail of the local precipitation distribution) would argue in favor of a regional approach. The consequence of differences between regional and localized CSGD error models is not readily apparent from a comparison of the CSGD model results (i.e., Fig. 5c), but is best evaluated through the resulting performance of probabilistic LHASA, which is discussed below. More work

Restricted access
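The excerpt above weighs regional against localized CSGD error models. The sketch below illustrates only that fitting choice, using a simplified, uncensored gamma distribution as a stand-in for the CSGD; the pixel count, record length, and synthetic samples are hypothetical.

    import numpy as np
    from scipy import stats

    # Fictitious nonzero "satellite vs. reference" samples at several pixels of a region.
    rng = np.random.default_rng(2)
    n_pixels, n_obs = 9, 300
    pixel_samples = [rng.gamma(shape=2.0 + 0.2 * k, scale=1.5, size=n_obs)
                     for k in range(n_pixels)]

    # Localized fit: one distribution per pixel (flexible, but noisy with short records).
    local_params = [stats.gamma.fit(s, floc=0.0) for s in pixel_samples]

    # Regional fit: a single distribution from the pooled sample (robust, less site specific).
    regional_params = stats.gamma.fit(np.concatenate(pixel_samples), floc=0.0)

    print("local shape estimates:  ", [round(p[0], 2) for p in local_params])
    print("regional shape estimate:", round(regional_params[0], 2))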
Veljko Petković, Marko Orescanin, Pierre Kirstetter, Christian Kummerow, and Ralph Ferraro

depends on the representativeness, quantity, and quality of the training dataset. To establish a baseline model and evaluate the performance of the approach, we propose a relatively simple scheme and a widely available satellite dataset. Detailed descriptions of the datasets and the DNN model are given below. a. Instruments and data This study employs 2 years of GPM core satellite global observations (66°S–66°N), from September 2014 to August 2015 and from January to December 2017, to explore accuracy

Full access
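The excerpt above describes establishing a baseline with a relatively simple DNN trained on GPM observations. The sketch below shows what such a baseline might look like as a small fully connected network in PyTorch; the 13-channel input (GMI-like brightness temperatures), layer sizes, loss, and synthetic data are assumptions, not the paper's architecture or preprocessing.

    import torch
    from torch import nn

    class BaselineDNN(nn.Module):
        """Deliberately simple baseline: brightness temperatures -> precipitation rate."""
        def __init__(self, n_channels: int = 13, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_channels, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, tb: torch.Tensor) -> torch.Tensor:
            return self.net(tb).squeeze(-1)

    # Minimal training step on synthetic data standing in for collocated GPM
    # brightness temperatures and reference precipitation rates.
    model = BaselineDNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    tb = torch.randn(256, 13)          # fake brightness temperatures
    rate = torch.rand(256)             # fake reference rain rates
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(tb), rate)
    loss.backward()
    optimizer.step()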
Phu Nguyen, Mohammed Ombadi, Vesta Afzali Gorooh, Eric J. Shearer, Mojtaba Sadeghi, Soroosh Sorooshian, Kuolin Hsu, David Bolvin, and Martin F. Ralph

to perform evaluations of precipitation data because of its high spatiotemporal resolution, multidecadal extent, and proven accuracy from manual quality control (Beck et al. 2019). In the present study, Stage IV (hereafter referred to as ST4) is used as a baseline over the contiguous United States (CONUS) against which the performance of PDIR-Now and PERSIANN-CCS is benchmarked. 2) IMERG final run IMERG is a half-hourly 0.1° × 0.1° precipitation dataset that uses both PMW and IR data. The

Open access
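The excerpt above describes benchmarking PDIR-Now and PERSIANN-CCS against Stage IV over CONUS. The sketch below computes a typical set of continuous and categorical scores (bias ratio, correlation, RMSE, POD, FAR, CSI) for two collocated precipitation fields; the 0.1 mm/h rain threshold and the random fields are purely illustrative and not the study's actual evaluation protocol.

    import numpy as np

    def evaluation_scores(estimate, reference, threshold=0.1):
        """Continuous and categorical scores for two collocated fields (mm/h)."""
        est, ref = estimate.ravel(), reference.ravel()
        bias = est.sum() / ref.sum()                       # volume bias ratio
        corr = np.corrcoef(est, ref)[0, 1]
        rmse = np.sqrt(np.mean((est - ref) ** 2))

        hits = np.sum((est >= threshold) & (ref >= threshold))
        misses = np.sum((est < threshold) & (ref >= threshold))
        false_alarms = np.sum((est >= threshold) & (ref < threshold))
        pod = hits / (hits + misses)                       # probability of detection
        far = false_alarms / (hits + false_alarms)         # false alarm ratio
        csi = hits / (hits + misses + false_alarms)        # critical success index
        return dict(bias=bias, corr=corr, rmse=rmse, pod=pod, far=far, csi=csi)

    # Illustrative call with random fields standing in for a satellite product and ST4.
    rng = np.random.default_rng(3)
    ref = rng.gamma(0.3, 2.0, size=(200, 200)) * (rng.random((200, 200)) < 0.3)
    est = ref * rng.lognormal(0.0, 0.4, size=ref.shape)
    print(evaluation_scores(est, ref))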
Efi Foufoula-Georgiou, Clement Guilloteau, Phu Nguyen, Amir Aghakouchak, Kuo-Lin Hsu, Antonio Busalacchi, F. Joseph Turk, Christa Peters-Lidard, Taikan Oki, Qingyun Duan, Witold Krajewski, Remko Uijlenhoet, Ana Barros, Pierre Kirstetter, William Logan, Terri Hogue, Hoshin Gupta, and Vincenzo Levizzani

, precipitation with its notorious complexity offers the most stringent quantitative criterion for evaluating climate model advances (Tapiador et al. 2018). History of international precipitation conferences The International Precipitation Conferences (IPCs) have a long history of successfully bringing together the international community to discuss the newest research findings, integrate research, discuss challenges and opportunities, and craft new directions for precipitation research within a broad

Full access
Shruti A. Upadhyaya, Pierre-Emmanuel Kirstetter, Jonathan J. Gourley, and Robert J. Kuligowski

observations and MWCOMB rates in order to detect and quantify precipitation at a spatial scale of ~2 km at nadir and 5-min temporal resolution across the conterminous United States (CONUS) (15 min across North and South America). Advances in algorithms demand an assessment of their performance. Evaluating the accuracy of satellite precipitation products has always been one of the primary objectives of the International Precipitation Working Group (IPWG: http://www.isac.cnr.it/~ipwg/). This is

Free access
Alberto Ortolani, Francesca Caparrini, Samantha Melani, Luca Baldini, and Filippo Giannetti

the present experiments the results suggest that N = 100 is satisfactory, since a higher number (N = 500) leads to similar errors (by contrast, N = 50 gives poorer estimates). This is encouraging for the feasibility of the method, but further evaluation is needed when it is applied to larger domains with different configurations and real data. Other synthetic experiments have been performed with the same cylindrical storm but with different true and modeled advection velocities, obtaining similar

Open access
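The excerpt above judges N = 100 ensemble members to be satisfactory because N = 500 yields similar errors while N = 50 is poorer. The toy sketch below reproduces only that convergence check with a generic Monte Carlo estimator; it is not the paper's assimilation scheme, and the "truth" and noise level are invented.

    import numpy as np

    rng = np.random.default_rng(4)
    truth = 3.0                 # quantity being estimated
    n_trials = 200              # repetitions used to average the estimation error

    for n_members in (50, 100, 500):
        samples = rng.normal(loc=truth, scale=2.0, size=(n_trials, n_members))
        estimates = samples.mean(axis=1)
        rmse = np.sqrt(np.mean((estimates - truth) ** 2))
        print(f"N = {n_members:3d}: RMSE = {rmse:.3f}")   # shrinks roughly as 1/sqrt(N)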
Nobuyuki Utsumi, F. Joseph Turk, Ziad S. Haddad, Pierre-Emmanuel Kirstetter, and Hyungjun Kim

et al. 2007). Interestingly, on the other hand, GPROF shows overestimation for the same range (0.1–1 mm h−1) except over snow surfaces (Table 1). The reason for the opposite biases of EPC and GPROF in the weak precipitation range needs further investigation in future studies. Note that the very light precipitation range (less than 0.1 mm h−1) is not included in the assessment (Table 1). The performance evaluation for such a very light range is more sensitive to the precipitation detection skill of

Open access
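The excerpt above compares EPC and GPROF biases in the 0.1–1 mm h−1 range while excluding rates below 0.1 mm h−1 from the assessment. The sketch below shows a conditional-bias calculation of that kind on synthetic data; the two retrievals, their error structure, and the thresholds are invented for illustration.

    import numpy as np

    def conditional_bias(retrieval, reference, lo=0.1, hi=1.0):
        """Mean bias over reference rates in [lo, hi) mm/h, excluding the very
        light range below lo."""
        mask = (reference >= lo) & (reference < hi)
        return np.mean(retrieval[mask] - reference[mask])

    # Synthetic stand-ins for two retrievals with opposite biases at weak rain rates.
    rng = np.random.default_rng(5)
    ref = rng.gamma(0.5, 1.0, size=100_000)
    retrieval_a = ref * 0.8 + rng.normal(0.0, 0.05, ref.size)   # underestimates weak rain
    retrieval_b = ref * 1.2 + rng.normal(0.0, 0.05, ref.size)   # overestimates weak rain
    print(conditional_bias(retrieval_a, ref), conditional_bias(retrieval_b, ref))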