Search Results

You are looking at 51–60 of 17,979 items for:

  • Model performance/evaluation
  • All content
Hamish A. Ramsay, Savin S. Chand, and Suzana J. Camargo

simulations, six models are evaluated: HiRAM (hereafter HIRAM), CMCC-ECHAM5 (hereafter CMCC), GISS, CAM5.1 (hereafter CAM5), FSU COAPS (hereafter FSU), and GFS (Table 1), with the first four of these models also used for evaluating the downscaled tracks. Following the approach of Held and Zhao (2011), these simulations comprised four climate scenarios: a control twentieth-century climate and three idealized warming-climate scenarios. The control climate (20C) was forced with climatological

Full access
Isidora Jankov, Lewis D. Grasso, Manajit Sengupta, Paul J. Neiman, Dusanka Zupanski, Milija Zupanski, Daniel Lindsey, Donald W. Hillger, Daniel L. Birkenheuer, Renate Brummer, and Huiling Yuan

1. Introduction Some of the recent activities at the Cooperative Institute for Research in the Atmosphere (CIRA) have been related to the development of synthetic satellite imagery (Greenwald et al. 2002; Grasso and Greenwald 2004; Grasso et al. 2008). The motivation for this activity was to evaluate the performance of a numerical weather prediction model using synthetic satellite imagery. Synthetic imagery was produced from the European Centre for Medium-Range Weather Forecasts (ECMWF

Full access
Hua Song, Wuyin Lin, Yanluan Lin, Audrey B. Wolf, Roel Neggers, Leo J. Donner, Anthony D. Del Genio, and Yangang Liu

locally generated. The models lack such hydrometeor sources because large-scale forcings of hydrometeors are not available from the observations. 5. Summary This study quantitatively evaluates the statistical performance of seven SCMs by comparing simulated precipitation with observations from 1999 to 2001 at the ARM SGP site. The 3-yr evaluation period permits a more robust statistical assessment of many aspects. It is found that although most SCMs can reproduce the observed total precipitation reasonably
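A minimal sketch of the kind of statistical comparison described in this excerpt, assuming per-model precipitation series aligned with an observed record; the metrics (bias, RMSE, correlation), the synthetic data, and the model names are illustrative placeholders, not the paper's actual setup:

```python
# Hypothetical sketch: score each SCM's simulated precipitation series
# against the observed series with simple summary metrics.
import numpy as np

def score_model(sim: np.ndarray, obs: np.ndarray) -> dict:
    """Return basic skill metrics for one model's precipitation series."""
    bias = float(np.mean(sim - obs))
    rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))
    corr = float(np.corrcoef(sim, obs)[0, 1])
    return {"bias": bias, "rmse": rmse, "corr": corr}

# 'obs' and the per-model series stand in for the 1999-2001 ARM SGP
# record and the seven SCM runs discussed in the paper.
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.5, size=1096)  # daily precipitation, mm/day
models = {f"SCM{i}": obs + rng.normal(0, 1, 1096) for i in range(1, 8)}

for name, sim in models.items():
    print(name, score_model(np.clip(sim, 0, None), obs))
```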

Full access
Jian Li, Haoming Chen, Xinyao Rong, Jingzhi Su, Yufei Xin, Kalli Furtado, Sean Milton, and Nina Li

simulated at high resolution, but considerable biases remain in the high-resolution model. Previous studies of model performance for extreme events generally evaluate the quantitative statistical features of extreme precipitation based on long-term model outputs. Because rainfall intensity is usually underestimated in climate models (Dai et al. 2007; Li et al. 2015), evaluations based on relative thresholds can hardly represent extreme events in nature. The severity of the
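A toy illustration of the relative-versus-absolute threshold point made above: a model that underestimates rainfall intensity still has a well-defined 95th percentile of its own distribution, so a percentile-based (relative) threshold hides the bias that a fixed (absolute) threshold exposes. All data here are synthetic.

```python
# Contrast a relative (percentile) threshold, which shifts with the
# model's own intensity distribution, against a fixed absolute threshold.
import numpy as np

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 6.0, size=10000)           # hypothetical observed daily rain (mm)
model = 0.6 * rng.gamma(2.0, 6.0, size=10000)   # model underestimates intensity

for name, x in [("obs", obs), ("model", model)]:
    rel_thr = np.percentile(x[x > 0.1], 95)  # relative: own 95th percentile
    abs_thr = 50.0                           # absolute: fixed 50 mm/day
    print(f"{name}: 95th pct = {rel_thr:.1f} mm, "
          f"days > {abs_thr:.0f} mm = {np.sum(x > abs_thr)}")
```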

Open access
Robert Nedbor-Gross, Barron H. Henderson, Justin R. Davis, Jorge E. Pachón, Alexander Rincón, Oscar J. Guerrero, and Freddy Grajales

1. Introduction Meteorological model performance is critical for successful air quality modeling and necessary for regulatory purposes. The standard model performance evaluation (sMPE) techniques for judging model fidelity are based on statistical thresholds developed by the community from various regional studies, such as Emery et al. (2001). However, these evaluations are likely to fail for high-resolution modeling in regions of complex topography. Despite sMPE failure, a
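A hedged sketch of an sMPE-style benchmark check of the kind described above; the bias and RMSE limits below are placeholder values meant to echo community thresholds such as Emery et al. (2001), not quoted figures.

```python
# Compare surface-wind statistics against assumed benchmark limits and
# flag pass/fail, as a standard model performance evaluation would.
import numpy as np

def smpe_wind(sim, obs, bias_limit=0.5, rmse_limit=2.0):
    """Flag whether wind-speed bias and RMSE fall inside benchmark limits."""
    bias = float(np.mean(sim - obs))
    rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))
    return {
        "bias": bias, "bias_pass": abs(bias) <= bias_limit,
        "rmse": rmse, "rmse_pass": rmse <= rmse_limit,
    }

rng = np.random.default_rng(2)
obs = rng.uniform(0, 10, 720)          # hourly 10-m wind speed (m/s), synthetic
sim = obs + rng.normal(0.3, 1.8, 720)  # model with modest bias and error
print(smpe_wind(sim, obs))
```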

Full access
Mark Decker, Michael A. Brunke, Zhuo Wang, Koichi Sakaguchi, Xubin Zeng, and Michael G. Bosilovich

directly with near-surface observations over various climate regimes and regions. Specifically, the atmospheric quantities used to force land surface models, together with the fluxes of energy and moisture from the land surface to the atmosphere, must be objectively evaluated in the various reanalyses. This study uses flux tower observations from 33 locations in the Northern Hemisphere to evaluate the various reanalysis products from the different centers. The flux network (FLUXNET) is

Full access
Huaqing Cai and Robert E. Dumais Jr.

compiles and compares single-object attribute statistics from both forecasts and observations without matching each individual forecast object with its corresponding observed object; the former must first match the forecast objects with observed objects and then calculate performance statistics, such as the percentage of forecast objects that match observed objects. Davis et al. (2006a,b) showed that both performance metrics are useful for evaluating NWP model storm forecast performance
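A minimal sketch of the object-matching step described above, assuming storm objects reduced to centroid coordinates and a simple distance criterion; the search radius and coordinates are hypothetical, not the matching rule used in the paper.

```python
# Pair each forecast object with the nearest observed object and report
# the percentage of forecast objects that found a match within a radius.
import math

def match_fraction(fcst, obs, radius_km=40.0):
    """Fraction of forecast centroids within radius of any observed centroid."""
    matched = 0
    for fx, fy in fcst:
        if any(math.hypot(fx - ox, fy - oy) <= radius_km for ox, oy in obs):
            matched += 1
    return matched / len(fcst) if fcst else 0.0

fcst_objects = [(10.0, 20.0), (55.0, 80.0), (200.0, 150.0)]  # centroids (km)
obs_objects = [(12.0, 25.0), (60.0, 75.0)]
print(f"{100 * match_fraction(fcst_objects, obs_objects):.0f}% of forecast objects matched")
```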

Full access
Kyoung-Ho Cho, Yan Li, Hui Wang, Kwang-Soon Park, Jin-Yong Choi, Kwang-Il Shin, and Jae-Il Kwon

high-frequency (HF) radar-derived surface currents (Ullman et al. 2006; Abascal et al. 2012). Importantly, the performance of a SAR model must be assessed against observed drifter trajectories. Trajectory assessment has been studied in various ways: spaghetti diagrams (Toner et al. 2001; Nairn and Kawase 2001), statistical separation (Thompson et al. 2003), and circle assessment (Furnans et al. 2005). Circle assessment is designed to evaluate how well a
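A hedged sketch of the trajectory-assessment ideas cited above: the separation distance between modeled and observed drifter positions, and a circle-style score that normalizes that separation by the distance the observed drifter has traveled. This is one plausible reading of circle assessment, not Furnans et al.'s exact formulation; the tracks are synthetic.

```python
# Separation distance between two trajectories, plus a normalized score
# (separation divided by cumulative observed travel distance).
import numpy as np

def separation(model_xy, obs_xy):
    """Per-time separation distance between two (n, 2) trajectories."""
    return np.hypot(*(model_xy - obs_xy).T)

def circle_score(model_xy, obs_xy):
    """Separation normalized by cumulative observed travel distance."""
    sep = separation(model_xy, obs_xy)[1:]
    steps = np.hypot(*np.diff(obs_xy, axis=0).T)
    return sep / np.cumsum(steps)

obs = np.cumsum(np.full((24, 2), 1.0), axis=0)          # hypothetical drifter track (km)
model = obs + np.cumsum(np.full((24, 2), 0.1), axis=0)  # slowly diverging model track
print(circle_score(model, obs)[-1])  # ~0.10 after 24 steps
```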

Full access
Christoph Schlager, Gottfried Kirchengast, and Jürgen Fuchsberger

. 2005). The operational requirement for our application is that the wind fields are automatically generated from the observational data of the WegenerNet in near–real time and stored in the WegenerNet archives with a spatial resolution of 100 m × 100 m and a time resolution of 30 min. Furthermore, the performance of these wind fields must be evaluated for periods with representative weather conditions. The paper is structured as follows. Section 2 provides a

Full access
David Gampe, Josef Schmid, and Ralf Ludwig

future projections for precipitation? How large is the contribution of the reference dataset selection to the overall uncertainty? A simple validity-based ranking scheme is introduced, in which the performance of each RCM is assessed against each of the included reference datasets. The RCM ensemble is then bias corrected using each of the reference datasets, and the resulting climate change signals for the selected models are presented. To quantify the uncertainty introduced by the selection
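A toy sketch of the ranking idea described above, assuming one skill score (e.g., an RMSE) per RCM per reference dataset; ranking the models separately under each dataset exposes how much the reference choice alone reshuffles the apparent model ordering. All names and scores here are invented.

```python
# Rank each RCM under each reference dataset (rank 1 = smallest error)
# and print the per-dataset ranks side by side.
import numpy as np

rcms = ["RCM-A", "RCM-B", "RCM-C", "RCM-D"]
refs = ["REF-1", "REF-2", "REF-3"]
rng = np.random.default_rng(3)
errors = rng.uniform(0.5, 2.0, size=(len(rcms), len(refs)))  # e.g., RMSE vs each reference

ranks = errors.argsort(axis=0).argsort(axis=0) + 1  # per-column ranking trick
for i, rcm in enumerate(rcms):
    print(rcm, dict(zip(refs, ranks[i])))
```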

Full access