Search Results

You are looking at items 11–20 of 17,971 for:

  • Model performance/evaluation
  • All content
R. Padilla-Hernández, W. Perrie, B. Toulany, and P. C. Smith

observations. They used a model spatial resolution of 1/12° × 1/12°, much finer than is typically implemented in operational forecasting, in order to capture the rapidly varying wave field generated by Hurricane Bonnie. For example, the fine-resolution wave model grid for the Gulf of Maine Ocean Observing System (GoMOOS; information online at www.gomoos.org) is 0.2° × 0.2°. In this study, three modern, widely used third-generation spectral wave models are evaluated: (a) the Simulating Waves

Full access
Qing Yang, Larry K. Berg, Mikhail Pekour, Jerome D. Fast, Rob K. Newsom, Mark Stoelinga, and Catherine Finley

features in the UW scheme are likely to improve boundary layer wind predictions. The goal of this research is to characterize ramp occurrence over the CBWES site, which lies within an operating wind farm; to evaluate the WRF model's capability in ramp prediction; and to test the sensitivity of the model's performance to the choice of PBL schemes. Results of this study are also intended to provide recommendations to the wind-energy community regarding the choice of PBL schemes in WRF in areas of complex

Full access
Mohammad Safeeq, Guillaume S. Mauger, Gordon E. Grant, Ivan Arismendi, Alan F. Hamlet, and Se-Yeun Lee

important to evaluate and assess whether any model inherits systematic biases, whether these are more prevalent in some landscapes than others, and whether these biases can be reduced to improve model performance. Any evaluation of bias should also address how the choice of model (Vano et al. 2012), meteorological data (Elsner et al. 2014), or even parameterization scheme (Tague et al. 2013) affects model behavior. Here, we examine the source of a range of parametric and structural uncertainties

Full access
Pius Lee, Daiwen Kang, Jeff McQueen, Marina Tsidulko, Mary Hart, Geoff DiMego, Nelson Seaman, and Paula Davidson

Atmospheric Chemistry, Seattle, WA, Amer. Meteor. Soc., J2.10. Eder, B., and S. Yu, 2006: A performance evaluation of the 2004 release of Models-3 CMAQ. Atmos. Environ., 40, 4811–4824. EPA, 2003: User’s guide to MOBILE6.1 and MOBILE6.2 (Mobile Source Emission Factor Model). U.S. Environmental Protection Agency Rep. EPA420-R-03-010, 262 pp. EPA, cited 2005: 2005 summer ozone season—Archive. [Available online at http

Full access
Xiaoli Yang, Xiaohan Yu, Yuqian Wang, Xiaogang He, Ming Pan, Mengru Zhang, Yi Liu, Liliang Ren, and Justin Sheffield

full ensemble, which could be adopted by hydrological models for climate change effects assessment across China. This paper is organized as follows. Sections 2 and 3 describe the datasets used in the study as well as the statistical downscaling methods, the bias-correction method, the performance evaluation metrics, and the subensemble selection methods. Section 4 presents the main results, including evaluation of the model performance, optimization of the model subensemble, and its

Free access
M. J. Best, G. Abramowitz, H. R. Johnson, A. J. Pitman, G. Balsamo, A. Boone, M. Cuntz, B. Decharme, P. A. Dirmeyer, J. Dong, M. Ek, Z. Guo, V. Haverd, B. J. J. van den Hurk, G. S. Nearing, B. Pak, C. Peters-Lidard, J. A. Santanello Jr., L. Stevens, and N. Vuichard

relative errors, this type of analysis also identifies metrics for which one model performs better than another, or where errors in multiple models are systematic. This has the advantage, over plain evaluation, of giving a clear indication that performance improvements are achievable for those metrics where another model already performs better. For example, in Fig. 1b, metrics 4 and 11 apparently have the largest relative errors for both models A and B (and are hence likely to be flagged as development
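The comparison described above, flagging metrics where another model already performs better (and hence where improvement is demonstrably achievable), can be sketched as follows. The error values and the two-model setup are hypothetical, not taken from the study:

```python
import numpy as np

# Hypothetical relative errors (0 = perfect) for two models evaluated
# on the same four performance metrics; values are illustrative only.
err_a = np.array([0.30, 0.10, 0.45, 0.25])
err_b = np.array([0.20, 0.15, 0.50, 0.10])

# Metrics where model B already beats model A: for these, improvement
# in model A is demonstrably achievable, which a plain error ranking
# of model A alone would not reveal.
achievable_for_a = np.flatnonzero(err_b < err_a)
```

Here `achievable_for_a` holds the indices of metrics 0 and 3, where model B's smaller error shows that model A's performance is not at the limit of what the metric allows.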

Full access
Sara A. Rauscher, Todd D. Ringler, William C. Skamarock, and Arthur A. Mirin

not been so rigorously evaluated in idealized settings, or at a minimum that performance has not been well documented. As part of a hierarchical approach to test the veracity of global high-resolution and global variable-resolution simulations for regional modeling applications, we analyze a series of idealized, full-physics aquaplanet test cases produced using CAM version 5 coupled to the new MPAS dynamical core (CAM-MPAS). To provide context for this analysis, the new CAM-MPAS model is compared

Full access
Lei Zhang, YinLong Xu, ChunChun Meng, XinHua Li, Huan Liu, and ChangGui Wang

simulation relative to observation. Systematic comparisons of simulations are performed between GCMs and PRECIS, between statistical and dynamical downscaling, and between raw and bias-corrected outputs, in both the spatial and temporal dimensions over the baseline period 1961–90. The overall performance of a model can be evaluated with a comprehensive ranking metric (MR) (Jiang et al. 2015; Ahmed et al. 2019), which is defined as (9) MR = 1 − (1/(nm)) ∑_{i=1}^{n} rank_i, where m is the number of
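The ranking metric in this excerpt, MR = 1 − Σ rank_i / (nm), can be illustrated with a short sketch. The error scores below are hypothetical; per the formula, a model ranked best (rank 1) on every metric approaches MR = 1, and one ranked worst everywhere gets MR = 0:

```python
import numpy as np

def comprehensive_rank(errors):
    """MR_j = 1 - sum_i rank_ij / (n*m) for each model j, following the
    metric form quoted in the excerpt (Jiang et al. 2015), where errors
    is an (n_metrics, m_models) array and lower error is better.
    Note: ties would need average ranks (e.g. scipy.stats.rankdata)."""
    n, m = errors.shape
    # Rank the models on each metric: 1 = best (smallest error).
    ranks = errors.argsort(axis=1).argsort(axis=1) + 1
    return 1.0 - ranks.sum(axis=0) / (n * m)

# Hypothetical scores: 3 metrics (rows) x 3 models (columns).
errors = np.array([[0.2, 0.5, 0.9],
                   [0.1, 0.4, 0.3],
                   [0.6, 0.2, 0.8]])
mr = comprehensive_rank(errors)  # higher MR = better overall ranking
```

For these numbers the per-model rank sums are 4, 6, and 8 over nm = 9 slots, so the first model scores highest.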

Free access
Noel C. Baker and Patrick C. Taylor

1. Introduction Efforts to standardize climate model experiments and collect simulation data—such as the Coupled Model Intercomparison Project (CMIP)—provide the means to intercompare and evaluate climate models. The evaluation of models using observations is a critical component of model assessment. Performance metrics—or the systematic determination of model biases—succinctly quantify aspects of climate model behavior. With the many global climate models that participate in CMIP, it is
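A performance metric in the sense above is a scalar summary of model–observation differences. A minimal sketch, using generic mean-bias and RMSE metrics rather than the specific CMIP metrics the study employs, and with made-up temperature values:

```python
import numpy as np

def bias_metrics(model, obs):
    """Generic performance metrics: mean bias and RMSE of a modeled
    field against observations (illustrative only, not the paper's
    metric set)."""
    diff = np.asarray(model, dtype=float) - np.asarray(obs, dtype=float)
    return {"mean_bias": float(diff.mean()),
            "rmse": float(np.sqrt((diff ** 2).mean()))}

# Hypothetical surface temperatures (K) at three grid points.
scores = bias_metrics([288.1, 290.4, 285.0], [287.6, 290.0, 286.2])
```

The mean bias keeps the sign of systematic over- or underestimation, while the RMSE penalizes scatter; reporting both succinctly quantifies two distinct aspects of model behavior.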

Full access
Baozhang Chen, Jing M. Chen, Gang Mo, Chiu-Wai Yuen, Hank Margolis, Kaz Higuchi, and Douglas Chan

carbon fluxes. The purposes of this study are threefold: (i) to test and verify the capability and accuracy of the EASS model when coupled to a GCM and applied to a large, significantly heterogeneous area such as Canada’s landmass; (ii) to evaluate the effect of land-cover heterogeneity on regional energy, water, and carbon flux simulations based on remote sensing data; and (iii) to explore upscaling methodologies using satellite-derived data. In this paper, we briefly present a basic

Full access