Search Results

You are looking at items 21–30 of 17,989 for:

  • Model performance/evaluation
  • All content
Edward D. Zaron and Shane Elipot

taken here uses variance reduction statistics to evaluate model performance. Thus, a partial tide prediction is computed for a single constituent, say, M2, and this prediction is subtracted from the observations. The variance of the residual, and the difference in variance compared to the original, may then be compared among the models. An advantage of this approach is that it permits the use of more types of data for model intercomparison than could be used for comparisons of harmonic constants
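
A minimal sketch of the variance-reduction statistic described in this snippet, in Python with NumPy; the M2 period is standard, but the record length, amplitude, and phase below are invented placeholders, not values from the paper:

    import numpy as np

    # Hypothetical hourly sea level record (m); amplitude and phase are illustrative.
    omega_m2 = 2 * np.pi / 12.4206012        # M2 angular frequency (rad per hour)
    t = np.arange(0.0, 24 * 30, 1.0)         # one month of hourly time stamps

    rng = np.random.default_rng(0)
    obs = 1.2 * np.cos(omega_m2 * t - 0.8) + 0.1 * rng.standard_normal(t.size)

    # Partial tide prediction for the single constituent under evaluation.
    A, g = 1.15, 0.75                        # modeled M2 amplitude (m) and phase (rad)
    pred = A * np.cos(omega_m2 * t - g)

    # Subtract the prediction and compare residual variance with the original;
    # the variance reduction can then be compared among competing models.
    resid = obs - pred
    reduction = np.var(obs) - np.var(resid)
    print(f"variance explained: {100 * reduction / np.var(obs):.1f}%")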

Restricted access
Jonathan E. Pleim

still essential to validate overall performance of three-dimensional modeling systems with the ACM2 used for the PBL parameterization. Section 2 describes the implementation of the ACM2 in the MM5, including the detailed formulation of the eddy diffusivities used and the numerical integration techniques. The specifics of the MM5 simulations used for testing and evaluation are summarized in section 3. This section also presents the evaluation of the MM5 applications of the ACM2 through comparison

Full access
A. M. Ukkola, A. J. Pitman, M. G. De Kauwe, G. Abramowitz, N. Herger, J. P. Evans, and M. Decker

) for all soil layers within the top 3 m and (where applicable) any layer partly within 3 m weighted by the fraction of this layer located above 3 m. b. Observations We used two observed precipitation products to evaluate model performance. These were global monthly time series products by 1) the Climatic Research Unit (CRU TS 3.23; Harris et al. 2014) and 2) Global Precipitation Climatology Centre (GPCC, version 7; Schneider et al. 2016). Both products are available at a 0.5° spatial resolution
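
A small sketch of the layer weighting described in this snippet (soil moisture averaged over the top 3 m, with a straddling layer weighted by the fraction of it above 3 m); the layer bounds and moisture values are invented for illustration:

    import numpy as np

    # Hypothetical soil layer boundaries (m) and volumetric soil moisture per layer.
    layer_tops    = np.array([0.0, 0.1, 0.4, 1.0, 2.0])
    layer_bottoms = np.array([0.1, 0.4, 1.0, 2.0, 4.0])   # last layer straddles 3 m
    theta         = np.array([0.30, 0.28, 0.25, 0.22, 0.20])

    depth_limit = 3.0
    # Thickness of each layer lying above the 3-m limit; a layer only partly
    # within 3 m contributes in proportion to the fraction located above 3 m.
    thickness_above = np.clip(np.minimum(layer_bottoms, depth_limit) - layer_tops, 0.0, None)
    weighted_mean = np.sum(theta * thickness_above) / np.sum(thickness_above)
    print(f"depth-weighted soil moisture, top 3 m: {weighted_mean:.3f}")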

Full access
Kristie J. Franz, Terri S. Hogue, and Soroosh Sorooshian

evaluation of a model requires three critical elements: a performance criterion, a benchmark, and an outcome. Performance criterion refers to the ability to match the desired variable being modeled; in this instance the variables of interest are simulated snow water equivalent (SWE), melt, and discharge. The benchmark is an alternative to the model being evaluated. Given forecasting as the proposed application of the SAST, our benchmark is identified as the operational National Weather Service (NWS) SNOW
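
The criterion/benchmark/outcome framing maps naturally onto a benchmark-relative skill score; a minimal sketch, with invented SWE series standing in for the observations, the candidate model, and the benchmark:

    import numpy as np

    def skill_score(obs, model, benchmark):
        # 1 - MSE(model)/MSE(benchmark): positive values beat the benchmark.
        mse_model = np.mean((model - obs) ** 2)
        mse_bench = np.mean((benchmark - obs) ** 2)
        return 1.0 - mse_model / mse_bench

    # Hypothetical daily SWE (mm): observations, candidate model, benchmark alternative.
    obs       = np.array([10.0, 25.0, 60.0, 90.0, 70.0, 30.0, 5.0])
    model     = np.array([12.0, 22.0, 55.0, 95.0, 65.0, 28.0, 8.0])
    benchmark = np.array([15.0, 15.0, 50.0, 80.0, 80.0, 40.0, 15.0])
    print(f"skill vs. benchmark: {skill_score(obs, model, benchmark):.2f}")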

Full access
Tomislava Vukicevic, Isidora Jankov, and John McGinley

-season mesoscale convective systems (Jankov et al. 2007a) and cold-season orographical forcing (Jankov et al. 2007b). In the current study we present a technique that unifies evaluation of the forecast uncertainties produced either by initial conditions or different model versions, or both. The technique consists of first diagnosing the performance of the forecast ensemble, which is based on explicit use of the analysis uncertainties, and then optimizing the ensemble forecast using results of the diagnosis
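
The snippet gives no mechanics, so the following is only a generic diagnose-then-reweight illustration in the same spirit, not the authors' technique: member errors are diagnosed against analyses, and the ensemble mean is then reweighted inversely to those errors.

    import numpy as np

    rng = np.random.default_rng(1)
    analysis = rng.normal(0.0, 1.0, size=20)                 # verifying analyses
    noise_sd = np.array([[0.3], [0.5], [0.8], [1.2], [0.4]]) # per-member error levels
    members = analysis + rng.normal(0.0, noise_sd, size=(5, 20))

    # Diagnosis step: mean-square error of each member against the analyses.
    mse = np.mean((members - analysis) ** 2, axis=1)

    # "Optimization" step (generic stand-in): weight members inversely to error.
    w = (1.0 / mse) / np.sum(1.0 / mse)
    print(f"equal-weight MSE:  {np.mean((members.mean(axis=0) - analysis) ** 2):.3f}")
    print(f"weighted-mean MSE: {np.mean((w @ members - analysis) ** 2):.3f}")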

Full access
Jiali Wang and Veerabhadra R. Kotamarthi

R-2 data. All experiments employed the same model domain, relaxation zones, physics options, initial conditions, and boundary conditions, and the simulation periods were identical to those of the control experiment. In section 3, we use the experiment numbers shown in Table 1 to represent different nudging experiments. Table 1. Summary of experimental design. 3. Results For a better evaluation of model performance, we divided the portion of the CONUS without oceans and lakes (30°–49°N, 122

Full access
Wenyi Xie, Xiankui Zeng, Dongwei Gui, Jichun Wu, and Dong Wang

. 2003), the Common Land Model (CLM) (Dai et al. 2003), Variable Infiltration Capacity (VIC) (Liang et al. 1994), and Mosaic (Koster and Suarez 1996). These LSMs use the meteorological forcing dataset to simulate land surface fluxes and states with a 15-min time step. Zaitchik et al. (2010) used a source-to-sink routing method and global river discharge data to evaluate the performance of the four LSM datasets, and found that the evaluation results were greatly affected by the errors of the

Restricted access
Taylor A. McCorkle, John D. Horel, Alexander A. Jacques, and Trevor Alcott

accompanied by as much as a 35°C temperature increase over 36 h within the Fort Greely mesonet available via MesoWest. The rapid warming and onset of the downslope windstorm allows for an evaluation of the model’s performance when weather conditions are rapidly changing. Downslope windstorms have been studied extensively at various locales across Alaska (Murray 1956; Colman and Dierking 1992; Overland and Bond 1993; Hopkins 1994; Nance and Colman 2000) and in the continental United States

Full access
Keith M. Hines and David H. Bromwich

–Monteith formula may saturate the boundary layer and induce spurious water clouds. Overall, the surface radiation balance for June 2001 appears to be better simulated with Polar WRF than with Polar MM5. The June 2001 simulations were also evaluated by statistical comparison to AWS observations. Table 4 shows the average of the model performance statistics for the AWS locations in Table 1. Up to 12 stations are available for the June 2001 average. The sites JAR2 and JAR3 are excluded in the
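
A sketch of the kind of station-averaged statistics this snippet refers to: a bias and RMSE per AWS site, then an average across sites; all numbers here are invented:

    import numpy as np

    # Hypothetical (station, time) arrays of observed and modeled 2-m temperature (degC).
    obs = np.array([[-20.1, -18.4, -21.0],
                    [-25.3, -24.8, -26.1]])
    mod = np.array([[-19.0, -18.9, -20.2],
                    [-26.0, -25.1, -27.0]])

    bias = np.mean(mod - obs, axis=1)                  # one bias per station
    rmse = np.sqrt(np.mean((mod - obs) ** 2, axis=1))  # one RMSE per station

    # Average of the per-station statistics, as in a multi-site summary table.
    print(f"mean bias: {bias.mean():+.2f} degC, mean RMSE: {rmse.mean():.2f} degC")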

Full access
Weiguo Wang, William J. Shaw, Timothy E. Seiple, Jeremy P. Rishel, and Yulong Xie

are among the most important variables that determine where the pollutants will be carried. In this regard, overall statistical evaluation may be insufficient to assess whether a model could be useful to help decision makers respond to individual events. Therefore, we further examined the performance of CALMET by comparing the trajectories of individual pollutant parcels driven by the wind fields from CALMET and from the reference data. Trajectory analyses integrate the effects of temporally and
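
A toy sketch of the trajectory comparison idea: integrate a parcel forward under two wind fields and measure how far the trajectories separate; the analytic winds below are stand-ins for the CALMET and reference fields, not real data:

    import numpy as np

    def advect(x0, y0, wind, dt=600.0, nsteps=144):
        # Forward-Euler parcel trajectory under wind(x, y) -> (u, v), positions in meters.
        xs, ys = [x0], [y0]
        for _ in range(nsteps):
            u, v = wind(xs[-1], ys[-1])
            xs.append(xs[-1] + u * dt)
            ys.append(ys[-1] + v * dt)
        return np.array(xs), np.array(ys)

    # Stand-in wind fields; in practice these are interpolated gridded winds.
    calmet_wind    = lambda x, y: (5.0, 1.0 + 1e-6 * x)
    reference_wind = lambda x, y: (5.5, 0.8 + 1e-6 * x)

    xc, yc = advect(0.0, 0.0, calmet_wind)
    xr, yr = advect(0.0, 0.0, reference_wind)
    sep = np.hypot(xc - xr, yc - yr)   # pointwise separation between the trajectories
    print(f"separation after 24 h: {sep[-1] / 1000:.1f} km")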

Full access