Search Results

You are looking at 1–10 of 1,605 items for:

  • Model performance/evaluation
  • Weather and Forecasting
R. Padilla-Hernández, W. Perrie, B. Toulany, and P. C. Smith

observations. They used a model spatial resolution of 1/12° × 1/12° in order to capture the rapidly varying wave field generated by Hurricane Bonnie, which is much finer than is typically implemented in operational forecasting. For example, the fine-resolution wave model grid for the Gulf of Maine Ocean Observing System (GoMOOS) is 0.2° × 0.2°. In this study, three modern, widely used third-generation spectral wave models are evaluated: (a) the Simulating Waves

Full access
Taylor A. McCorkle, John D. Horel, Alexander A. Jacques, and Trevor Alcott

accompanied by as much as a 35°C temperature increase over 36 h within the Fort Greely mesonet, available via MesoWest. The rapid warming and onset of the downslope windstorm allow for an evaluation of the model’s performance when weather conditions are rapidly changing. Downslope windstorms have been studied extensively at various locales across Alaska ( Murray 1956 ; Colman and Dierking 1992 ; Overland and Bond 1993 ; Hopkins 1994 ; Nance and Colman 2000 ) and in the continental United States

Full access
Nicole Mölders

performance of the WRF than other models, however, may be an artifact of climatologically low wind speed in interior Alaska in general and in June 2005 in particular; the June average wind speed for Fairbanks is 3.3 m s⁻¹. 5. Fire indices evaluation WRF-derived and observation-derived fire indices do not differ significantly. For all fire indices, the spatial standard deviation increases with time for both the predictions and the observations because the level of fire danger develops differently at the various

Full access
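The growth of spatial standard deviation described in the excerpt above can be illustrated with a small sketch. The grid size, index values, and spread rates below are hypothetical assumptions for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fire-index fields on a 50 x 50 grid for successive forecast
# days; the spread between grid points widens as fire danger develops
# differently from place to place (assumed growth rate of 0.5 per day).
fields = [20.0 + rng.normal(0.0, 1.0 + 0.5 * day, size=(50, 50))
          for day in range(5)]

# Spatial standard deviation of each day's field
spatial_sd = [field.std() for field in fields]
print([round(s, 2) for s in spatial_sd])  # values trend upward with forecast day
```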
Wenqing Zhang, Lian Xie, Bin Liu, and Changlong Guan

1. Introduction The track, intensity, and size of tropical cyclones (TCs) have been used as evaluation parameters in assessing TC forecasts or the performance of TC numerical forecast models since the first attempts were made at forecasting TCs in the Atlantic region in the 1870s ( Sheets 1990 ). For instance, Neumann and Pelissier (1981) analyzed Atlantic tropical cyclone forecast errors in track and intensity, separately. Liu and Xie (2012) used errors in track, intensity, and size to

Full access
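Track error, one of the TC evaluation parameters named in the excerpt above, is conventionally the great-circle distance between forecast and observed storm centers. A minimal sketch using the haversine formula, with illustrative coordinates:

```python
import math

def track_error_km(lat_f, lon_f, lat_o, lon_o):
    """Great-circle distance (km) between forecast and observed TC centers,
    computed with the haversine formula. Positions in decimal degrees."""
    R = 6371.0  # mean Earth radius, km
    phi_f, phi_o = math.radians(lat_f), math.radians(lat_o)
    dphi = math.radians(lat_o - lat_f)
    dlam = math.radians(lon_o - lon_f)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi_f) * math.cos(phi_o) * math.sin(dlam / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# Example: a forecast center displaced one degree of latitude from the
# observed center, roughly 111 km
print(round(track_error_km(25.0, -75.0, 26.0, -75.0)))
```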
Steven A. Lack, George L. Limpert, and Neil I. Fox

. However, these scores alone can be misleading, especially for high-resolution models. A finescale convective product may show skill as part of a decision process that is not captured by these standard statistics; these common metrics may even show zero skill when calculated. Additional metrics are then needed to provide insight into the evaluation process. Object-oriented methods, also referred to as feature-based approaches, can be used as a supplement to common metrics in an evaluation

Full access
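The zero-skill behavior of standard gridpoint metrics described above can be demonstrated with the critical success index (CSI): a displaced but otherwise realistic convective feature scores no hits at all. The grid, threshold, and displacement below are illustrative assumptions, not values from the study:

```python
import numpy as np

def csi(forecast, observed, threshold=1.0):
    """Critical success index from gridpoint-wise yes/no events:
    hits / (hits + misses + false alarms)."""
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    denom = hits + misses + false_alarms
    return hits / denom if denom else np.nan

obs = np.zeros((10, 10))
obs[2:4, 2:4] = 5.0    # observed convective cell
fcst = np.zeros((10, 10))
fcst[6:8, 6:8] = 5.0   # same cell, displaced by a few grid points

# The forecast captures the feature's size and intensity, yet every
# gridpoint is either a miss or a false alarm, so CSI is zero.
print(csi(fcst, obs))  # 0.0
```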
Christoph Schlager, Gottfried Kirchengast, and Jürgen Fuchsberger

. 2005 ). The operational requirement for our application is that the wind fields are automatically generated from the observational data of the WegenerNet in near–real time and stored in the WegenerNet archives with a spatial resolution of 100 m × 100 m and a time resolution of 30 min. Furthermore, the model performance of these produced wind fields has to be evaluated for periods with representative weather conditions. The remainder of the paper is structured as follows. Section 2 provides a

Full access
Temple R. Lee, Michael Buban, David D. Turner, Tilden P. Meyers, and C. Bruce Baker

simulate near-surface exchange processes requires careful and thorough evaluation of the model output to identify and correct potential model biases. We focused our investigation on the southeast United States, where the only known evaluation of the HRRR’s performance is a recent study by Wagner et al. (2019) that used observations from the Atmospheric Emitted Radiance Interferometer (AERI; Knuteson et al. 2004 ; Turner and Blumberg 2019 ) installed on the Collaborative Lower Atmosphere Mobile

Full access
Huaqing Cai and Robert E. Dumais Jr.

compiles and compares single-object attribute statistics from both forecasts and observations without matching each individual forecast object with its corresponding observed object; the former needs to match the forecast objects with observed objects first and then calculate performance statistics, such as the percentage of forecast objects that matched observed objects. Davis et al. (2006a , b) showed that both performance metrics are useful for evaluating NWP model storm forecast performance

Full access
Russell L. Elsberry, Tara D. B. Lambert, and Mark A. Boothe

demonstrated because DSHIPS predicted 12 of the 18 rapid decay episodes whereas SHIPS predicted only four episodes ( Table 4 ). The dynamical models GFDI and GFNI predicted 8 and 10 of the 18 rapid decay episodes. The NHC had the best performance in predicting 13 of the 18 rapid decay episodes within ±12 h of the actual time. The surprising result from this evaluation is the large number of false alarms by SHIPS, DSHIPS, GFDI, and GFNI with 17, 28, 24, and 23 episodes, respectively. Using this guidance

Full access
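The counts quoted in the excerpt translate directly into standard verification ratios. A sketch, assuming probability of detection is hits over observed episodes and false-alarm ratio is false alarms over total predicted episodes (our convention for illustration, not necessarily the paper's tabulation; NHC is omitted because its false-alarm count is not quoted):

```python
# Counts quoted in the excerpt: rapid-decay episodes predicted (hits, out
# of 18 observed) and false-alarm episodes, for each guidance source.
guidance = {
    "SHIPS":  {"hits": 4,  "false_alarms": 17},
    "DSHIPS": {"hits": 12, "false_alarms": 28},
    "GFDI":   {"hits": 8,  "false_alarms": 24},
    "GFNI":   {"hits": 10, "false_alarms": 23},
}
EVENTS = 18  # observed rapid-decay episodes

results = {}
for name, c in guidance.items():
    pod = c["hits"] / EVENTS                                   # probability of detection
    far = c["false_alarms"] / (c["hits"] + c["false_alarms"])  # false-alarm ratio
    results[name] = (pod, far)
    print(f"{name}: POD={pod:.2f} FAR={far:.2f}")
```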
Ming Liu, Jason E. Nachamkin, and Douglas L. Westphal

measurements at the ARM SGP site and against the standard COAMPS Harshvardhan radiation parameterization. Fu–Liou outperforms the standard model for both shortwave and longwave radiative fluxes, exhibiting bias and RMS scores that are 40%–50% of those of the standard model. The Fu–Liou scheme also demonstrates stable performance over a 5-day forecast and significantly surpasses the standard model in the verification. The new model is then evaluated in a cloudy case consisting of a 15-day period at the ARM SGP site. Nine

Full access
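Bias and RMS scores of the kind compared in the excerpt above can be sketched as follows. The flux values are hypothetical and do not reproduce the study's 40%–50% result exactly:

```python
import numpy as np

def bias_and_rmse(model, obs):
    """Mean bias and root-mean-square error of modeled vs. observed fluxes."""
    err = np.asarray(model, float) - np.asarray(obs, float)
    return err.mean(), np.sqrt((err ** 2).mean())

# Hypothetical downwelling shortwave fluxes (W m^-2) from two radiation
# schemes, verified against the same observations
obs      = np.array([480.0, 510.0, 495.0, 520.0])
standard = np.array([510.0, 560.0, 530.0, 555.0])  # standard scheme
fu_liou  = np.array([492.0, 525.0, 505.0, 532.0])  # Fu-Liou-like scheme

b_std, r_std = bias_and_rmse(standard, obs)
b_fu,  r_fu  = bias_and_rmse(fu_liou, obs)
print(f"standard: bias={b_std:.1f} RMSE={r_std:.1f}")
print(f"Fu-Liou:  bias={b_fu:.1f} RMSE={r_fu:.1f}")
```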