Bryan G. White, Jan Paegle, W. James Steenburgh, John D. Horel, Robert T. Swanson, Louis K. Cook, Daryl J. Onton, and John G. Miles

Abstract

The short-term forecast accuracy of six different forecast models over the western United States is described for January, February, and March 1996. Four of the models are operational products from the National Centers for Environmental Prediction (NCEP) and the other two are research models with initial and boundary conditions obtained from NCEP models. Model resolutions vary from global wavenumber 126 (∼100 km equivalent horizontal resolution) for the Medium Range Forecast model (MRF) to about 30 km for the Meso Eta, Utah Local Area Model (Utah LAM), and Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model Version 5 (MM5). Forecast errors are described in terms of bias error and mean square error (mse) as computed relative to (i) gridded objective analyses and (ii) rawinsonde observations. Bias error and mse fields computed relative to gridded analyses show considerable variation from model to model, with the largest errors produced by the most highly resolved models. Using this approach, it is impossible to separate real forecast errors from possibly correct, highly detailed forecast information because the forecast grids are of higher resolution than the observations used to generate the gridded analyses. Bias error and mse calculated relative to rawinsonde observations suggest that the Meso Eta, which is the most highly resolved and best developed operational model, produces the most accurate forecasts at 12 and 24 h, while the MM5 produces superior forecasts relative to the Utah LAM. At 36 h, the MRF appears to produce superior mass and wind field forecasts. Nevertheless, a preliminary validation of precipitation performance for fall 1997 suggests the more highly resolved models exhibit superior skill in predicting larger precipitation events. 
Although such results are valid when skill is averaged over many simulations, forecast errors at individual rawinsonde locations, averaged over subsets of the total forecast period, suggest greater variability in forecast accuracy. Time series of local forecast errors show large variability over time, with generally similar maximum error magnitudes among the different models.
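The abstract's two verification metrics, bias error and mean square error (mse) of forecasts relative to observations, can be sketched as follows. This is a minimal illustration, not the authors' verification code; the station values below are invented for the example, not data from the study.

```python
def bias_error(forecast, observed):
    """Mean signed difference: positive means the forecast is too high on average."""
    return sum(f - o for f, o in zip(forecast, observed)) / len(forecast)

def mse(forecast, observed):
    """Mean square error: average squared forecast-minus-observation difference."""
    return sum((f - o) ** 2 for f, o in zip(forecast, observed)) / len(forecast)

# Hypothetical 500-hPa height forecasts (m) vs. rawinsonde observations
# at a single station over four verification times.
fcst = [5520.0, 5535.0, 5510.0, 5500.0]
obs = [5515.0, 5530.0, 5520.0, 5495.0]

print(bias_error(fcst, obs))  # 1.25  (slight high bias)
print(mse(fcst, obs))         # 43.75
```

Because mse squares each difference, it penalizes occasional large misses more heavily than bias error does, which is why the two metrics can rank models differently.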
