Search Results

You are looking at 1–2 of 2 items for:

  • Author or Editor: Jan Paegle
  • Weather and Forecasting
Jennifer C. Roman, Gonzalo Miguez-Macho, Lee A. Byerle, and Jan Paegle

Abstract

Current limitations of atmospheric predictive skill are investigated through comparison of correlation and error statistics of operational and research global models for two winter seasons. In 1993, bias-corrected models produced anomaly correlations of 0.6 after 6.5–7 days, with relatively little forecast skill beyond that point. In 2003, the forecast skill of a more developed, higher-resolution operational model was extended by 36 h, while the skill of the unchanged, low-resolution research model was extended by 6 h. This implies either more predictable patterns in 2003 or improvements to the model and initial state made since 1993. The relative importance of improved model resolution/physics and improved initial state to the lengthening of forecast skill is diagnosed by evaluating the rms evolution of analyzed and forecast differences of 500-mb height and meridional wind. Results indicate that forecast sensitivity to the initial data is less important than the sensitivity to the model used. However, the sensitivity to the model used (rms of model forecast differences) is smaller than the rms forecast error of either model, indicating that the model forecasts are more similar to each other than to reality. In 1993, anomaly correlations of the model forecasts to each other reach 0.6 by roughly 8 days; that is, the models predict each other's behavior 1.5 days longer than they predict that of the real atmosphere. Correlations of the model errors to each other quantify this similarity, with correlations exceeding the asymptotic value of 0.5 through the 14-day forecasts. Investigations of initial state error evolution by wavenumber show that long waves (wavenumbers 0–15) account for 50% more of the total uncertainty growth in 14-day research model integrations than do short waves (wavenumbers 16–42). Results indicate that model sophistication can extend current predictive skill, but the similarity of the error patterns suggests a deficiency common to both models, perhaps in the uncertainty of the initial state.
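
As a rough sketch of the two verification measures this abstract relies on, the snippet below computes a centered anomaly correlation and an rms difference for gridded 500-mb height fields. The synthetic arrays, grid size, and function names are illustrative assumptions, not taken from the study.

```python
import numpy as np

def anomaly_correlation(forecast, verification, climatology):
    """Centered anomaly correlation of a forecast field against a verifying
    analysis, both expressed as departures from climatology."""
    fa = forecast - climatology          # forecast anomaly
    va = verification - climatology      # verifying (analyzed) anomaly
    fa = fa - fa.mean()                  # center the anomalies
    va = va - va.mean()
    return np.sum(fa * va) / np.sqrt(np.sum(fa**2) * np.sum(va**2))

def rms_difference(field_a, field_b):
    """Root-mean-square difference between two gridded fields, e.g. two
    model forecasts of 500-mb height valid at the same time."""
    return np.sqrt(np.mean((field_a - field_b) ** 2))

# Hypothetical 500-mb height fields (meters) on a small lat-lon grid.
rng = np.random.default_rng(0)
climo = 5600.0 + rng.normal(0.0, 50.0, size=(73, 144))
truth = climo + rng.normal(0.0, 60.0, size=climo.shape)
fcst = truth + rng.normal(0.0, 40.0, size=climo.shape)

print("anomaly correlation:", anomaly_correlation(fcst, truth, climo))
print("rms difference (m):", rms_difference(fcst, truth))
```

In this convention, an anomaly correlation falling to 0.6 is the usual threshold below which a forecast is regarded as having little useful skill, which is how the 6.5–7-day figure above is defined.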

Bryan G. White, Jan Paegle, W. James Steenburgh, John D. Horel, Robert T. Swanson, Louis K. Cook, Daryl J. Onton, and John G. Miles

Abstract

The short-term forecast accuracy of six different forecast models over the western United States is described for January, February, and March 1996. Four of the models are operational products from the National Centers for Environmental Prediction (NCEP), and the other two are research models with initial and boundary conditions obtained from NCEP models. Model resolutions vary from global wavenumber 126 (∼100 km equivalent horizontal resolution) for the Medium Range Forecast model (MRF) to about 30 km for the Meso Eta, the Utah Local Area Model (Utah LAM), and the Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model Version 5 (MM5). Forecast errors are described in terms of bias error and mean square error (mse) computed relative to (i) gridded objective analyses and (ii) rawinsonde observations. Bias error and mse fields computed relative to gridded analyses show considerable variation from model to model, with the largest errors produced by the most highly resolved models. With this approach, however, it is impossible to separate real forecast errors from possibly correct, highly detailed forecast information, because the forecast grids are of higher resolution than the observations used to generate the gridded analyses. Bias error and mse calculated relative to rawinsonde observations suggest that the Meso Eta, which is the most highly resolved and best developed operational model, produces the most accurate forecasts at 12 and 24 h, while the MM5 produces forecasts superior to those of the Utah LAM. At 36 h, the MRF appears to produce superior mass and wind field forecasts. Nevertheless, a preliminary validation of precipitation performance for fall 1997 suggests that the more highly resolved models exhibit superior skill in predicting larger precipitation events. Although such results hold when skill is averaged over many simulations, forecast errors at individual rawinsonde locations, averaged over subsets of the total forecast period, show greater variability in forecast accuracy. Time series of local forecast errors show large variability from time to time and generally similar maximum error magnitudes among the different models.
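
The bias error and mean square error statistics described above, when computed against rawinsonde observations, reduce to simple averages over matched forecast-observation pairs. The sketch below assumes hypothetical forecast values already interpolated to a few stations; the variable names and numbers are purely illustrative.

```python
import numpy as np

def bias_and_mse(forecast_values, observed_values):
    """Bias error (mean of forecast minus observed) and mean square error
    for a set of forecast-observation pairs, e.g. model forecasts matched
    to rawinsonde reports."""
    diff = np.asarray(forecast_values) - np.asarray(observed_values)
    return diff.mean(), np.mean(diff ** 2)

# Hypothetical 500-mb temperature forecasts (K) matched to rawinsonde
# reports at a handful of stations for one verification time.
forecasts = np.array([252.1, 249.8, 255.3, 251.0, 248.7])
observations = np.array([251.4, 250.2, 254.1, 252.3, 249.0])

bias, mse = bias_and_mse(forecasts, observations)
print(f"bias error: {bias:+.2f} K, mse: {mse:.2f} K^2")
```

Verifying against point observations in this way sidesteps the resolution mismatch noted above, since no gridded analysis intervenes between the forecast and the measurement.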
