• Anthes, R. A., Y. H. Kuo, D. P. Baumhefner, R. M. Errico, and T. W. Bettge, 1985: Predictability of mesoscale motions. Advances in Geophysics, Vol. 28, Academic Press, 159–202.

  • Astling, E. G., J. Paegle, E. Miller, and C. J. O’Brien, 1985: Boundary layer control of nocturnal convection associated with a synoptic scale system. Mon. Wea. Rev.,113, 540–552.

  • Benjamin, S. G., and T. N. Carlson, 1986: Some effects of surface heating and topography on the regional storm environment. Part I: Three-dimensional simulations. Mon. Wea. Rev.,114, 307–329.

  • Betts, A. K., 1986: A new convective adjustment scheme. Part I: Observational and theoretical basis. Quart. J. Roy. Meteor. Soc.,112, 677–691.

  • ——, and M. J. Miller, 1986: A new convective adjustment scheme. Part II: Single column tests using GATE wave, BOMEX, and arctic air-mass data sets. Quart. J. Roy. Meteor. Soc.,112, 693–709.

  • Black, T. L., 1994: The new NMC Mesoscale Eta Model: Description and forecast examples. Wea. Forecasting,9, 265–278.

  • ——, D. Deaven, and G. DiMego, 1993: The step-mountain Eta coordinate model: 80 km “early” version and objective verifications. NWS Tech. Procedures Bull. 412, 31 pp. [Available from National Weather Service Office of Meteorology, 1325 East–West Highway, Silver Spring, MD 20910.].

  • Bougeault, P., 1992: Current trends and achievements of limited area modeling. Proceedings of the WMO Programme on Weather Prediction Research, PWPR Rep. Series 1, WMO/TD 479, Appendix 6, 19 pp. [Available from World Meteorological Organization, CP 2300, CH-1211, Geneva 2, Switzerland.].

  • Chen, F., Z. Janjic, and K. Mitchell, 1997: Impact of atmospheric-surface layer parameterizations in the new land-surface scheme of the NCEP mesoscale Eta numerical model. Bound.-Layer Meteor.,85, 391–421.

  • Chou, M.-D., 1992: A solar radiation model for use in climate studies. J. Atmos. Sci.,49, 762–772.

  • Cotton, W. R., G. Thompson, and P. W. Mielke Jr., 1994: Real-time mesoscale prediction on workstations. Bull. Amer. Meteor. Soc.,75, 349–362.

  • Davies, H. C., 1976: A lateral boundary formulation for multi-level prediction models. Quart. J. Roy. Meteor. Soc.,102, 405–418.

  • DiMego, G. J., K. E. Mitchell, R. A. Petersen, J. E. Hoke, J. P. Gerrity, J. J. Tuccillo, R. L. Wobus, and H.-M. H. Juang, 1992: Changes to NMC’s regional analysis and forecast system. Wea. Forecasting,7, 185–198.

  • Dudhia, J., 1989: Numerical study of convection observed during the Winter Monsoon Experiment using a mesoscale two-dimensional model. J. Atmos. Sci.,46, 3077–3107.

  • ——, 1993: A nonhydrostatic version of the Penn State–NCAR Mesoscale Model: Validation tests and simulation of an Atlantic cyclone and cold front. Mon. Wea. Rev.,121, 1493–1513.

  • Fels, S. B., and M. D. Schwarzkopf, 1975: The simplified exchange approximation: A new method for radiative transfer calculations. J. Atmos. Sci.,32, 1475–1488.

  • Grell, G. A., 1993: Prognostic evaluation of assumptions used by cumulus parameterizations. Mon. Wea. Rev.,121, 764–787.

  • ——, J. Dudhia, and D. R. Stauffer, 1994: A description of the fifth-generation Penn State/NCAR Mesoscale Model (MM5). NCAR Tech. Note NCAR/TN 398+STR, 138 pp. [Available from National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307.].

  • Gyakum, J. R., and Coauthors, 1996: A regional model intercomparison using a case of explosive oceanic cyclogenesis. Wea. Forecasting,11, 521–543.

  • Harshvardhan, R. Davies, D. A. Randall, and T. G. Corsetti, 1987: A fast radiation parameterization for atmospheric models. J. Geophys. Res.,92, 1009–1016.

  • Hoke, J. E., N. A. Phillips, G. J. DiMego, J. J. Tuccillo, and J. G. Sela, 1989: The Regional Analysis and Forecast System of the National Meteorological Center. Wea. Forecasting,4, 323–334.

  • Horel, J. D., and C. V. Gibson, 1994: Analysis and simulation of a winter storm over Utah. Wea. Forecasting,9, 479–494.

  • Hsie, E.-Y., R. A. Anthes, and D. Keyser, 1984: Numerical simulation of frontogenesis in a moist atmosphere. J. Atmos. Sci.,41, 2581–2594.

  • Janjić, Z. I., 1994: The step-mountain Eta coordinate model: Further developments of the convection, viscous sublayer, and turbulence closure schemes. Mon. Wea. Rev.,122, 927–945.

  • Kalnay, E., M. Kanamitsu, and W. E. Baker, 1990: Global numerical weather prediction at the National Meteorological Center. Bull. Amer. Meteor. Soc.,71, 1410–1428.

  • Kanamitsu, M., 1989: Description of the NMC Global Data Assimilation and Forecast System. Wea. Forecasting,4, 335–342.

  • ——, and Coauthors, 1991: Recent changes implemented into the global forecast system at NMC. Wea. Forecasting,6, 425–435.

  • Klemp, J. B., and D. R. Durran, 1983: An upper boundary condition permitting internal gravity wave radiation in numerical mesoscale models. Mon. Wea. Rev.,111, 430–444.

  • Kuo, H. L., 1965: On the formation and intensification of tropical cyclones through latent heat release by cumulus convection. J. Atmos. Sci.,22, 40–63.

  • Lacis, A. A., and J. E. Hansen, 1974: A parameterization of the absorption of solar radiation in the earth’s atmosphere. J. Atmos. Sci.,31, 118–133.

  • Lorenz, E. N., 1969: The predictability of a flow which possesses many scales of motion. Tellus,21, 289–307.

  • Mellor, G. L., and T. Yamada, 1982: Development of a turbulence closure model for geophysical fluid problems. Rev. Geophys. Space Phys.,20, 851–875.

  • Mesinger, F., 1996: Improvements in quantitative precipitation forecasts with the Eta regional model at the National Centers for Environmental Prediction: The 48-km upgrade. Bull. Amer. Meteor. Soc.,77, 2637–2649.

  • ——, 1998: Comparison of the quantitative precipitation forecasts by the 48- and by the 29-km ETA Model: An update and possible implications. Preprints, 12th Conf. on Numerical Weather Prediction, Phoenix, AZ, Amer. Meteor. Soc., J22–J23.

  • Paegle, J., and D. W. McLawhorn, 1983: Numerical modeling of diurnal convergence oscillations above sloping terrain. Mon. Wea. Rev.,111, 67–85.

  • ——, and T. Vukicevic, 1987: On the predictability of low-level flow during ALPEX. Meteor. Atmos. Phys.,36, 45–60.

  • ——, R. A. Pielke, G. A. Dalu, W. Miller, J. R. Garratt, T. Vukicevic, G. Berri, and M. Nicolini, 1990: Predictability of flows over complex terrain. Atmospheric Processes over Complex Terrain, William Blumen, Ed., Amer. Meteor. Soc., 285–300.

  • ——, K. C. Mo, and J. N. Paegle, 1996: Dependence of simulated precipitation on surface evaporation during the 1993 United States summer floods. Mon. Wea. Rev.,124, 345–361.

  • ——, Q. Yang, and M. Wang, 1997: Predictability in limited area and global models. Meteor. Atmos. Phys.,63, 53–69.

  • Pan, H. L., and W. S. Wu, 1995: Implementing a mass flux convection parameterization package for the NMC medium-range forecast model. NWS Office Note 409, 43 pp. [Available from National Centers for Environmental Prediction, NOAA Science Center, Room 101, 5200 Auth Rd., Camp Springs, MD 20746.].

  • Petersen, R. A., G. J. DiMego, J. E. Hoke, K. E. Mitchell, J. P. Gerrity, R. L. Wobus, H. H. Juang, and M. J. Pecnick, 1991: Changes to NMC’s regional analysis and forecast system. Wea. Forecasting,6, 133–141.

  • Phillips, N. A., 1981: A simpler way to initiate condensation at relative humidities below 100 percent. NMC Office Note 242, 14 pp. [Available from National Meteorological Center, National Weather Service, Camp Springs, MD 20746.].

  • Roads, J. O., and T. N. Maisel, 1991: Evaluation of the National Meteorological Center’s Medium-Range Forecast Model precipitation forecasts. Wea. Forecasting,6, 123–132.

  • ——, ——, and J. Alpert, 1991: Further evaluation of the National Meteorological Center’s Medium-Range Forecast Model precipitation forecasts. Wea. Forecasting,6, 483–497.

  • Rogers, E., T. Black, D. Deaven, G. DiMego, Q. Zhao, Y. Lin, N. W. Junker, and M. Baldwin, 1995: Changes to the NMC operational Eta model analysis/forecast system. NWS Tech. Procedures Bull. 423, 51 pp. [Available from National Weather Service, Office of Meteorology, 1325 East–West Highway, Silver Spring, MD 20910.].

  • ——, ——, ——, ——, ——, M. Baldwin, and N. W. Junker, 1996: Changes to the operational “early” Eta analysis/forecast system at the National Centers for Environmental Prediction. Wea. Forecasting,11, 391–413.

  • Schwarzkopf, M. D., and S. B. Fels, 1991: The simplified exchange method revisited: An accurate, rapid method for computation of infrared cooling rates and fluxes. J. Geophys. Res.,96, 9075–9096.

  • Smith, R., and Coauthors, 1997: Local and remote effect of mountains on weather: Research needs and opportunities. Bull. Amer. Meteor. Soc.,78, 877–892.

  • Swanson, R. T., 1995: Evaluation of the mesoscale Eta Model over the western United States. M.S. thesis, Dept. of Meteorology, University of Utah, 113 pp. [Available from University of Utah, Salt Lake City, UT 84112.].

  • Troen, I., and L. Mahrt, 1986: A simple model of the atmospheric boundary layer: Sensitivity to surface evaporation. Bound.-Layer Meteor.,37, 129–148.

  • Waldron, K. M., 1994: Sensitivity of local model prediction to large scale forcing. Ph.D. dissertation, University of Utah, 150 pp. [Available from University of Utah, Salt Lake City, UT 84112.].

  • ——, J. Paegle, and J. D. Horel, 1996: Sensitivity of a spectrally filtered and nudged limited-area model to outer model options. Mon. Wea. Rev.,124, 529–547.

  • Warner, T. T., and N. L. Seaman, 1990: A real-time mesoscale numerical weather-prediction system used for research, teaching, and public service at The Pennsylvania State University. Bull. Amer. Meteor. Soc.,71, 792–805.

  • ——, R. A. Peterson, and R. E. Treadon, 1997: A tutorial on lateral boundary conditions as a basic and potentially serious limitation to regional numerical weather prediction. Bull. Amer. Meteor. Soc.,78, 2599–2617.

  • Williamson, D. L., J. T. Kiehl, V. Ramanathan, R. E. Dickinson, and J. J. Hack, 1987: Description of NCAR Community Climate Model (CCM1). NCAR Tech. Note NCAR/TN-285+STR, 112 pp. [Available from National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307.].

  • WMO, 1995: International Workshop on Limited-Area and Variable Resolution Models. PWPR Rep. Series 7, WMO/TD 699, 386 pp. [Available from World Meteorological Organization, CP 2300, CH-1211, Geneva 2, Switzerland.].

  • Zeng, X., and R. A. Pielke, 1993: Error-growth dynamics and predictability of surface thermally induced atmospheric flow. J. Atmos. Sci.,50, 2817–2844.

  • Zhang, D., and R. A. Anthes, 1982: A high-resolution model of the planetary boundary layer—sensitivity tests and comparisons with SESAME-79 data. J. Appl. Meteor.,21, 1594–1609.

  • Zhao, Q., and F. H. Carr, 1997: A prognostic cloud scheme for operational NWP models. Mon. Wea. Rev.,125, 1931–1953.

  • ——, T. L. Black, and M. E. Baldwin, 1997: Implementation of the cloud prediction scheme in the Eta Model at NCEP. Wea. Forecasting,12, 697–711.

  • Fig. 1. Utah LAM topography. Contours at 10, 100, and 250 m, then increments of 250 m to 3250 m. Gray shading contrasts at 250, 1000, 2000, and 3000 m.

  • Fig. 2. MM5 topography. Contours at 10, 100, and 250 m, then increments of 250 m to 3000 m. Gray shading contrasts at 250, 1000, 2000, and 3000 m.

  • Fig. 3. Gridpoint locations for (a) Eta and NGM, (b) Meso Eta, and (c) MRF models.

  • Fig. 4. Rawinsonde locations used for forecast validation.

  • Fig. 5. The 700-hPa temperature bias error analysis at 24 h [every 0.2°C; solid (dashed) lines denote positive (negative) values]: (a) Eta, (b) MRF, (c) MM5, (d) NGM, (e) Meso Eta, and (f) Utah LAM.

  • Fig. 6. The 700-hPa temperature mse analysis at 24 h (every 0.2°C): (a) Eta, (b) MRF, (c) MM5, (d) NGM, (e) Meso Eta, and (f) Utah LAM.

  • Fig. 7. Model mse’s at 0, 12, 24, and 36 h: (a) 700-hPa temperature, (b) 500-hPa geopotential height, (c) 700-hPa relative humidity, and (d) 300-hPa vector wind.

  • Fig. 8. Precipitation mse analysis for the 24-h period ending at the 36-h forecast during Oct, Nov, and Dec 1997. Models verified are the MRF, Meso Eta, and MM5.

  • Fig. 9. Vertical mse profiles at Salt Lake City for the Eta (thin solid), NGM (thin short-dashed), MRF (dotted), Meso Eta (long-dashed), MM5 (thick dashed), and Utah LAM (thick solid): (a) 24-h temperature, (b) 36-h temperature, (c) 24-h geopotential height, (d) 36-h geopotential height, (e) 24-h relative humidity, (f) 36-h relative humidity, (g) 24-h zonal wind, and (h) 36-h zonal wind.

  • Fig. 10. The 700-hPa temperature error time series for Feb 1996 at Salt Lake City, Utah: (a) Eta, (b) NGM, (c) MRF, (d) Meso Eta, (e) MM5, and (f) Utah LAM. Solid, short-dashed, long-dashed, and dotted lines represent 0-, 12-, 24-, and 36-h forecasts, respectively. Times for which a forecast was validated are indicated with diamond, plus, cross, and star symbols.

  • Fig. 11. The 500-hPa geopotential height error time series for Feb 1996 at Salt Lake City, Utah: (a) Eta, (b) NGM, (c) MRF, (d) Meso Eta, (e) MM5, and (f) Utah LAM. Line and symbol conventions as in Fig. 10.

  • Fig. 12. The 700-hPa relative humidity error time series for Feb 1996 at Salt Lake City, Utah: (a) Eta, (b) NGM, (c) MRF, (d) Meso Eta, (e) MM5, and (f) Utah LAM. Line and symbol conventions as in Fig. 10.

  • Fig. 13. The 300-hPa zonal wind error time series for Feb 1996 at Salt Lake City, Utah: (a) Eta, (b) NGM, (c) MRF, (d) Meso Eta, (e) MM5, and (f) Utah LAM. Line and symbol conventions as in Fig. 10.


Short-Term Forecast Validation of Six Models

  • NOAA/Cooperative Institute for Regional Prediction, and Department of Meteorology, University of Utah, Salt Lake City, Utah

Abstract

The short-term forecast accuracy of six different forecast models over the western United States is described for January, February, and March 1996. Four of the models are operational products from the National Centers for Environmental Prediction (NCEP) and the other two are research models with initial and boundary conditions obtained from NCEP models. Model resolutions vary from global wavenumber 126 (∼100 km equivalent horizontal resolution) for the Medium Range Forecast model (MRF) to about 30 km for the Meso Eta, Utah Local Area Model (Utah LAM), and Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model Version 5 (MM5). Forecast errors are described in terms of bias error and mean square error (mse) as computed relative to (i) gridded objective analyses and (ii) rawinsonde observations. Bias error and mse fields computed relative to gridded analyses show considerable variation from model to model, with the largest errors produced by the most highly resolved models. Using this approach, it is impossible to separate real forecast errors from possibly correct, highly detailed forecast information because the forecast grids are of higher resolution than the observations used to generate the gridded analyses. Bias error and mse calculated relative to rawinsonde observations suggest that the Meso Eta, which is the most highly resolved and best developed operational model, produces the most accurate forecasts at 12 and 24 h, while the MM5 produces superior forecasts relative to the Utah LAM. At 36 h, the MRF appears to produce superior mass and wind field forecasts. Nevertheless, a preliminary validation of precipitation performance for fall 1997 suggests the more highly resolved models exhibit superior skill in predicting larger precipitation events. 
Although such results are valid when skill is averaged over many simulations, forecast errors at individual rawinsonde locations, averaged over subsets of the total forecast period, suggest more variability in forecast accuracy. Time series of local forecast errors show large variability from time to time and generally similar maximum error magnitudes among the different models.

Corresponding author address: Dr. Jan Paegle, Meteorology Department, University of Utah, 135 S 1460 E RM 819, Salt Lake City, UT 84112-0110.

Email: jpaegle@icicle.met.utah.edu


1. Introduction

Real-time numerical weather prediction of mesoscale features has shown rapid progress in recent years. In 1992, the highest horizontal resolution operational models were executed by the French Weather Service (35 km), the U.K. Meteorological Office (15 km), the Japanese Meteorological Agency (40 km), and the German Weather Service (50 km) (Bougeault 1992). By 1995, the German Weather Service had implemented models with 14-km resolution, the Japanese Meteorological Agency had an operational model with a maximum resolution of 20 km, and a model run by a consortium of northern European countries featured 4-km resolution over small subdomains such as Denmark (WMO 1995; see also Table I of Paegle et al. 1997).

In the United States, the highest resolution operational model run by the National Centers for Environmental Prediction (NCEP) was the mesoscale Eta Model (Meso Eta), which had a horizontal resolution of approximately 29 km and covered an area encompassing most of North America, the eastern Pacific Ocean, and the western Atlantic Ocean (Black 1994; Mesinger 1998).

Although the need to cover a national domain has limited the resolution of the Meso Eta, this restriction has recently been partially mitigated by research centers, including universities and government laboratories, that have developed real-time mesoscale modeling systems covering smaller regional domains. The first such system was developed at The Pennsylvania State University using an early version of the Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model Version 4 over the eastern United States (Warner and Seaman 1990). This was followed by real-time mesoscale prediction efforts at Colorado State University with the Regional Atmospheric Modeling System (Cotton et al. 1994) and at the University of Utah with the Utah Limited Area Model (Utah LAM; Horel and Gibson 1994). Cotton et al. (1994) and Horel and Gibson (1994) document forecast experiences gained over the western United States using these models with local grid resolutions of approximately 30 km. Such regional modeling systems were made possible by the emergence of powerful, relatively inexpensive workstations and the availability of real-time NCEP gridded analysis and forecast products through the Internet.

Similar real-time forecasting efforts have since spread to other universities and government labs including the National Oceanic and Atmospheric Administration’s Forecast Systems Laboratory; the National Center for Atmospheric Research; the University of Washington; the University of Wisconsin; the University of California, Davis; and North Carolina State University. The initial and boundary conditions for such forecasts generally come from NCEP numerical guidance, communicated through the Internet. In some cases, results have been sent regularly to local forecast offices for evaluation and developmental feedback (Horel and Gibson 1994).

The central goal of this study is to evaluate and compare the performance of several experimental and operational numerical modeling systems. Such a forecast validation is complicated by problems related to the verification of events with point observations. For example, a mesoscale model may predict a particular feature of importance, but slight temporal or spatial phase errors could produce large verification errors when classical verification statistics are applied. The subjective experience of many forecasters is that numerical products from recently emerging mesoscale model guidance provide valuable insight but still lack the specificity to justify pointwise warnings. We begin to address these issues through a comparative analysis of the verification statistics of two different research models, the Utah LAM and the Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model Version 5 (MM5), against verification statistics from all available operational models from NCEP.

There are many examples of model performance comparisons for individual case studies (e.g., Gyakum et al. 1996) and some documentation of single model performance over an extended period (e.g., Roads and Maisel 1991; Roads et al. 1991; Swanson 1995; Mesinger 1996). There are, however, no recently documented comparative studies of the performance of several regional mesoscale models over a large number of real-data cases. One reason for this is that most operational or research forecasting centers utilize a single model and analysis scheme and do not routinely compare their regional mesoscale forecasts with those produced at other centers because of differences in the domains of responsibility. In cases where an operational center executes several models (e.g., NCEP), the tendency has been to modernize one (e.g., the Eta Model) while older models (e.g., the Nested Grid Model) remain relatively frozen. In such cases, available comparisons generally emphasize impacts of given model enhancements in individual case studies.

This study documents and examines verification statistics for operational NCEP models and two research models run over the western United States from January through March of 1996. Precipitation statistics are computed separately for October through December of 1997 using a subset of the models. The first 2 weeks of January were characterized by relatively inactive weather, followed by 3 weeks of major snowstorms, then 10 days of inactive weather, followed by heavy snowstorms during the last 10 days of February. In total, the January and February snowstorms produced about 6 ft of snow at Salt Lake City, Utah (SLC). March was characterized by intermittent storms. Subsequent sections discuss forecasts produced by the Medium Range Forecast model (MRF, wavenumber 126 resolution), the Nested Grid Model (NGM, 80-km grid), the Eta Model (48-km grid), the Meso Eta Model (29-km grid), a version of the MM5 (27-km grid), and a version of the Utah LAM (30-km grid). The underlying hypothesis is that a sufficiently large number of forecasts averaged over a large enough sample of cases ought to show clear benefits for the most highly resolved and developed models, particularly in regions of complex terrain where topographic detail and other influences may provide a strongly deterministic mesoscale forecast signal.

This hypothesis may be invalid if all models suffer from some common defect (e.g., lack of a sufficiently accurate initial or ambient state), or if mesoscale prediction is not possible in principle. The latter possibility is suggested at sufficiently long lead times by classic predictability theory, which shows that the predictability of transient phenomena may not be much longer than their typical period (Lorenz 1969). In the case of mesoscale structures, whose periods are typically shorter than 1 day, it may consequently be unreasonable to expect substantial forecast improvement by a mesoscale model relative to a large-scale model much beyond about 1 day unless some fixed mesoscale forcing (e.g., topography) produces a strongly deterministic mesoscale signal. Examples of enhanced mesoscale predictability due to topography and other surface forcing are provided by Astling et al. (1985), Anthes et al. (1985), Paegle and Vukicevic (1987), Paegle et al. (1990), and Zeng and Pielke (1993). Error growth arising from lateral boundary condition specification may also reduce the skill of limited area mesoscale models (e.g., Warner et al. 1997).

The present approach is to validate a series of real-time predictions using classic validation methods. Such validation may be done by comparing gridded, objectively analyzed forecasts with the corresponding analyses. Results using this approach suggest the unfortunate conclusion that either the most highly resolved models exhibit the least skill or the analyses for the most highly resolved models are poor. Verification using point observations (i.e., rawinsondes), however, conforms with expectations that the most highly resolved and developed models produce the best forecasts at sufficiently short lead times when enough cases are considered.
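The bias error and mse statistics used for both verification approaches can be sketched in a few lines. This is a minimal illustration of the definitions only, not the verification code used in the study, and the sample 700-hPa temperatures are invented:

```python
import numpy as np

def bias_error(forecast, observed):
    """Mean (forecast - observed) difference; positive means overforecast."""
    f = np.asarray(forecast, dtype=float)
    o = np.asarray(observed, dtype=float)
    return float(np.mean(f - o))

def mse(forecast, observed):
    """Mean square error of forecasts against verifying values."""
    f = np.asarray(forecast, dtype=float)
    o = np.asarray(observed, dtype=float)
    return float(np.mean((f - o) ** 2))

# Invented 700-hPa temperatures (deg C) at four rawinsonde sites.
fcst = [2.1, -3.4, 0.5, 1.0]
obs = [1.8, -2.9, 0.9, 0.6]
b = mse(fcst, obs)          # positive whenever any error exists
a = bias_error(fcst, obs)   # near zero: errors of both signs cancel
```

When verifying against gridded analyses instead of stations, the same formulas are applied gridpoint by gridpoint, which is one reason unresolved observational detail can inflate the apparent error of high-resolution forecasts.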

The paper is organized as follows. The next section describes the models used for the study. Section 3 describes the datasets and validation procedures. Section 4 presents validation of gridded forecast fields against gridded, objectively analyzed initial states. Section 5 presents validation statistics using rawinsonde observations. Section 6 describes the variability of forecast accuracy as a function of averaging period, location, and forecast time. A summary and conclusions are given in the final section. With the exception of Fig. 8, which pertains to fall 1997, all other diagrams and tables refer to the 1996 period.

2. Model descriptions

This section reviews the four operational NCEP models and two regional research models used in this paper. Basic characteristics of the NCEP operational models are also described by Mesinger (1996).

a. NCEP operational models

The Eta Model became operational in June 1993 and is described by Black et al. (1993). Major modifications were added in September 1994 (Rogers et al. 1995) and again in October 1995 (Rogers et al. 1996). During the period examined by the present study, the Eta featured a 48-km horizontal resolution with 38 vertical layers. Major Eta Model physical parameterizations include a modified Betts–Miller cumulus parameterization (Betts 1986; Betts and Miller 1986; Janjić 1994), a gridscale cloud water/ice prediction scheme (Zhao and Carr 1997), the Mellor–Yamada (1982) level-2.5 boundary layer scheme, the Geophysical Fluid Dynamics Laboratory (GFDL) radiation scheme with predicted cloud interaction (Lacis and Hansen 1974; Fels and Schwarzkopf 1975), a predictive cloud scheme (Zhao et al. 1997), and a four-layer soil land surface package (Chen et al. 1997). Boundary conditions for the Eta Model are provided by the NCEP MRF.

The Meso Eta Model became operational in 1995 and is described by Black (1994). During winter 1996, the Meso Eta featured parameterizations similar to those of the Eta Model, a 29-km grid spacing, and 50 vertical layers. Boundary conditions for the Meso Eta are also provided by the MRF.

The NGM has been frozen since August 1991 and is described by Hoke et al. (1989), Petersen et al. (1991), and DiMego et al. (1992). The NGM is a sigma coordinate model with an approximate horizontal resolution of 83 km on its innermost grid and 16 vertical layers. Major physical parameterizations include a modified Kuo (1965) moist convection scheme; a so-called dump-bucket gridscale precipitation scheme, in which relative humidity values exceeding 95% are reduced, with the excess moisture falling as precipitation into lower layers (Phillips 1981); and a radiation parameterization based on Harshvardhan et al. (1987). A description of the parameterization of boundary layer processes is presented by Hoke et al. (1989). Lateral boundary conditions for the NGM’s outermost grid are required only along the equator and are specified using mathematical assumptions concerning the behavior of the atmosphere in the Tropics (Hoke et al. 1989).
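The dump-bucket idea can be illustrated schematically. This sketch is not the Phillips (1981) scheme itself, which handles the moisture budget and evaporation in lower layers more carefully; treating excess relative humidity directly as transferable moisture is a deliberate simplification for illustration:

```python
def dump_bucket(rh_top_to_bottom, threshold=0.95):
    """Schematic dump-bucket adjustment: cap each layer's relative humidity
    at the threshold, hand the excess to the layer below, and let whatever
    excess survives the bottom layer fall out as a precipitation proxy.
    Returns (adjusted RH profile, precipitation proxy)."""
    adjusted = []
    excess = 0.0
    for rh in rh_top_to_bottom:
        rh = rh + excess                  # moisture handed down from above
        excess = max(rh - threshold, 0.0) # new excess for the layer below
        adjusted.append(min(rh, threshold))
    return adjusted, excess

# Supersaturated mid-layers moisten the layer below before precipitating.
adj, precip = dump_bucket([0.40, 0.97, 0.99, 0.90])
```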

The MRF global spectral model, described by Kanamitsu (1989), Kalnay et al. (1990), and Kanamitsu et al. (1991), was configured with a triangular truncation of T126 and 28 vertical sigma layers during the period examined. This is the only model examined in this paper that does not require lateral boundary conditions. The MRF includes the simplified Arakawa–Schubert cumulus parameterization (Pan and Wu 1995), a dump-bucket stable precipitation scheme, and the Troen and Mahrt (1986) planetary boundary layer parameterization. Longwave radiation is based on the GFDL scheme, with modifications for thick clouds (Schwarzkopf and Fels 1991); shortwave radiation is based on the work of Chou (1992), with updated surface albedo formulations; and clouds are calculated diagnostically from humidity and the convective precipitation rate.

b. Utah LAM

The Utah LAM was originally developed as a two-dimensional model (Paegle and McLawhorn 1983) and is currently a three-dimensional, hydrostatic, limited area model. Recent improvements to the Utah LAM are described by Waldron (1994), Paegle et al. (1996), and Waldron et al. (1996).

The Utah LAM is founded upon a hydrostatic, anelastic system using a terrain-following, height-based vertical coordinate. Model equations are solved on an unstaggered horizontal grid by finite difference approximations, with a Fourier filter applied at each time step to eliminate the computational modes that develop on an unstaggered grid (Waldron et al. 1996). Second-order horizontal difference approximations are used for the pressure gradient and diffusion terms, and fourth-order approximations are used for horizontal advection. The vertical discretization uses a finite element method on the terrain-following coordinate.
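The purpose of the Fourier filter can be shown with a one-dimensional sketch: an unstaggered grid supports a 2Δx computational mode that spectral truncation removes cleanly. The cutoff fraction and filter form below are assumptions for illustration, not the Utah LAM's actual filter:

```python
import numpy as np

def spectral_filter(field, keep_fraction=0.5):
    """Zero the highest wavenumbers of a periodic 1D field, removing
    grid-scale (2*dx) computational modes while leaving smooth waves intact."""
    coeffs = np.fft.rfft(field)
    cutoff = int(keep_fraction * len(coeffs))
    coeffs[cutoff:] = 0.0
    return np.fft.irfft(coeffs, n=len(field))

# A smooth wave contaminated by a 2*dx oscillation (the computational mode).
n = 64
x = np.arange(n)
smooth = np.sin(2.0 * np.pi * x / n)
noisy = smooth + 0.3 * (-1.0) ** x   # (-1)^x alternates: a pure 2*dx mode
filtered = spectral_filter(noisy)    # recovers the smooth wave
```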

The real-time version of the Utah LAM covers the western United States and features a 30-km horizontal resolution with 20 vertical levels (Fig. 1). Solar radiation is computed at every grid point each time step. Longwave radiation is calculated every hour from upward and downward radiative fluxes using emissivity methods. Cloud radiative interactions are retained, and cloud fraction is a linear function of humidity, increasing from no coverage at 80% relative humidity to full coverage at saturation. The precipitation schemes are based upon the National Center for Atmospheric Research Community Climate Model (CCM1; Williamson et al. 1987).
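The Utah LAM cloud-fraction rule described above is simple enough to sketch directly. The function name and vectorized form below are ours, not the model's; only the linear ramp from 80% relative humidity to saturation comes from the text:

```python
import numpy as np

def cloud_fraction(rh_percent):
    """Linear cloud-fraction ramp: no cover at 80% relative humidity,
    full cover at saturation (100%), as described for the Utah LAM."""
    rh = np.asarray(rh_percent, dtype=float)
    return np.clip((rh - 80.0) / 20.0, 0.0, 1.0)

# Midway between the threshold and saturation gives half cover.
print(cloud_fraction([70.0, 80.0, 90.0, 100.0]))  # [0.  0.  0.5 1. ]
```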

c. MM5

The MM5 is a nonhydrostatic, primitive equation model with a terrain-following sigma coordinate. Detailed descriptions are given by Dudhia (1993) and Grell et al. (1994). Many of the MM5’s parameters and parameterizations can be modified, which makes it a highly flexible research model.

The real-time version of the MM5 that was used for most of the study features a single domain with a 27-km horizontal resolution and 65 × 65 grid points. A higher-resolution version is discussed in section 5b. The geographic coverage and terrain representation of the modeling system are presented in Fig. 2. Twenty-seven half-sigma levels are used in the vertical. Second-order finite differences are used except for a first-order upstream scheme for the precipitation fall term. Diurnally varying longwave and shortwave radiation fluxes are calculated using atmospheric column-integrated water vapor and cloud fraction estimated from relative humidity (Benjamin and Carlson 1986). The atmospheric cooling rate depends only on temperature, with no cloud interaction or diurnal cycle. The Klemp and Durran (1983) upper boundary condition allows vertically propagating gravity waves to pass through the top boundary with minimal reflection. Precipitation processes are parameterized using the Grell (1993) cumulus parameterization and the explicit moisture scheme of Hsie et al. (1984), with improvements to allow for simple ice-phase processes below 0°C (Dudhia 1989). Other major parameterizations include a multilayer planetary boundary layer scheme following Zhang and Anthes (1982).

d. Initial and boundary conditions for the Utah LAM and MM5

Initial and boundary conditions for the Utah LAM and MM5 are provided from NCEP model analysis and forecast grids. In the horizontal plane, data are bilinearly interpolated to the Utah LAM or MM5 grid points. The Utah LAM requires the data to be spaced evenly in the vertical and converted to the Utah LAM’s vertical coordinate by means of cubic splines. This is easily accomplished with the Eta or NGM; however, due to the uneven spacing of the MRF grid, MRF data are first vertically interpolated to 50-hPa spacing. For mixing ratio above 300 hPa, the Utah LAM interpolates between the 300-hPa value and an assumed value of zero at 150 hPa. The initial analysis of moisture above 300 hPa for the MM5 assumes a relative humidity with respect to water of 10%.
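The vertical resampling step for MRF data can be sketched as follows. The temperature profile is hypothetical, and SciPy's `CubicSpline` stands in for whatever spline routine the Utah LAM pre-processor actually uses; only the mandatory-level spacing and the 50-hPa target grid come from the text:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical temperatures (K) on the unevenly spaced MRF mandatory levels (hPa).
p_mrf = np.array([1000, 850, 700, 500, 400, 300, 250, 200, 150, 100], dtype=float)
t_mrf = np.array([285, 278, 270, 252, 242, 229, 221, 213, 209, 205], dtype=float)

# CubicSpline requires a strictly increasing abscissa, so fit against reversed pressure.
spline = CubicSpline(p_mrf[::-1], t_mrf[::-1])

# Resample to the even 50-hPa spacing the Utah LAM pre-processing expects.
p_even = np.arange(1000.0, 99.0, -50.0)
t_even = spline(p_even)
print(p_even[:3], t_even[:3])
```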

Lateral and top boundary conditions are computed each time step, with linear time interpolation from the outer model’s forecasts. The outer model’s forecasts are available every 6 h in the case of the Eta and NGM models and every 12 h for the MRF model. All MM5 and Utah LAM model experiments are run with Eta initial and lateral boundary conditions unless specifically noted.

The Utah LAM uses two strategies to acquire its boundary conditions. The first is a one-way interacting boundary condition with Davies (1976) nudging, implemented by nudging the five grid points adjacent to the boundary toward outer-model values, with the nudging decreasing with distance from the boundary to provide a smooth transition. The advantages of Davies nudging are that it is easy to implement and provides the Utah LAM with crude knowledge of features near its boundary. However, some large-scale features may be better predicted by the larger-scale models. This motivates the second method, spectral relaxation with Davies nudging (Waldron et al. 1996), in which internal wavenumbers 0–2, which may be better represented by the outer model, are relaxed toward the outer model’s values. Only the temperature forecast above about 2 km is spectrally nudged; winds are not nudged, to allow LAM-consistent mass-flow adjustment. Higher wavenumbers are allowed to evolve from the LAM forecast, and Davies nudging is still applied at the outer five grid points.
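The five-point boundary relaxation can be sketched as below. The linear weight profile and maximum coefficient are illustrative choices, since the text does not give the actual Utah LAM nudging coefficients:

```python
import numpy as np

def davies_nudge(field, outer, n_relax=5, alpha_max=0.5):
    """Nudge the n_relax grid points adjacent to each lateral boundary toward
    the outer model's values, with weight decreasing away from the boundary.
    (Illustrative weights only; not the Utah LAM's actual coefficients.)"""
    field = np.asarray(field, dtype=float).copy()
    ny, nx = field.shape
    j, i = np.mgrid[0:ny, 0:nx]
    # Distance (in grid points) to the nearest lateral boundary.
    dist = np.minimum(np.minimum(i, nx - 1 - i), np.minimum(j, ny - 1 - j))
    w = alpha_max * np.clip(1.0 - dist / n_relax, 0.0, 1.0)
    return field + w * (outer - field)

inner = np.zeros((20, 20))   # LAM field
outer = np.ones((20, 20))    # outer-model field
nudged = davies_nudge(inner, outer)
print(nudged[0, 0], nudged[10, 10])  # boundary point pulled toward 1, interior untouched
```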

The MM5 also applies its lateral boundary conditions through a five-gridpoint nudging term. The MM5 scheme includes the diffusion term used in Davies nudging but also contains a Newtonian relaxation term. Vertical velocity is not nudged; it is set to zero at the boundary and varies freely elsewhere.

3. Data and methods

a. Data resources

Output from all models was archived in GEMPAK grid format and in pressure coordinates, with operationally available forecast grids from the NCEP models downloaded via the Internet. Output from the Eta and NGM models was available on the NGM super-C grid, which has a horizontal resolution of 80 km and a vertical resolution of 50 hPa (Fig. 3a). This is the native horizontal grid for the NGM; however, this is a degraded grid for the Eta Model. The Meso Eta Model was available on a Lambert conformal Automated Weather Information Processing System grid with a 40-km horizontal resolution, 25-hPa resolution from 1000 to 600 hPa, and 50-hPa resolution above 600 hPa (Fig. 3b). MRF forecasts were provided on a polar stereographic grid with a horizontal grid spacing at 60°N of 381 km and on mandatory pressure levels located at 1000, 850, 700, 500, 400, 300, 250, 200, 150, and 100 hPa (Fig. 3c). The Utah LAM and MM5, which were run at the University of Utah, were stored in GEMPAK format on their native horizontal grids, with output from their vertical coordinate systems interpolated to pressure coordinates every 50 hPa.

b. Verification scores

Two verification skill scores are used in this paper: bias error and mean square error (mse). Both scores were calculated for temperature, dewpoint, height, relative humidity, and wind.

Bias error measures the inclination of a model to overforecast or underforecast a value. Thus, if a model has a positive bias error, on average the model overforecasts or exceeds the observed value. The bias error, B, for a given variable, x, is defined as
$$B = \frac{1}{N}\sum_{i=1}^{N}\left(x_i^{f} - x_i^{o}\right), \tag{3.1}$$
where N is the total number of forecasts and the superscripts f and o signify forecast and observed values, respectively. Mse measures the typical size of model forecast errors, tends to give more weight to large errors, and is defined as
$$\mathrm{mse} = \frac{1}{N}\sum_{i=1}^{N}\left(x_i^{f} - x_i^{o}\right)^{2}. \tag{3.2}$$
A slightly different approach is used for the vector wind. Bias error is computed using
$$B = \left[\left(\frac{1}{N}\sum_{i=1}^{N}\left(u_i^{f} - u_i^{o}\right)\right)^{2} + \left(\frac{1}{N}\sum_{i=1}^{N}\left(v_i^{f} - v_i^{o}\right)\right)^{2}\right]^{1/2}, \tag{3.3}$$
while mse is calculated from
$$\mathrm{mse} = \frac{1}{N}\sum_{i=1}^{N}\left[\left(u_i^{f} - u_i^{o}\right)^{2} + \left(v_i^{f} - v_i^{o}\right)^{2}\right], \tag{3.5}$$
where u and v are the zonal and meridional components of the wind, respectively. The bias error is computed so that oppositely signed u and v biases will not cancel each other.
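The four scores reduce to a few lines of code (a sketch; the function names are ours, and the component forms for the wind scores follow the stated requirement that oppositely signed u and v biases not cancel):

```python
import numpy as np

def bias(f, o):
    """Bias error: mean forecast-minus-observed difference."""
    return np.mean(np.asarray(f, float) - np.asarray(o, float))

def mse(f, o):
    """Mean square error."""
    return np.mean((np.asarray(f, float) - np.asarray(o, float)) ** 2)

def wind_bias(uf, uo, vf, vo):
    """Vector-wind bias: component biases combined in quadrature,
    so oppositely signed u and v biases cannot cancel."""
    return np.hypot(bias(uf, uo), bias(vf, vo))

def wind_mse(uf, uo, vf, vo):
    """Vector-wind mse: sum of the component mean square errors."""
    return mse(uf, uo) + mse(vf, vo)

print(bias([1.0, 2.0, 3.0], [0.0, 0.0, 0.0]))  # 2.0
print(wind_bias([1.0], [0.0], [0.0], [1.0]))   # sqrt(2): the signs do not cancel
```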

c. Verification procedure

January, February, and March 1996 model forecasts, initialized at 0000 UTC and including only those times for which output from all models was available, were evaluated using gridded analyses and rawinsonde observations. Performance relative to gridded analyses involved calculating each model’s bias error and mse relative to that respective model’s gridded analysis. This was done over the region encompassing the Utah LAM domain (Fig. 1), except for the MM5, which was evaluated over its own domain area (Fig. 2). Bias error and mse were calculated at every model gridpoint location on the 700-, 500-, and 300-hPa surfaces.

Such a validation using gridded analyses provides information concerning the regional distribution of bias errors and mse’s. Although this may seem to be an ideal approach with which to verify a model forecast, there are significant drawbacks, which are illustrated in the next section and include the following.

  1. The observational data used to generate gridded analyses are often of significantly lower resolution than forecast output from highly resolved models. Thus, features predicted by such models are often smaller in scale than can be resolved in a gridded analysis. This artificial error source is mitigated to some degree by the use of short-term forecasts as first-guess analysis fields.
  2. Data assimilation systems are not perfect and errors associated with their analyses contribute to the apparent error in a model forecast.
  3. Because initial conditions for the Utah LAM and MM5 are provided by interpolation of the 80-km Eta initial analyses, forecasts from these models include significantly more detail than their initial analyses.

As a second measure of forecast skill, bias error and mse were also determined relative to rawinsonde observations over the western United States (Fig. 4).4 This rawinsonde-based validation involved horizontally and vertically interpolating from each model’s grid to the upper-air station location and mandatory pressure levels located at 850, 700, 500, 400, and 300 hPa.

4. Gridded forecast error fields

a. Bias error

Twenty-four-hour 700-hPa temperature forecast bias errors are presented in Fig. 5. The smallest peak bias errors are produced by the MRF, which is available on a grid that features significantly lower horizontal resolution than that of other models. Peak temperature bias errors from the most highly resolved models (Utah LAM, MM5, and Meso Eta) approach or exceed 2°C, significantly larger than the peak bias errors of less than 0.5°C (1°C) in the MRF (NGM). The most highly resolved model bias-error analyses appear to reflect the influence of topographic features, such as the elevated regions of central Colorado, central Nevada, and the Wasatch Mountains of Utah. Each of these regions has positive (or relatively reduced negative) forecast bias errors. It is unclear whether this reflects actual warming above these areas, which is not resolved by the observing system, or systematic forecast biases that are accentuated by the higher-resolution models over high terrain. All models, except the MRF, display negative temperature bias errors over the central intermountain region.

Temperature bias errors do not grow substantially from 12 to 36 h in any of the forecast models (not shown). Furthermore, bias error analyses of other model fields (wind, relative humidity, and geopotential height) exhibit patterns similar to that of temperature with bias error analyses from the more highly resolved models reflecting the influence of topographic features. This is consistent with the hypothesis that the bias errors are produced primarily by model spinup from an imbalanced initial state. This spinup includes mesoscale adjustment to local topographic features, resulting in significant bias errors when the forecast is compared to a gridded analysis based on lower-resolution observational data. The bias errors may also reflect smoothing inherent in the analysis fields used for verification.

b. Mse

Figure 6 displays mse analyses for 24-h 700-hPa temperature forecasts. Peak mse’s are ∼3°C for the Utah LAM, MM5, and Meso Eta, and slightly more than 1°C for the MRF model. Regions of particularly large mse’s for the more highly resolved Utah LAM, MM5, and Meso Eta are located over Wyoming, Colorado, and Arizona. Note that peak mse’s are approximately twice as large as the peak bias errors (cf. Figs. 5, 6).

Because mse includes both bias error and the variable portion of the error field, the fact that mse is typically substantially larger than the bias error suggests that systematic model biases may not limit model skill as much as deficiencies that are common to all models. One such deficiency is initial condition uncertainty, particularly since the western United States lies downstream of the relatively data-sparse Pacific Ocean. Presumably, this error source should affect all models similarly, although more sophisticated data assimilation systems may produce somewhat improved initial analyses.
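The relationship invoked here is the exact decomposition mse = (bias)² + error variance, which a quick numerical check illustrates (the synthetic error distribution is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical forecast errors: a 0.5°C systematic bias plus random variability.
err = 0.5 + 1.5 * rng.standard_normal(10000)

mse = np.mean(err ** 2)
bias = np.mean(err)
variance = np.var(err)  # the "variable portion" of the error field

# mse decomposes exactly into squared bias plus error variance, so
# mse much larger than bias^2 means nonsystematic error dominates.
print(np.isclose(mse, bias ** 2 + variance))  # True
```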

The relatively superior forecasts produced by the MRF model are due partly to a lack of forecast and validation detail in the coarser fields used for this model’s forecasts and verification. The MRF model consequently has “less opportunity for failure” than other, more highly resolved models. This is a common problem for validation of mesoscale model performance. The next section presents verification statistics performed only at observation sites, which partly alleviates this difficulty.

5. Rawinsonde validation

Difficulties associated with the validation of model forecasts with gridded analyses are not as serious if the validation is done using point observations, such as those from rawinsondes, provided a sufficiently large number of observations are used. In this section, we demonstrate that 12- and 24-h mse’s are, on average, smallest for the Meso Eta. This is consistent with the relative advantages the Meso Eta Model possesses, particularly regarding horizontal and vertical resolution, a later cutoff time for initial data ingestion, special numerical treatment of orography, and the fact that data assimilation for initial conditions is performed on the Meso Eta grid.

a. Bias error

Table 1 displays bias errors of temperature, geopotential height, relative humidity, and vector wind at 300, 500, and 700 hPa for the initial analysis, 12-, 24-, and 36-h forecasts. The lowest (highest) bias error in each forecast category is identified with bold (italic) font. The overall category summarizes the number of times (out of a possible 12) that each model scored the lowest bias error. An idealized “best” model would have achieved the lowest bias error 12 times.

Bias errors in the initial analyses show little variation between the models (Table 1a). The Meso Eta and MRF models produce the lowest bias errors in three categories each, whereas the NGM has the lowest bias errors in two categories. The other models have one or fewer lowest bias errors. Initial bias errors exist because model analyses do not provide an exact fit to observations and because of interpolation from the model grid to the sounding location. The fact that the NCEP operational models exhibit the smallest initial bias errors suggests that the initial analyses for the Utah LAM and MM5, which are produced by interpolation of Eta Model analyses, may be slightly inferior to the NCEP operational analyses.

Table 1a illustrates that initial bias errors for temperature are typically 0.5°C or less, geopotential height bias errors are typically 10 m or less, relative humidity bias errors are of the order of a few percent, and wind vector bias errors have magnitudes less than 1 m s−1. A few relatively high values (e.g., lower-tropospheric temperature bias errors exceeding 1°C in the Meso Eta, upper-tropospheric humidity bias errors exceeding 10% in the Meso Eta, geopotential height bias errors exceeding 10 m in the MM5) do not appear to propagate into the forecast since the 12-h forecasts exhibit smaller values (Table 1b).

At 12 h, the Eta and the MRF each have the smallest bias error in two categories and the Meso Eta has the smallest bias error in four categories (Table 1b). As in the case of the gridded analyses, the bias errors have variable sign from one model to the next. For example, the operational NCEP models and the MM5 exhibit negative middle- and lower-tropospheric bias errors of about 0.5°C or less, whereas the Utah LAM shows slightly larger positive bias errors at these levels. All models display a positive upper-tropospheric temperature bias, peaking at about 1°C for the Utah LAM.

Geopotential height bias errors at 12 h have variable sign and show slight growth from the initial time. The largest height bias errors (∼10 m) and temperature bias errors (∼1°C) are produced by the Utah LAM. This may reflect the impact of the anelastic approximation, which neglects variations in density due to variable weather in both the hydrostatic and continuity equations. These bias errors do not grow with time (Tables 1c and 1d) and other models have similar bias values by 36 h.

All of the operational models have positive relative humidity bias errors at 12 h, whereas the research models have negative humidity bias errors at most levels. Wind vector bias errors are typically ∼1–2 m s−1 at 12 h.

The lower- and middle-tropospheric cold bias exhibited by the operational NCEP and MM5 models persists at 24 h (Table 1c) and appears to diminish at 36 h in some of these models (Table 1d). The NGM displays some amplification in geopotential height bias error from 12 to 36 h, whereas the other models show little systematic bias change with time.

An important question concerns the statistical significance of these bias errors. To measure the statistical significance of the differences between models, a Student’s t-test is used. The null hypothesis is that the models, run over an infinite number of cases, would have the same score. Rejecting this hypothesis at the 95% confidence level indicates that a difference as large as the one observed would arise by chance less than 5% of the time.

For temperature bias error at all levels and forecast hours, a difference of approximately 0.1°C is significant at the 95% confidence level. For example, at the initial time (Table 1a) at 700 hPa, the difference between the Eta and Meso Eta bias errors (0.7°C) exceeds 0.1°C and is therefore statistically significant at the 95% confidence level. At 500 hPa, however, the difference (0.08°C) is less than 0.1°C and is not statistically significant. For other categories and times, the 95% confidence level corresponds to a difference of 0.6 m at the initial time and 2.6 m at 36 h for geopotential height, 1.1% at the initial time and 3% at 36 h for relative humidity, and 0.55 m s−1 at the initial time and 1.4 m s−1 at 36 h for wind.
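The significance test can be sketched as a paired Student's t statistic on matched forecast errors; the paper does not give its exact formulation, so this is one plausible version:

```python
import numpy as np

def paired_t(err_a, err_b):
    """Paired Student's t statistic for the mean difference between two
    models' errors at matched forecast times (one plausible formulation;
    the paper's exact test is not specified)."""
    d = np.asarray(err_a, float) - np.asarray(err_b, float)
    return np.mean(d) / (np.std(d, ddof=1) / np.sqrt(d.size))

# With roughly 90 paired 0000 UTC forecasts (three winter months),
# |t| above about 2 rejects the equal-score null at the 95% level.
t = paired_t([1.0, 2.0, 3.0], [0.0, 0.0, 0.0])
print(t)  # 2*sqrt(3): a consistent 2.0 offset over only three cases
```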

b. Mse

Table 2 displays temperature, geopotential height, relative humidity, and wind mse’s at 300, 500, and 700 hPa for the initialization time, 12-, 24-, and 36-h forecasts. The Meso Eta and Eta initial analyses have the smallest mse’s in three categories, while the MM5, MRF, and NGM each have best fits in two categories (Table 2a). Initial analyses are typically within ∼1°C for temperature, 15 m or less for geopotential height, 15% or less for relative humidity, and 5 m s−1 or less for wind. Temperature and geopotential height mse’s amplify to 2°–2.5°C and 30–45 m by 36 h, respectively (Table 2d). Relative humidity mse’s approach 20%–30% by 36 h and upper-tropospheric wind errors exceed 10 m s−1.

Although the Utah LAM produces the largest bias errors of temperature and geopotential height (Table 1), due possibly to its use of the anelastic approximation, the mse’s do not show a similar trend. In particular, at 24 h, the Utah LAM temperature and geopotential height mse’s are within 10% of those for the MM5, which is the only nonhydrostatic model of the tested group and the most general model with respect to dynamical processes (Table 2c). The wind and relative humidity mse’s for the MM5 are systematically smaller than those of the Utah LAM at 24 h (Table 2c). Similar superiority of the MM5 wind and moisture forecasts relative to the Utah LAM is found at 12 h (Table 2b) and at 36 h (Table 2d).

The overall categories at the bottom of Table 2 summarize the number of times that each model scored the lowest mse. As previously mentioned, the Meso Eta and Eta have the best initial analysis in three categories (Table 2a). The Meso Eta produces the best forecast at 12 h in seven categories, the MRF in three, and the MM5 and Eta in one category each (Table 2b). At 24 h, the Meso Eta is best in seven categories, followed by the MRF in four and the MM5 in one (Table 2c).

Through 24 h, the results are generally consistent with expectations. The MM5 appears to be the best of the research models. This is not surprising in view of its superior dynamical treatment relative to the Utah LAM and the fact that the MM5 uses 27 levels while the Utah LAM has only 20. The best operational model at 12 and 24 h appears to be the Meso Eta, which has the finest resolution and a specialized treatment of topography that might offer particular benefits over the western United States.

It is useful to note that when a sufficiently large sample of validation points over a sufficient number of forecasts is used, the most highly developed operational (Meso Eta) and research (MM5) models produce the best forecasts at short-range timescales (12 and 24 h). This suggests that one possible solution to the problem of mesoscale model validation is to use a large number of observation points and a large number of model forecasts.

By 36 h, the MRF model becomes the most accurate model in 7 out of 12 categories, while the Meso Eta is most accurate in four and the MM5 in one (Table 2d). These results suggest that the MRF may be the superior model by 36 h or is beginning to perform better relative to the other models. One interpretation of this finding is that the weather at 36 h over the western United States is dominated by large-scale influences, initially situated over the relative data void of the Pacific Ocean. By 36 h, initial condition uncertainty, which is common to all models, dominates the forecasts to the point that models that have superior local resolution and detailed topographic resolution have no obvious advantage relative to operational global models executed at about wavenumber 100 resolution. The superiority of the MRF at this time may also be due to the lack of lateral boundary discontinuities that may degrade the solution of regional models. For example, the Eta and Meso Eta Models obtain lateral boundary conditions from 12-h- and 15-h-old MRF forecasts and error growth at the lateral boundaries could be limiting model skill. Such problems related to limited area models have been recently summarized by Warner et al. (1997). If this hypothesis is correct, it may be expected that at longer integration times, the accuracy of the limited area models would increase with distance from their lateral boundaries. Some support for this hypothesis is provided by Table 3, which displays the 36-h mse’s at SLC where the Meso Eta and MRF show approximately equal skill.

Bar graphs illustrate that all model mse’s are generally of the same order of magnitude and that the difference in mse between the models is significantly smaller than the mse itself (Fig. 7). This suggests that errors produced by these models may be due to a common problem, either in design or the initial analysis of the atmosphere.

Although the MRF and Meso Eta show approximately equal skill at 36 h relative to rawinsonde observations, one might expect the Meso Eta to exhibit superior skill in the forecasting of mesoscale variables such as precipitation. To examine this hypothesis, 24-h accumulated precipitation forecasts (forecast hours 12–36) from the MRF, Meso Eta, and an 18-km version of the MM5 were evaluated for fall (October, November, and December) 1997 using 24-h accumulated precipitation observations from 787 sites in the intermountain west (obtained from the National Oceanic and Atmospheric Administration’s Climate Prediction Center precipitation dataset). Figure 8 shows that for light precipitation categories, the coarsely resolved MRF produces the smallest errors while the highly resolved MM5 produces the largest. The MM5 and Meso Eta, however, do produce clearly superior forecasts at higher precipitation categories. This superior mesoscale model skill in heavier precipitation partly reflects the inability of the coarse-resolution MRF to resolve heavy rainfall events. Nevertheless, the higher-resolution models appear to provide useful skill in predicting heavy precipitation events out to at least 36 h.

c. Sensitivity to initial and boundary conditions

Previous sections have included an evaluation of a real-time version of the Utah LAM, which obtained initial, lateral, and upper boundary conditions from the Eta Model. In this section, we compare the performance of these Utah LAM forecasts with additional simulations run with NGM and MRF initial, lateral, and upper boundary conditions to examine how such configurations impact model skill. The Utah LAM was also run using spectral relaxation with Davies nudging from Eta and MRF boundary conditions. Results from all five Utah LAM configurations at 24 and 36 h are presented in Table 4.

At 24 h, the results are generally consistent with the relative skill of the models providing the initial, lateral, and upper boundary conditions. Experiments with Eta Model conditions produce the most accurate forecast in 9 of 12 categories, while MRF conditions are most accurate in 3 categories. The NGM conditions produce the least accurate results in 11 categories, clearly indicating that locally run limited area models driven by this model produce inferior results.

At 36 h, the Eta and MRF boundary conditions each produce the most accurate forecast in six categories. In the previous section, the MRF was found to be the most accurate model at 36 h, yet Eta boundary conditions prove equally useful out to 36 h. This may be because MRF boundary conditions are available only every 12 h, whereas Eta conditions are available every 6 h.

Use of interior spectral nudging results in the most accurate forecast in 8 of the 12 categories at 24 h and 7 categories at 36 h. Although spectral nudging shows the most accurate forecasts, the improved accuracy is only slight. The biggest improvements in temperature are generally less than 0.1°C at all levels and in wind are generally less than 1 m s−1. Results are mixed for relative humidity and geopotential height.

6. Forecast variability

The previous section shows that when sufficient averaging is done over many stations and many forecasts, the results are consistent with expectations, with the most highly developed and resolved operational and research models producing the best results in their respective categories at 12 and 24 h. To better understand the vertical and temporal distribution of model errors, we examine skill scores at the Salt Lake City rawinsonde site during February 1996. As was done previously, these skill scores were evaluated at 700, 500, 400, and 300 hPa.

a. Salt Lake City mse profiles

Vertical profiles of 24-h (left) and 36-h (right) temperature, geopotential height, relative humidity, and wind mse’s at Salt Lake City are displayed in Fig. 9. At 24 h, four different models display the best temperature skill at the four different levels that are plotted, including the MM5 at 500 hPa and the Utah LAM at 400 hPa (Fig. 9a). At 36 h, the Meso Eta Model produces the best forecast at 700 and 500 hPa, and there is little contrast in model performance above these levels (Fig. 9b). The NGM exhibits the largest 36-h temperature forecast errors at all levels.

The MM5 produces the best average upper-tropospheric height forecast and the MRF produces the best results at and below 500 hPa at 24 h (Fig. 9c). The MRF is clearly the superior model at all levels by 36 h (Fig 9d). The relative humidity forecast errors at Salt Lake City show great variability, and the only generally consistent result is the inferior NGM forecast at 24 and 36 h (Figs. 9e,f). There is also significant variability in the quality of zonal wind forecasts, although the Eta and Meso Eta generally have the lowest mse’s at 24 h, the MRF generally performs best at 36 h, and the NGM is clearly inferior at most levels (Figs. 9g,h). In summary, Salt Lake City verification statistics for 24 and 36 h for February show the 36-h geopotential height field prediction by the MRF to be the only clearly superior forecast. The NGM appears to have the worst statistics for 24-h height as well as 24-h relative humidity and 24- and 36-h wind forecasts.

Among the research models, the MM5 is clearly superior to the Utah LAM for the 24-h height forecast. The MM5 and Utah LAM have similar verification statistics for temperature, relative humidity, and wind prediction at both 24 and 36 h and for height at 36 h. These models are generally competitive with, or only slightly inferior to, the operational NCEP models for most forecast variables at 24 and 36 h.

The results suggest that it is possible to produce generally competitive 1-day station forecasts within the central regions of research models spanning ∼4 × 106 km2 even when smoothed operational model analyses and forecast guidance at 6-h intervals are used for initial and boundary conditions. It is difficult to estimate how much the remaining forecast gap between the best operational and currently tested research models may be reduced if more detailed ambient states and higher temporal resolution boundary conditions were used by the research models.

b. Time series

Time series of forecast errors relative to Salt Lake City rawinsonde observations at 0, 12, 24, and 36 h are presented in Figs. 10–13 (note gaps in forecast availability). The time series of 700-hPa temperature error shows that this variable is usually within 2°C of observed, although the NGM and MRF occasionally exhibit larger discrepancies (Fig. 10). All models have magnified errors that peak at several degrees late in the first week of the month. This is followed by a more predictable period, featuring generally small errors, until about 17 February. Maximum forecast errors, occasionally exceeding 5°C in magnitude at 36 h, arise during the week of 17–24 February, with the more highly resolved Meso Eta and MM5 tending to have smaller errors during this period.

Figure 11 depicts time series of 500-hPa geopotential height errors at Salt Lake City. Interestingly, all models except the NGM display positive 24- and 36-h errors on 6 February and around 20 and 27 February. This may be due to the fact that the NGM is the only model that uses a different global-scale outer model prediction, while all the other models depend on global-scale guidance from the MRF. All 36-h forecasts from 24 February produce negative height errors that approach or exceed −50 m. This suggests common deficiencies in specification of the initial state. Similar to the temperature fields, there are periods of relatively large errors early and late in the month, with relatively less error during the second week in all models. These results are consistent with the expected regime dependence of atmospheric predictability.

Time series of 700-hPa relative humidity errors at Salt Lake City are displayed in Fig. 12. All models have maximum 24- and 36-h forecast errors that approach or exceed 50%, demonstrating the difficulties in predicting the moisture field, which typically is characterized by large mesoscale gradients. There is no evidence of significant alleviation of this problem with finer resolution, as the mesoscale models experience peak errors that are similar to those found in the cruder-resolution MRF. Although there are occasionally similar errors of moisture among these models, there is also much variability consistent with the differing parameterizations of moist processes.

Figure 13 shows time series of 300-hPa zonal wind forecast errors. Again, there is more commonality in errors at certain times. Each model has negative 24- and/or 36-h forecast errors of 10 m s−1 or more around 20 February. A 10 m s−1 wind error acting over 12 h corresponds to an ~400-km advection error, which would produce zero predictive skill (quarter-wavelength phase error) in a 1600-km wave in about half a day. Given the large advection problems implied by these wind errors, the low predictive skill of all models for moisture (Fig. 12), which is often dominated by mesoscale features, is not surprising. Improving moisture forecasts will likely require better initialization of the mesoscale structure of the moisture field as well as better prediction of the ambient circulation.
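The advection arithmetic above is easy to verify:

```python
# Back-of-envelope check of the advection-error argument.
wind_err = 10.0                  # m/s wind forecast error
displacement_12h = wind_err * 12 * 3600 / 1000.0
print(displacement_12h)          # 432.0 km in 12 h, i.e. the ~400 km quoted

# A quarter-wavelength phase error (zero skill) for a 1600-km wave is 400 km;
# a 10 m/s error accumulates that displacement in about half a day.
hours_to_zero_skill = (1600.0 / 4) * 1000.0 / wind_err / 3600.0
print(hours_to_zero_skill)       # about 11.1 h
```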

7. Conclusions

The short-range (0–36 h) performance of four operational and two research models has been evaluated for three winter months of 1996. Bias errors and mse’s were evaluated using gridded analyses and rawinsondes. Results using gridded analyses show that the bias error varies considerably from model to model. One common feature of the higher-resolution models is that their bias errors and mse’s exhibit more detail and amplitude, which may reflect the fact that the observational data used to generate gridded analyses are often of significantly lower resolution than the forecast output. Thus, some of the apparent errors reflect forecast information on spatial scales that are too small to be verified by the observing system used for validation.
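As a concrete illustration of how such gridpoint statistics are formed, a minimal sketch follows; the function name, array shapes, and toy data are our own assumptions, not the paper's procedure.

```python
import numpy as np

def bias_and_mse(forecasts, analyses):
    """Gridpoint bias and mean-square error over a set of forecast cases.

    forecasts, analyses: arrays of shape (ncases, ny, nx). The bias is the
    time-mean error at each grid point; the mse averages the squared error
    and therefore contains both the bias and the time-varying error.
    """
    err = forecasts - analyses
    return err.mean(axis=0), (err ** 2).mean(axis=0)

# Toy example (hypothetical data): a forecast with a constant +1 bias
# plus random transient error.
rng = np.random.default_rng(0)
truth = rng.normal(size=(30, 4, 4))
fcst = truth + 1.0 + 0.5 * rng.normal(size=truth.shape)
bias, mse = bias_and_mse(fcst, truth)
# At each point, mse = bias**2 + variance of the transient error,
# so the mse is never smaller than the squared bias.
```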

Model bias errors vary in both magnitude and sign among the different models, implying that different model formulations produce systematically different forecast behavior. Some systematic biases are nonetheless common to several models, such as the cold bias in 24-h 700-hPa temperature over Wyoming and Montana (Fig. 5). The bias errors are almost fully developed by 12 h, suggesting that they largely reflect model adjustment to regional topographic forcing and that this adjustment occurs relatively quickly.

The mse’s of each model are typically at least twice as large as the model biases. This implies that most of the variance of the forecast error is produced by the time-varying portion of the error field, and that simple statistical correction of forecast errors using bias information alone would not be particularly useful for any of the tested models. This conclusion may change for longer integrations.
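This inference follows from the standard decomposition of the error into a constant bias and a transient part (interpreting the quoted factor of two as applying to the root-mean-square error, an assumption on our part):

```latex
e(t) = b + e'(t), \qquad \overline{e'} = 0,
\qquad \mathrm{mse} = \overline{e^{2}} = b^{2} + \overline{e'^{2}} .
```

If the rms error $E=\sqrt{\mathrm{mse}}$ satisfies $E \ge 2|b|$, then $\overline{e'^{2}} = E^{2} - b^{2} \ge 3b^{2}$, so the transient component accounts for at least three-quarters of the mse, and removing the bias alone would leave most of the error intact.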

Forecast errors at 12 and 24 h, calculated using rawinsonde observations, show the superior performance of the Meso Eta, which is the most highly resolved and developed operational model. They also show that the MM5, which is the most developed research model in this study, produces superior forecasts relative to the Utah LAM in most categories. These results suggest that even if mesoscale data are not available for validation, the use of a large number of widely spaced observations over a large sample may overcome some of the difficulties of validating high-resolution numerical weather prediction models. Without detailed observations, however, mesoscale structure is still not being validated.

Similar validation at 36 h shows that the MRF provides the best forecast in most validation categories, even though this model has the crudest resolution of the initial state, the topography, and the dynamical processes. One possible interpretation is that the deterministic skill added by a mesoscale model is exhausted beyond the point at which the inherent predictability limitations of the transient portion of the mesoscale pattern oppose further gains. The present results indicate that this occurs over the western United States between 24 and 36 h. Under this interpretation, the value added by mesoscale information in high-resolution models is lost to phase and amplitude errors in the individual mesoscale structures.

This interpretation is supported by the observation that all of the forecast models exhibit mse’s in the upper-tropospheric wind that exceed 10 m s−1 by 36 h (Table 1d), which would produce 12-h advection errors exceeding 400 km. Mesoscale components of the total spectrum with wavelengths less than 1600 km then experience upper-tropospheric phase errors on the order of a quarter wavelength (or more) after only 12 h, producing zero or negative skill over a similar interval. The relatively superior mesoscale model forecast skill in the heavier precipitation categories at 36 h (section 5b) for fall 1997 partly reflects the inability of the coarse-resolution MRF to resolve heavy rainfall events at points, as well as the inclusion of a 12–36-h (rather than just 36-h) verification period. The result also indicates useful precipitation skill in the mesoscale models out to 36 h.

The period of useful regional mesoscale prediction could presumably be extended by more accurate initial mesoscale fields and by improved forecasts of the ambient circulation that modulates and advects the smaller scales. It is plausible that the MRF, the only spectral and global model of the tested group, provides relatively superior treatment of the larger-scale flow components. This may reflect the lower truncation error of spectral methods relative to gridpoint methods, as well as the fact that a global model requires no lateral boundary interfaces and the resulting numerical transition zones, as the more highly resolved regional models do.

The present results are consistent with the inference that prediction of relatively smaller-scale circulations over the intermountain west may be enhanced at 12 and 24 h by improved local specification of the initial state. Whether such regional observational enhancement would be locally useful at 36 h and beyond is more doubtful, and the relative role of improved observations on the regional versus remote global-scale initial state is an important area for continued investigation.

These conclusions may need modification in other areas. The eastern portion of the United States is situated farther from the relative data void of the Pacific Ocean, and here the period of useful ambient state and mesoscale predictability may consequently be lengthened. The degree of fixed local orographic forcing of a strongly deterministic forecast signal is, however, considerably less in this region than it is over the western United States, and this may diminish the mesoscale predictability east of the Rockies. It would be interesting to repeat the present study over a subdomain centered east of the Rocky Mountains to determine which effect dominates.

The present approach has only emphasized objective validation statistics using relatively coarse observational data. As a result, the utility of forecast guidance by mesoscale regional models is probably underestimated. Regional mesoscale models show considerable skill in predicting a number of local, orographically driven circulations, including gap winds, downslope windstorms, lee waves, and topographic organization of precipitation (Smith et al. 1997). Many of these features are simply not resolved by larger-scale models such as the MRF.

The fact that the MRF is more skillful at 36 h than the higher-resolution regional models, according to the measures used here, partly reflects the fact that two of the tested forecast parameters (temperature and geopotential height) describe mainly the larger synoptic scales, one (wind) varies mainly on medium to smaller synoptic scales, and only relative humidity and precipitation typically have strong mesoscale variability. It would be instructive to reexamine the present conclusions using validation procedures that emphasize the relatively smaller-scale signal present in the wind and moisture fields. The recent expansion of the Next Generation Radar (NEXRAD) network into the western United States and the wind profiler network over the midwestern United States may provide a means to do this.

The present experiment design makes it difficult to objectively intercompare the skill of the research and operational models. The operational models benefit from initial states that are internally compatible with the forecast model, which is generally used to provide internally consistent first-guess estimates of the initial state. No comparable internally consistent initialization is available for the research models, which rely on interpolated NCEP gridded fields for their initial states. The research models are also disadvantaged by boundary guidance available only at 6-h intervals and by forecast domains that are more limited than those of the operational models.

The research models have certain advantages in the experiment design. Their forecasts are available at full model resolution, unlike those of the operational models, for which only interpolations to a smoothed grid were available. The research models also benefit from retrospective prediction and from a greater emphasis on prediction over the western United States. In addition, the Utah LAM used several different methods of implementing outer-model guidance through spectral nudging.

These incompatibilities in experiment design are the reason that much of the summary discussion compares the operational and research models separately. It is noteworthy that, despite the substantial differences in experiment design, all of the regional, higher-resolution models have mse’s of similar magnitude. Clear winners in the present intercomparisons emerge only after averaging over many forecasts. This result is consistent with the likelihood that all models suffer from common fundamental difficulties and that more accurate specification of the initial state on all spatial scales is a first-order requirement for more accurate prediction of the mesoscale.

Acknowledgments

This work was conducted with support provided to the University of Utah by National Science Foundation Grants ATM-9423311, ATM-9626380, ATM-9634191, and ATM-9714291, and NOAA Grant NA67WA0465. We are grateful to the National Centers for Environmental Prediction for the distribution of gridded model output and the University of Utah Center for High Performance Computing for providing a portion of the computer resources used for this study. Use of the MM5 was made possible by the Microscale and Mesoscale Meteorology Division of the National Center for Atmospheric Research, which is supported by the National Science Foundation. Suggestions made by Gary Carter and a careful reviewer helped to improve the quality of the paper.

REFERENCES

  • Anthes, R. A., Y. H. Kuo, D. P. Baumhefner, R. M. Errico, and T. W. Bettge, 1985: Predictability of mesoscale motions. Advances in Geophysics, Vol. 28, Academic Press, 159–202.

  • Astling, E. G., J. Paegle, E. Miller, and C. J. O’Brien, 1985: Boundary layer control of nocturnal-convection associated with a synoptic scale system. Mon. Wea. Rev.,113, 540–552.

  • Benjamin, S. G., and T. N. Carlson, 1986: Some effects of surface heating and topography on the regional storm environment. Part I: Three-dimensional simulations. Mon. Wea. Rev.,114, 307–329.

  • Betts, A. K., 1986: A new convective adjustment scheme. Part I: Observational and theoretical basis. Quart. J. Roy. Meteor. Soc.,112, 677–691.

  • ——, and M. J. Miller, 1986: A new convective adjustment scheme. Part II: Single column tests using GATE wave, BOMEX, and arctic air-mass data sets. Quart. J. Roy. Meteor. Soc.,112, 693–709.

  • Black, T. L., 1994: The new NMC Mesoscale Eta Model: Description and forecast examples. Wea. Forecasting,9, 265–278.

  • ——, D. Deaven, and G. DiMego, 1993: The step-mountain Eta coordinate model: 80 km “early” version and objective verifications. NWS Tech. Procedures Bull. 412, 31 pp. [Available from National Weather Service Office of Meteorology, 1325 East–West Highway, Silver Spring, MD 20910.].

  • Bougeault, P., 1992: Current trends and achievements of limited area modeling. Proceedings of the WMO Programme on Weather Prediction Research, PWPR Rep. Series 1, WMO/TD 479, Appendix 6, 19 pp. [Available from World Meteorological Organization, CP 2300, CH-1211, Genève 2, Switzerland.].

  • Chen, F., Z. Janjić, and K. Mitchell, 1997: Impact of atmospheric-surface layer parameterizations in the new land-surface scheme of the NCEP mesoscale Eta numerical model. Bound.-Layer Meteor.,85, 391–421.

  • Chou, M.-D., 1992: A solar radiation model for use in climate studies. J. Atmos. Sci.,49, 762–772.

  • Cotton, W. R., G. Thompson, and P. W. Mielke Jr., 1994: Real-time mesoscale prediction on workstations. Bull. Amer. Meteor. Soc.,75, 349–362.

  • Davies, H. C., 1976: A lateral boundary formulation for multi-level prediction models. Quart. J. Roy. Meteor. Soc.,102, 405–418.

  • DiMego, G. J., K. E. Mitchell, R. A. Petersen, J. E. Hoke, J. P. Gerrity, J. J. Tuccillo, R. L. Wobus, and H.-M. H. Juang, 1992: Changes to NMC’s regional analysis and forecast system. Wea. Forecasting,7, 185–198.

  • Dudhia, J., 1989: Numerical study of convection observed during the Winter Monsoon Experiment using a mesoscale two-dimensional model. J. Atmos. Sci.,46, 3077–3107.

  • ——, 1993: A nonhydrostatic version of the Penn State–NCAR Mesoscale Model: Validation tests and simulation of an Atlantic cyclone and cold front. Mon. Wea. Rev.,121, 1493–1513.

  • Fels, S. B., and M. D. Schwarzkopf, 1975: The simplified exchange approximation: A new method for radiative transfer calculations. J. Atmos. Sci.,32, 1475–1488.

  • Grell, G. A., 1993: Prognostic evaluation of assumptions used by cumulus parameterizations. Mon. Wea. Rev.,121, 764–787.

  • ——, J. Dudhia, and D. R. Stauffer, 1994: A description of the fifth-generation Penn State/NCAR Mesoscale Model (MM5). NCAR Tech. Note NCAR/TN 398+STR, 138 pp. [Available from National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307.].

  • Gyakum, J. R., and Coauthors, 1996: A regional model intercomparison using a case of explosive oceanic cyclogenesis. Wea. Forecasting,11, 521–543.

  • Harshvardhan, R. Davies, D. A. Randall, and T. G. Corsetti, 1987: A fast radiation parameterization for atmospheric models. J. Geophys. Res.,92, 1009–1016.

  • Hoke, J. E., N. A. Phillips, G. J. DiMego, J. J. Tuccillo, and J. G. Sela, 1989: The Regional Analysis and Forecast System of the National Meteorological Center. Wea. Forecasting,4, 323–334.

  • Horel, J. D., and C. V. Gibson, 1994: Analysis and simulation of a winter storm over Utah. Wea. Forecasting,9, 479–494.

  • Hsie, E.-Y., R. A. Anthes, and D. Keyser, 1984: Numerical simulation of frontogenesis in a moist atmosphere. J. Atmos. Sci.,41, 2581–2594.

  • Janjić, Z. I., 1994: The step-mountain Eta coordinate model: Further developments of the convection, viscous sublayer, and turbulence closure schemes. Mon. Wea. Rev.,122, 927–945.

  • Kalnay, E., M. Kanamitsu, and W. E. Baker, 1990: Global numerical weather prediction at the National Meteorological Center. Bull. Amer. Meteor. Soc.,71, 1410–1428.

  • Kanamitsu, M., 1989: Description of the NMC Global Data Assimilation and Forecast System. Wea. Forecasting,4, 335–342.

  • ——, and Coauthors, 1991: Recent changes implemented into the global forecast system at NMC. Wea. Forecasting,6, 425–435.

  • Klemp, J. B., and D. R. Durran, 1983: An upper boundary condition permitting internal gravity wave radiation in numerical mesoscale models. Mon. Wea. Rev.,111, 430–444.

  • Kuo, H. L., 1965: On the formation and intensification of tropical cyclones through latent heat release by cumulus convection. J. Atmos. Sci.,22, 40–63.

  • Lacis, A. A., and J. E. Hansen, 1974: A parameterization of the absorption of solar radiation in the earth’s atmosphere. J. Atmos. Sci.,31, 118–133.

  • Lorenz, E. N., 1969: The predictability of a flow which possesses many scales of motion. Tellus,21, 289–307.

  • Mellor, G. L., and T. Yamada, 1982: Development of a turbulence closure model for geophysical fluid problems. Rev. Geophys. Space Phys.,20, 851–875.

  • Mesinger, F., 1996: Improvements in quantitative precipitation forecasts with the Eta regional model at the National Centers for Environmental Prediction: The 48-km upgrade. Bull. Amer. Meteor. Soc.,77, 2637–2649.

  • ——, 1998: Comparison of the quantitative precipitation forecasts by the 48- and by the 29-km ETA Model: An update and possible implications. Preprints, 12th Conf. on Numerical Weather Prediction, Phoenix, AZ, Amer. Meteor. Soc., J22–J23.

  • Paegle, J., and D. W. McLawhorn, 1983: Numerical modeling of diurnal convergence oscillations above sloping terrain. Mon. Wea. Rev.,111, 67–85.

  • ——, and T. Vukicevic, 1987: On the predictability of low-level flow during ALPEX. Meteor. Atmos. Phys.,36, 45–60.

  • ——, R. A. Pielke, G. A. Dalu, W. Miller, J. R. Garratt, T. Vukicevic, G. Berri, and M. Nicolini, 1990: Predictability of flows over complex terrain. Atmospheric Processes over Complex Terrain, William Blumen, Ed., Amer. Meteor. Soc., 285–300.

  • ——, K. C. Mo, and J. N. Paegle, 1996: Dependence of simulated precipitation on surface evaporation during the 1993 United States summer floods. Mon. Wea. Rev.,124, 345–361.

  • ——, Q. Yang, and M. Wang, 1997: Predictability in limited area and global models. Meteor. Atmos. Phys.,63, 53–69.

  • Pan, H. L., and W. S. Wu, 1995: Implementing a mass flux convection parameterization package for the NMC medium-range forecast model. NWS Office Note 409, 43 pp. [Available from National Centers for Environmental Prediction, NOAA Science Center, Room 101, 5200 Auth Rd., Camp Springs, MD 20746.].

  • Petersen, R. A., G. J. DiMego, J. E. Hoke, K. E. Mitchell, J. P. Gerrity, R. L. Wobus, H. H. Juang, and M. J. Pecnick, 1991: Changes to NMC’s regional analysis and forecast system. Wea. Forecasting,6, 133–141.

  • Phillips, N. A., 1981: A simpler way to initiate condensation at relative humidities below 100 percent. NMC Office Note 242, 14 pp. [Available from National Meteorological Center, National Weather Service, Camp Springs, MD 20746.].

  • Roads, J. O., and T. N. Maisel, 1991: Evaluation of the National Meteorological Center’s Medium-Range Forecast Model precipitation forecasts. Wea. Forecasting,6, 123–132.

  • ——, ——, and J. Alpert, 1991: Further evaluation of the National Meteorological Center’s Medium-Range Forecast Model precipitation forecasts. Wea. Forecasting,6, 483–497.

  • Rogers, E., T. Black, D. Deaven, G. DiMego, Q. Zhao, Y. Lin, N. W. Junker, and M. Baldwin, 1995: Changes to the NMC operational Eta model analysis/forecast system. NWS Tech. Procedures Bull. 423, 51 pp. [Available from National Weather Service, Office of Meteorology, 1325 East–West Highway, Silver Spring, MD 20910.].

  • ——, ——, ——, ——, ——, M. Baldwin, and N. M. Junker, 1996: Changes to the operational “early” Eta analysis/forecast system at the National Centers for Environmental Prediction. Wea. Forecasting,11, 391–413.

  • Schwarzkopf, M. D., and S. B. Fels, 1991: The simplified exchange method revisited: An accurate, rapid method for computation of infrared cooling rates and fluxes. J. Geophys. Res.,96, 9075–9096.

  • Smith, R., and Coauthors, 1997: Local and remote effect of mountains on weather: Research needs and opportunities. Bull. Amer. Meteor. Soc.,78, 877–892.

  • Swanson, R. T., 1995: Evaluation of the mesoscale Eta Model over the western United States. M.S. thesis, Dept. of Meteorology, University of Utah, 113 pp. [Available from University of Utah, Salt Lake City, UT 84112.].

  • Troen, I., and L. Mahrt, 1986: A simple model of the atmospheric boundary layer: Sensitivity to surface evaporation. Bound.-Layer Meteor.,37, 129–148.

  • Waldron, K. M., 1994: Sensitivity of local model prediction to large scale forcing. Ph.D. dissertation, University of Utah, 150 pp. [Available from University of Utah, Salt Lake City, UT 84112.].

  • ——, J. Paegle, and J. D. Horel, 1996: Sensitivity of a spectrally filtered and nudged limited-area model to outer model options. Mon. Wea. Rev.,124, 529–547.

  • Warner, T. T., and N. L. Seaman, 1990: A real-time mesoscale numerical weather-prediction system used for research, teaching, and public service at The Pennsylvania State University. Bull. Amer. Meteor. Soc.,71, 792–805.

  • ——, R. A. Peterson, and R. E. Treadon, 1997: A tutorial on lateral boundary conditions as a basic and potentially serious limitation to regional numerical weather prediction. Bull. Amer. Meteor. Soc.,78, 2599–2617.

  • Williamson, D. L., J. T. Kiehl, V. Ramanathan, R. E. Dickinson, and J. J. Hack, 1987: Description of NCAR Community Climate Model (CCM1). NCAR Tech. Note NCAR/TN-285+STR, 112 pp. [Available from National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307.].

  • WMO, 1995: International Workshop on Limited-Area and Variable Resolution Models. PWPR Rep. Series 7, WMO/TD 699, 386 pp. [Available from World Meteorological Organization, CP 2300, CH-1211, Geneva 2, Switzerland.].

  • Zeng, X., and R. A. Pielke, 1993: Error-growth dynamics and predictability of surface thermally induced atmospheric flow. J. Atmos. Sci.,50, 2817–2844.

  • Zhang, D., and R. A. Anthes, 1982: A high-resolution model of the planetary boundary layer—sensitivity tests and comparisons with SESAME-79 data. J. Appl. Meteor.,21, 1594–1609.

  • Zhao, Q., and F. H. Carr, 1997: A prognostic cloud scheme for operational NWP models. Mon. Wea. Rev.,125, 1931–1953.

  • ——, T. L. Black, and M. E. Baldwin, 1997: Implementation of the cloud prediction scheme in the Eta Model at NCEP. Wea. Forecasting,12, 697–711.

Fig. 1.

Utah LAM topography. Contours at 10, 100, and 250 m, then increments of 250 m to 3250 m. Gray shading contrasts at 250, 1000, 2000, and 3000 m.

Citation: Weather and Forecasting 14, 1; 10.1175/1520-0434(1999)014<0084:STFVOS>2.0.CO;2

Fig. 2.

MM5 topography. Contours at 10, 100, and 250 m, then increments of 250 m to 3000 m. Gray shading contrasts at 250, 1000, 2000, and 3000 m.


Fig. 3.

Gridpoint locations for (a) Eta and NGM, (b) Meso Eta, and (c) MRF models.


Fig. 4.

Rawinsonde locations used for forecast validation.


Fig. 5.

The 700-hPa temperature bias error analysis at 24 h [every 0.2°C; solid (dashed) lines denote positive (negative) values]: (a) ETA, (b) MRF, (c) MM5, (d) NGM, (e) Meso Eta, and (f) Utah LAM.


Fig. 6.

The 700-hPa temperature mse analysis at 24 h (every 0.2°C): (a) ETA, (b) MRF, (c) MM5, (d) NGM, (e) Meso Eta, and (f) Utah LAM.


Fig. 7.

Model mse’s at 0, 12, 24, and 36 h: (a) 700-hPa temperature, (b) 500-hPa geopotential height, (c) 700-hPa relative humidity, and (d) 300-hPa vector wind.


Fig. 8.

Precipitation mse analysis for 24-h period ending at the 36-h forecast during the months of Oct, Nov, and Dec, 1997. Models verified are the MRF, Meso Eta, and MM5.


Fig. 9.

Vertical mse profiles at Salt Lake City for the Eta (thin solid), NGM (thin short-dashed), MRF (dotted), Meso Eta (long-dashed), MM5 (thick dashed), and Utah LAM (thick solid): (a) 24-h temperature, (b) 36-h temperature, (c) 24-h geopotential height, (d) 36-h geopotential height, (e) 24-h relative humidity, (f) 36-h relative humidity, (g) 24-h zonal wind, and (h) 36-h zonal wind.


Fig. 10.

The 700-hPa temperature error time series for Feb 1996 at Salt Lake City, Utah: (a) Eta, (b) NGM, (c) MRF, (d) Meso Eta, (e) MM5, and (f) Utah LAM. Solid, short-dashed, long-dashed, and dotted lines represent 0-, 12-, 24-, and 36-h forecasts, respectively. Times for which a forecast was validated are indicated with diamond, plus, cross, and star symbols.


Fig. 11.

The 500-hPa geopotential height error time series for Feb 1996 at Salt Lake City, Utah: (a) Eta, (b) NGM, (c) MRF, (d) Meso Eta, (e) MM5, and (f) Utah LAM. Solid, short-dashed, long-dashed, and dotted lines represent 0-, 12-, 24-, and 36-h forecasts, respectively. Times for which a forecast was validated are indicated with diamond, plus, cross, and star symbols.


Fig. 12.

The 700-hPa relative humidity error time series for Feb 1996 at Salt Lake City, Utah: (a) Eta, (b) NGM, (c) MRF, (d) Meso Eta, (e) MM5, and (f) Utah LAM. Solid, short-dashed, long-dashed, and dotted lines represent 0-, 12-, 24-, and 36-h forecasts, respectively. Times for which a forecast was validated are indicated with diamond, plus, cross, and star symbols.


Fig. 13.

The 300-hPa zonal wind error time series for Feb 1996 at Salt Lake City, Utah: (a) Eta, (b) NGM, (c) MRF, (d) Meso Eta, (e) MM5, and (f) Utah LAM. Solid, short-dashed, long-dashed, and dotted lines represent 0-, 12-, 24-, and 36-h forecasts, respectively. Times for which a forecast was validated are indicated with diamond, plus, cross, and star symbols.


Table 1.

Bias errors from all rawinsondes at forecast hours (a) 0, (b) 12, (c) 24, and (d) 36. Smallest (largest) bias error magnitudes identified with bold (italic) font. Overall category is the number of times the model had the smallest bias error magnitude.

Table 2.

Mse’s from all rawinsondes at forecast hours (a) 0, (b) 12, (c) 24, and (d) 36. Smallest (largest) mse magnitudes identified with bold (italic) font. Overall category is the number of times the model had the smallest mse magnitude.

Table 3.

36-h mse’s at Salt Lake City. Smallest (largest) mse magnitudes identified with bold (italic) font. Overall category is the number of times the model had the smallest mse magnitude.

Table 4.

Utah LAM mse’s from all rawinsondes at forecast hours (a) 24 and (b) 36. Configurations include Eta, NGM, and MRF initial and boundary conditions as well as Eta and MRF initial and boundary conditions with internal nudging (ETAN and MRFN, respectively). Smallest (largest) mse magnitudes identified with bold (italic) font. Overall category represents the number of times the configuration had the smallest mse magnitude.


1. On 9 February 1998, the Eta changed to 32-km horizontal resolution with 45 vertical layers.

2. The Meso Eta was discontinued by NCEP on 3 June 1998.

3. Changed to T170 with 42 layers for projections out to 84 h on 15 June 1998.

4. El Paso, Tucson, and San Diego were not used for the MM5 validation because they are located outside or near that model’s lateral boundary.
