Subjective Verification of Numerical Models as a Component of a Broader Interaction between Research and Operations

  • 1 Cooperative Institute for Mesoscale Meteorological Studies, University of Oklahoma and NOAA/National Severe Storms Laboratory, Norman, Oklahoma
  • 2 Cooperative Institute for Mesoscale Meteorological Studies, University of Oklahoma and NOAA/National Severe Storms Laboratory, and NOAA/NWS/Storm Prediction Center, Norman, Oklahoma
  • 3 NOAA/NWS/Storm Prediction Center, Norman, Oklahoma
  • 4 Cooperative Institute for Research in Environmental Sciences, University of Colorado and NOAA/Forecast Systems Laboratory, Boulder, Colorado
  • 5 NOAA/NWS/Storm Prediction Center, Norman, Oklahoma

Abstract

Systematic subjective verification of precipitation forecasts from two numerical models is presented and discussed. The subjective verification effort was carried out as part of the 2001 Spring Program, a seven-week collaborative experiment conducted at the NOAA/National Severe Storms Laboratory (NSSL) and the NWS/Storm Prediction Center, with participation from the NCEP/Environmental Modeling Center, the NOAA/Forecast Systems Laboratory, the Norman, Oklahoma, National Weather Service Forecast Office, and Iowa State University. This paper focuses on a comparison of the operational Eta Model and an experimental version of this model run at NSSL; results are limited to precipitation forecasts, although other models and model output fields were verified and evaluated during the program.

By comparing forecaster confidence in model solutions to next-day assessments of model performance, this study yields unique information about the utility of models for human forecasters. It is shown that, when averaged over many forecasts, subjective verification ratings of model performance were consistent with preevent confidence levels. In particular, models that earned higher average confidence ratings were also assigned higher average subjective verification scores. However, confidence and verification scores for individual forecasts were very poorly correlated; that is, forecast teams showed little skill in assessing how "good" individual model forecasts would be. Furthermore, the teams were unable to choose reliably which model, or which initialization of the same model, would produce the "best" forecast for a given period.

The subjective verification methodology used in the 2001 Spring Program is presented as a prototype for more refined and focused subjective verification efforts in the future. The results demonstrate that this approach can provide valuable insight into how forecasters use numerical models. It has great potential as a complement to objective verification scores and can have a significant positive impact on model development strategies.
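To make the averaged-versus-individual contrast above concrete, here is a minimal sketch in Python. The data, rating scales, and model labels are hypothetical (they are not the program's actual dataset); the sketch only illustrates how two models' mean confidence and mean verification ratings can agree closely while the per-forecast correlation between the two stays near zero:

```python
# Hedged illustration: hypothetical confidence and next-day subjective
# verification ratings for two model configurations. Verification scores
# share each model's mean confidence but vary independently of the
# preevent ratings, mimicking the qualitative result described above.
import numpy as np

rng = np.random.default_rng(0)
n_forecasts = 35  # assumed number of rated forecast periods

# Hypothetical preevent confidence ratings on a 0-10 scale.
confidence = {
    "operational_eta": rng.normal(6.5, 1.5, n_forecasts).clip(0, 10),
    "experimental_eta": rng.normal(7.0, 1.5, n_forecasts).clip(0, 10),
}
# Next-day verification: same mean as each model's confidence,
# but drawn independently of the individual confidence values.
verification = {
    name: rng.normal(c.mean(), 1.5, n_forecasts).clip(0, 10)
    for name, c in confidence.items()
}

for name in confidence:
    c, v = confidence[name], verification[name]
    r = np.corrcoef(c, v)[0, 1]  # Pearson correlation per forecast
    print(f"{name}: mean confidence {c.mean():.2f}, "
          f"mean verification {v.mean():.2f}, per-forecast r = {r:.2f}")
```

Running the sketch prints nearly matching mean confidence and mean verification ratings for each model alongside a small Pearson correlation, the same pattern the study reports: agreement in the aggregate, little skill forecast by forecast.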

Corresponding author address: Dr. John S. Kain, NSSL, 1313 Halley Circle, Norman, OK 73069. Email: jack.kain@noaa.gov
