An Examination of the Uncertainty in Interpolated Winds and Its Effect on the Validation and Intercomparison of Forecast Models

  • 1 Department of Geography, University of Oklahoma, Norman, Oklahoma
  • 2 Environmental Verification and Analysis Center, University of Oklahoma, Norman, Oklahoma
  • 3 Aviation Weather Center, National Weather Service, Kansas City, Missouri
  • 4 Battlefield Environment Division, Army Research Laboratory, White Sands, New Mexico

Abstract

Meteorological models need to be compared with long-term, routinely collected meteorological data. Whenever numerical forecast models are validated and compared, verification winds are normally interpolated to individual model grid points. To be statistically significant, differences between model and verification data must exceed the uncertainty in the verification winds due to instrument error, sampling, and interpolation. This paper describes an approach for examining the uncertainty of interpolated boundary layer winds and illustrates its practical effects on model validation and intercomparison efforts. This effort is part of a joint model validation project undertaken by the Environmental Verification and Analysis Center at the University of Oklahoma (http://www.evac.ou.edu) and the Battlefield Environment Directorate of the Army Research Laboratory. The main result of this study is to show that it is crucial to recognize the errors inherent in gridding verification winds when conducting model validation and intercomparison work. Defensible model intercomparison results may depend on proper scheduling of model tests with regard to the seasonal wind climatology and on choosing instrument networks and variogram functions capable of keeping the errors due to sampling and imperfect modeling adequately small. Thus, it is important to quantify verification wind uncertainty when stating forecast errors or differences in the accuracy of forecast models.

Corresponding author address: Scott Greene, Department of Geography, University of Oklahoma, 100 E. Boyd St., Norman, OK 73019. Email: jgreene@ou.edu
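
The abstract refers to interpolating verification winds to model grid points and to choosing variogram functions that keep the interpolation error acceptably small. The Python sketch below is a minimal illustration of that idea, not the authors' procedure: it applies ordinary kriging with an assumed spherical variogram to interpolate a single wind component to one grid point and reports the kriging variance as one measure of the interpolation uncertainty. The station layout, wind values, and variogram parameters are hypothetical.

```python
# Minimal, hypothetical sketch: ordinary kriging of one wind component to a
# single grid point, returning both the estimate and the kriging variance
# (a measure of interpolation uncertainty). Not the paper's actual code.
import numpy as np


def spherical_variogram(h, nugget=0.2, sill=2.0, range_km=80.0):
    """Spherical variogram gamma(h); parameter values are purely illustrative."""
    h = np.asarray(h, dtype=float)
    g = np.where(
        h < range_km,
        nugget + (sill - nugget) * (1.5 * h / range_km - 0.5 * (h / range_km) ** 3),
        sill,
    )
    # By convention gamma(0) = 0; the nugget is a discontinuity at the origin.
    return np.where(h == 0.0, 0.0, g)


def ordinary_kriging(xy_sta, values, xy_grid):
    """Return (estimate, kriging variance) for one wind component at xy_grid."""
    n = len(values)
    d_ss = np.linalg.norm(xy_sta[:, None, :] - xy_sta[None, :, :], axis=-1)
    d_sg = np.linalg.norm(xy_sta - xy_grid, axis=-1)

    # Ordinary-kriging system: [Gamma 1; 1' 0][w; mu] = [gamma0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_variogram(d_ss)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_variogram(d_sg)

    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]

    estimate = w @ values
    variance = w @ b[:n] + mu  # standard ordinary-kriging variance
    return estimate, variance


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical station coordinates (km) and u-wind observations (m/s).
    xy_sta = rng.uniform(0.0, 200.0, size=(25, 2))
    u_obs = 5.0 + rng.normal(0.0, 1.5, size=25)

    u_hat, var = ordinary_kriging(xy_sta, u_obs, np.array([100.0, 100.0]))
    print(f"interpolated u = {u_hat:.2f} m/s, kriging std dev = {var ** 0.5:.2f} m/s")
```

In practice the variogram would be fitted to observed winds (for example, from a mesonet), and cross-validation against withheld stations offers one check on whether the kriging variance reflects the actual interpolation errors.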
