Automated Precipitation Detection and Typing in Winter: A Two-Year Study

Meteorological Service of Canada, Downsview, Ontario, Canada

Abstract

Precipitation detection and typing estimates from four sensors are evaluated using standard operational meteorological observations as a reference. All four are active remote sensors that radiate into a measurement volume near the instrument and measure some property of the scattered radiation. Three of the sensors operate at optical wavelengths and one at microwave wavelengths.

A new analysis approach for comparing the time series from the observer and the instruments is presented. This approach reduces the mismatch between the sampling intervals and response times of humans and instruments when observing precipitation events. The algorithm postprocesses the one-minute output estimates from the sensors within a time window around the nominal time of the human observation.
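
A minimal sketch of this kind of window postprocessing is given below, assuming a fixed window ending at the observation time, a minimum number of detecting minutes, and a simple plurality vote for precipitation type; the window length, threshold, and voting rule are illustrative assumptions, not the paper's published algorithm.

```python
from datetime import datetime, timedelta

def window_report(sensor_minutes, obs_time, window_min=10, min_detect=2):
    """Postprocess one-minute sensor reports in a window ending at the
    nominal observation time. `sensor_minutes` maps datetime -> reported
    type ('R' rain, 'S' snow, None = no precipitation). The window length
    and minimum number of detecting minutes are illustrative choices."""
    window = [sensor_minutes.get(obs_time - timedelta(minutes=m))
              for m in range(window_min)]
    detections = [t for t in window if t]      # minutes reporting precipitation
    if len(detections) < min_detect:
        return None                            # treat as "no precipitation"
    # Report the most frequently identified type within the window.
    return max(set(detections), key=detections.count)

# Example: compare against an observer report valid at 1200 UTC.
obs_time = datetime(1997, 1, 15, 12, 0)
minutes = {obs_time - timedelta(minutes=m): ('S' if m < 6 else None)
           for m in range(10)}
print(window_report(minutes, obs_time))        # -> 'S'
```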

The paper examines the relationship between the probability of detection and the false alarm ratio for these sensors. To represent the trade-off between these parameters, the Heidke skill score is used as a figure of merit for comparing performance. A statistical method is presented to test the significance of differences in this score between sensors.
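
For reference, the probability of detection (POD), false alarm ratio (FAR), and Heidke skill score (HSS) for a single yes/no category can be computed from a 2 × 2 contingency table as in the sketch below; the cell counts in the example are hypothetical and are not results from this study.

```python
def verification_scores(hits, false_alarms, misses, correct_negatives):
    """POD, FAR, and Heidke skill score for a 2x2 (yes/no) contingency
    table, with the usual cell labels a = hits, b = false alarms,
    c = misses, d = correct negatives."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    pod = a / (a + c)                    # probability of detection
    far = b / (a + b)                    # false alarm ratio
    hss = 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, far, hss

# Hypothetical counts for one sensor over a season of observations:
pod, far, hss = verification_scores(hits=120, false_alarms=30,
                                    misses=40, correct_negatives=810)
print(f"POD={pod:.2f}  FAR={far:.2f}  HSS={hss:.2f}")
```

A higher POD can usually be bought at the cost of a higher FAR (and vice versa), which is why a single summary score such as the HSS is used here to compare sensors.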

Each of the sensors demonstrated skill in detecting and typing the precipitation. The window analysis produced higher scores than simultaneous comparisons, a difference attributed to the analysis technique, which was designed to approximate the observing instructions for the standard observations. The data processing algorithm that gave the highest Heidke skill score for each sensor yielded identification scores of 79% for the microwave sensor but only 39%–40% for the three optical sensors when rain was reported by the observer. The identification score in snow was 63% for the microwave sensor and 53%–71% for the optical sensors. Use of a multiparameter algorithm with the microwave sensor improved the identification of both snow and drizzle relative to the report from the sensor alone.

Corresponding author address: B. E. Sheppard, Meteorological Service of Canada, 4905 Dufferin St., Downsview, ON M3H 5T4, Canada.

Email: brian.sheppard@ec.gc.ca
