We are indebted to Gordana Sindic-Rancic and Chenjie Huang for assistance in data preparation. A portion of the work was funded by NOAA's NextGen Weather Program. This paper is the responsibility of the authors and does not necessarily represent the views of the NWS or any other governmental agency.
Benjamin, S. G., and Coauthors, 2016: A North American hourly assimilation and model forecast cycle: The Rapid Refresh. Mon. Wea. Rev., 144, 1669–1694, doi:10.1175/MWR-D-15-0242.1.
Bocchieri, J. R., and H. R. Glahn, 1972: Use of model output statistics for predicting ceiling height. Mon. Wea. Rev., 100, 869–879, doi:10.1175/1520-0493(1972)100<0869:UOMOSF>2.3.CO;2.
Dallavalle, J. P., M. C. Erickson, and J. C. Maloney III, 2004: Model output statistics (MOS) guidance for short-range projections. Preprints, 20th Conf. on Weather Analysis and Forecasting/16th Conf. on Numerical Weather Prediction, Seattle, WA, Amer. Meteor. Soc., 6.1. [Available online at https://ams.confex.com/ams/84Annual/techprogram/paper_73764.htm.]
Donaldson, R., R. Dyer, and M. Krauss, 1975: An objective evaluator of techniques for predicting severe weather events. Preprints, Ninth Conf. on Severe Local Storms, Norman, OK, Amer. Meteor. Soc., 321–326.
Gerrity, J. P., 1992: A note on Gandin and Murphy’s equitable skill score. Mon. Wea. Rev., 120, 2709–2712, doi:10.1175/1520-0493(1992)120<2709:ANOGAM>2.0.CO;2.
Ghirardelli, J. E., and B. Glahn, 2010: The Meteorological Development Laboratory’s aviation weather prediction system. Wea. Forecasting, 25, 1027–1051, doi:10.1175/2010WAF2222312.1.
Gilbert, K. K., J. P. Craven, T. M. Hamill, D. R. Novak, D. P. Ruth, J. Settelmaier, J. E. Sieveking, and B. Veenhuis Jr., 2016: The national blend of models, version one. Preprints, 23rd Conf. on Probability and Statistics in the Atmospheric Sciences, New Orleans, LA, Amer. Meteor. Soc., 1.3. [Available online at https://ams.confex.com/ams/96Annual/webprogram/Paper285973.html.]
Glahn, B., and D. P. Ruth, 2003: The new digital forecast database of the National Weather Service. Bull. Amer. Meteor. Soc., 84, 195–201, doi:10.1175/BAMS-84-2-195.
Glahn, B., and J. Wiedenfeld, 2006: Insuring temporal consistency in short range statistical weather forecasts. Preprints, 18th Conf. on Probability and Statistics in the Atmospheric Sciences, Atlanta, GA, Amer. Meteor. Soc., 6.3. [Available online at https://ams.confex.com/ams/pdfpapers/103378.pdf.]
Glahn, B., and J.-S. Im, 2015: Objective analysis of visibility and ceiling height observations and forecasts. MDL Office Note 15-2, NWS/Meteorological Development Laboratory, 17 pp. [Available online at https://www.weather.gov/media/mdl/MDL_OfficeNote15-2.pdf.]
Glahn, B., K. Gilbert, R. Cosgrove, D. P. Ruth, and K. Sheets, 2009: The gridding of MOS. Wea. Forecasting, 24, 520–529, doi:10.1175/2008WAF2007080.1.
Glahn, B., R. Yang, and J. Ghirardelli, 2014: Combining LAMP and HRRR visibility forecasts. MDL Office Note 14-2, NWS/Meteorological Development Laboratory, 20 pp.
Glahn, H. R., and D. A. Lowry, 1972: The use of model output statistics (MOS) in objective weather forecasting. J. Appl. Meteor., 11, 1203–1211, doi:10.1175/1520-0450(1972)011<1203:TUOMOS>2.0.CO;2.
Im, J.-S., and B. Glahn, 2012: Objective analysis of hourly 2-m temperature and dewpoint observations at the Meteorological Development Laboratory. Natl. Wea. Dig., 36 (2), 103–114.
Miller, R. G., 1964: Regression estimation of event probabilities. U.S. Weather Bureau Tech. Rep. 1, prepared by The Travelers Research Center, Hartford, CT, 153 pp. [Available online at http://www.dtic.mil/dtic/tr/fulltext/u2/602037.pdf.]
OFCM, 2005: Surface weather observations and reports. Federal Meteorological Handbook, No. 1, Rep. FCM-H1-2005, NOAA/Office of the Federal Coordinator for Meteorological Services and Supporting Research, 104 pp. [Available online at http://www.ofcm.gov/publications/fmh/FMH1/FMH1.pdf.]
Palmer, W. C., and R. A. Allen, 1949: Note on the accuracy of forecasts concerning the rain problem. Weather Bureau Manuscript, 2 pp.
Schaefer, J. T., 1990: The critical success index as an indicator of warning skill. Wea. Forecasting, 5, 570–575, doi:10.1175/1520-0434(1990)005<0570:TCSIAA>2.0.CO;2.
Palmer and Allen suggested the name because the event being forecast and evaluated was thought to be a threat. The TS is the same as the critical success index proposed by Donaldson et al. (1975) and discussed by Schaefer (1990).
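In the usual 2 × 2 contingency-table notation (a = hits, b = false alarms, c = misses, d = correct rejections), the threat score referred to in this footnote is

```latex
\mathrm{TS} = \frac{a}{a + b + c}
```

which is identical to the critical success index; the correct rejections d do not enter the score, which is why it suits rare ("threat") events.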
This work is unpublished. Early in the LAMP project, the developers did much work using various transformations of the visibility and ceiling height observations as predictands. That work was largely unsuccessful; reliable and skillful forecasts of the lowest values could not be made.
Bias for a categorical variable (event) is defined as the number of forecast events divided by the number of observed events.
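In the same 2 × 2 contingency-table notation (a = hits, b = false alarms, c = misses), this definition can be written

```latex
\mathrm{Bias} = \frac{a + b}{a + c}
```

with the numerator counting forecast events and the denominator observed events; values above (below) 1 indicate overforecasting (underforecasting) of the event.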
LAMP forecasts are made for more than 1552 stations in operations, but we used only those stations that had both observations and forecast equations when the development was done several years ago.
While the spot removal has some characteristics of smoothing, it is not smoothing in the usual sense where averages are computed. The integrity of “unusual” values is maintained when the area covered is of sufficient size or a number of unusual values are close together, even though they are not contiguous. No change of value is made unless the elevation difference among the points involved is <100 m, so that variations that may be due to terrain are maintained.
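As one way to picture the procedure, the following is a minimal, hypothetical sketch of categorical spot removal with an elevation guard. The function name, grid representation, 4-point connectivity, and size threshold are all illustrative assumptions and not the MDL implementation; only the ideas from the footnote are retained — small regions are reassigned to a bordering category rather than averaged, and no change is made when the elevation spread among the points involved reaches 100 m.

```python
# Hypothetical sketch of categorical "spot removal" with an elevation guard.
# The thresholds and names here are illustrative assumptions, not MDL code.
from collections import Counter, deque

def remove_spots(grid, elev, min_size=4, max_elev_diff=100.0):
    """Reassign small connected regions (spots) of a categorical grid to the
    most common bordering category, but only when the elevation difference
    among the points involved is under max_elev_diff, so that variations
    that may be due to terrain are maintained."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    seen = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            # Flood-fill the 4-connected region sharing grid[r][c]'s category.
            cat, region, border = grid[r][c], [], []
            q = deque([(r, c)])
            seen[r][c] = True
            while q:
                i, j = q.popleft()
                region.append((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols:
                        if grid[ni][nj] == cat:
                            if not seen[ni][nj]:
                                seen[ni][nj] = True
                                q.append((ni, nj))
                        else:
                            border.append((ni, nj))
            if len(region) >= min_size or not border:
                continue  # large enough to keep, or nothing to merge into
            elevs = [elev[i][j] for i, j in region + border]
            if max(elevs) - min(elevs) >= max_elev_diff:
                continue  # difference likely terrain driven; keep the spot
            # Use the modal bordering category, not an average of values.
            fill = Counter(grid[i][j] for i, j in border).most_common(1)[0][0]
            for i, j in region:
                out[i][j] = fill
    return out
```

Because the fill value is the modal bordering category rather than a mean, "unusual" values are replaced wholesale or kept intact, which is the sense in which the procedure differs from smoothing.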
This postprocessing removes spots as large as 12.5 km across, while the preprocessing removes 7.5-km spots.