To ensure quality data from a meteorological observing network, a well-designed quality control system is vital. Automated quality assurance (QA) software developed by the Oklahoma Mesonetwork (Mesonet) provides an efficient means to sift through over 500 000 observations ingested daily from the Mesonet and from a Micronet sponsored by the Agricultural Research Service of the United States Department of Agriculture (USDA). However, some of nature's most interesting meteorological phenomena produce data that fail many automated QA tests. This means perfectly good observations are flagged as erroneous.
Cold air pooling, “inversion poking,” mesohighs, mesolows, heat bursts, variations in snowfall and snow cover, and microclimatic effects produced by variations in vegetation are meteorological phenomena that pose a problem for the Mesonet's automated QA tests. Although the QA software is engineered so that most observations of real meteorological phenomena pass its tests, it must remain stringent enough to catch malfunctioning sensors; as a result, erroneous flags are often placed on data during extreme events.
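The tension can be illustrated with a simple step test, one common class of automated QA check that flags unrealistically large changes between consecutive observations. The sketch below is illustrative only: the 10°C-per-interval threshold and the sample values are hypothetical, not the Mesonet's actual QA parameters. A genuine heat burst, in which the temperature can jump more than 10°C in minutes, trips the same check that catches a faulty thermistor.

```python
def step_test(temps_c, max_step=10.0):
    """Flag observations whose change from the previous reading
    exceeds max_step degrees C (hypothetical threshold)."""
    flags = [False]  # first observation has no predecessor to compare
    for prev, curr in zip(temps_c, temps_c[1:]):
        flags.append(abs(curr - prev) > max_step)
    return flags

# Hypothetical 5-min temperatures during a nocturnal heat burst:
# a real, sharp warming is flagged exactly like a sensor spike.
heat_burst = [24.0, 24.2, 24.1, 36.5, 35.8]
print(step_test(heat_burst))  # → [False, False, False, True, False]
```

The flagged fourth value is a correct observation of a real phenomenon, yet an automated test alone cannot distinguish it from instrument failure, which motivates the manual review described below.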
This manuscript describes how the Mesonet's automated QA tests responded to data captured during microscale meteorological events and how those data came to be flagged as erroneous. The Mesonet's operational plan is to catalog these extreme events in a database so that QA flags can be changed manually by expert eyes.
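One way to picture such a catalog-and-override workflow is sketched below. The station identifier, event window, and flag values are all hypothetical placeholders, not the Mesonet's actual database schema: an observation whose station and timestamp fall inside a cataloged extreme event has its automated flag downgraded by the expert review.

```python
from datetime import datetime

# Hypothetical catalog of expert-verified extreme events:
# (station ID, event start, event end, phenomenon).
EXTREME_EVENTS = [
    ("STN1", datetime(1999, 7, 12, 3, 0), datetime(1999, 7, 12, 5, 0),
     "heat burst"),
]

def review_flag(station, obs_time, auto_flag):
    """Return the final QA flag: the automated flag stands unless the
    observation falls within a cataloged extreme event, in which case
    an expert override marks the datum as good."""
    for evt_station, start, end, _phenomenon in EXTREME_EVENTS:
        if station == evt_station and start <= obs_time <= end:
            return "good"  # verified real phenomenon, not a sensor fault
    return auto_flag

# An observation inside the cataloged heat burst keeps its data:
print(review_flag("STN1", datetime(1999, 7, 12, 4, 0), "suspect"))
# → good
```

The design choice here is that the automated tests stay stringent and the override lives in a separate, auditable layer, so relaxing a flag for one verified event never loosens the tests network-wide.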