Search Results

You are looking at 1 - 3 of 3 items for

  • Author or Editor: L. Robin Brody
Patrick A. Harr, Ted L. Tsui, and L. Robin Brody

Abstract

Many numerical model verification schemes are handicapped by their inability to separate systematic from non-systematic errors. In this study, for a specific synoptic event, a statistical method is described that determines the minimum number of cases which must be averaged to represent numerical forecast errors that are truly systematic, rather than smoothed fields of rapidly varying non-systematic errors.

Error patterns derived from forecasts and observations stored at Fleet Numerical Oceanography Center are used to compare a systematic error pattern, defined by the total number of available cases, with subset error patterns to determine the minimum number of cases needed to filter out the unwanted non-systematic error components. The analysis indicates that a minimum of 8 cases must be averaged to adequately identify systematic errors in a 24 h forecast of a Shanghai Low; a minimum of 5 cases is needed for a 72 h forecast of the same event. Error patterns are identified by contours of the Student's t statistic calculated at each grid point. This contour pattern objectively determines the significance of the forecast errors and is shown to be a very useful method of portraying systematic forecast errors.
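The gridpoint t-statistic idea in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not FNOC's code; the function name and the (cases, ny, nx) array layout are assumptions:

```python
import numpy as np

def gridpoint_t_statistic(errors):
    """One-sample Student's t statistic at each grid point.

    errors: array of shape (n_cases, ny, nx) of forecast-minus-verification
    errors for n_cases forecasts of the same synoptic event.
    Returns a (ny, nx) field of t values; grid points with large |t| have a
    mean error that stands out from case-to-case noise, i.e. a candidate
    systematic error.
    """
    errors = np.asarray(errors, dtype=float)
    n = errors.shape[0]
    mean = errors.mean(axis=0)            # mean error per grid point
    std = errors.std(axis=0, ddof=1)      # sample standard deviation
    return mean / (std / np.sqrt(n))      # t = mean / standard error
```

Contouring the resulting field (and comparing subsets of cases against the full set) is then a matter of plotting, e.g. with `matplotlib.pyplot.contour`.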

Full access
Paul M. Tag, Richard L. Bankert, and L. Robin Brody

Abstract

Using imagery from NOAA’s Advanced Very High Resolution Radiometer (AVHRR) orbiting sensor, one of the authors (RLB) earlier developed a probabilistic neural network cloud classifier valid over the world’s maritime regions. Since then, the authors have created a database of nearly 8000 16 × 16 pixel cloud samples (from 13 Northern Hemispheric land regions) independently classified by three experts. From these samples, 1605 were of sufficient quality to represent 11 conventional cloud types (including clear). This database serves as the training and testing samples for developing a classifier valid over land. Approximately 200 features, calculated from a visible and an infrared channel, form the basis for the computer vision analysis. Using a 1-nearest-neighbor classifier, meshed with a feature selection method using backward sequential selection, the authors select the fewest features that maximize classification accuracy. In a leave-one-out test, overall classification accuracies range from 86% to 78% for the water and land classifiers, with accuracies at 88% or greater for general height-dependent groupings. Details of the databases, feature selection method, and classifiers, as well as example simulations, are presented.
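The combination described above — a 1-nearest-neighbor classifier scored by leave-one-out accuracy, driving backward sequential feature elimination — can be sketched as follows. This is a generic illustration of the technique under simple assumptions (Euclidean distance, greedy removal of the least-useful feature each round), not the authors' implementation:

```python
import numpy as np

def loo_1nn_accuracy(X, y, features):
    """Leave-one-out accuracy of a 1-nearest-neighbor classifier
    restricted to the given feature subset (Euclidean distance)."""
    Xs = X[:, features]
    correct = 0
    for i in range(len(y)):
        d = np.linalg.norm(Xs - Xs[i], axis=1)
        d[i] = np.inf                       # leave sample i out
        correct += y[d.argmin()] == y[i]    # predict label of nearest neighbor
    return correct / len(y)

def backward_selection(X, y):
    """Backward sequential selection: starting from all features, greedily
    remove the feature whose removal best preserves (or improves) the
    leave-one-out 1-NN accuracy; stop when any removal would hurt."""
    feats = list(range(X.shape[1]))
    best = loo_1nn_accuracy(X, y, feats)
    while len(feats) > 1:
        trials = [(loo_1nn_accuracy(X, y, [g for g in feats if g != f]), f)
                  for f in feats]
        acc, f = max(trials)
        if acc < best:
            break                           # every removal degrades accuracy
        best = acc
        feats.remove(f)
    return feats, best
```

With ~200 candidate features and ~1600 samples, the LOO scoring inside the greedy loop is the expensive step, which is presumably why the fewest features maximizing accuracy are worth finding once offline.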

Full access
William M. Clune, Patrick A. Harr, and L. Robin Brody

Abstract

The Quality Control (QC) Division of the U.S. Navy's Fleet Numerical Oceanography Center (FNOC) is responsible for the quality control of meteorological and oceanographic analyses and forecasts issued to operational users, and for the verification of FNOC numerical model products.

The FNOC QC Division ensures the quality and consistency of data to be included in the meteorological and oceanographic analyses, adding artificial data (“bogus technique”) when needed in sparse areas or in cases of significant discrepancies. Bogus data from various sources have a direct effect on the optimum interpolation analyses for the global forecast model and are used to modify the marine wind field, the spectral wave model, the upper-level winds for the Optimum Path Aircraft Routing System, and tropical cyclone warnings. Bogus sea surface temperature data are used to enhance the FNOC ocean thermal structure analysis.

The FNOC QC Division performs model verifications on a daily, monthly, and seasonal basis, providing a statistical summary of the performance of the meteorological and oceanographic models and identifying their strengths and weaknesses.
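The routine verification statistics mentioned above typically reduce to per-field summaries such as mean error (bias) and root-mean-square error against the verifying analysis. As a rough illustration of that idea (not FNOC's actual scheme; the function name is made up), a single-field summary might look like:

```python
import numpy as np

def verification_summary(forecast, verifying_analysis):
    """Bias and RMSE of a forecast field against its verifying analysis.

    Both inputs are arrays of the same shape (e.g. a gridded field);
    daily values of these scores can then be accumulated into the
    monthly and seasonal summaries.
    """
    err = np.asarray(forecast, float) - np.asarray(verifying_analysis, float)
    return {"bias": err.mean(), "rmse": np.sqrt((err ** 2).mean())}
```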

Full access