Search Results

Showing 1–10 of 14 items for Author or Editor: James E. Peak
James E. Peak and Russell L. Elsberry

Abstract

Prediction of tropical cyclone motion in terms of cross-track (CT) and along-track (AT) components is proposed as an alternative to geographic (zonal and meridional) components. Since the CT and AT components are defined relative to an extrapolated track based on the current and −12 h warning positions, they are representative of the important turning motion and apparent speed changes along the track. A discriminant analysis approach is used to determine which of the persistence-type predictors and empirical orthogonal functions of the geopotential fields are most relevant. Classification functions are derived to predict the future CT and AT tercile group. The scheme correctly selects 45% of the CT and 50% of the AT classifications, versus the 33% expected by random chance.
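The CT/AT decomposition described above is geometric: the past 12-h motion vector defines the along-track direction, and its 90° rotation defines the cross-track direction. A minimal sketch, assuming positions on a local tangent plane in km (the paper works with warning positions on the sphere; the function name and planar coordinates here are illustrative only):

```python
import numpy as np

def ct_at_components(pos_m12, pos_0, pos_fcst):
    """Decompose a forecast displacement into cross-track (CT) and
    along-track (AT) components relative to the track extrapolated
    from the -12 h and current positions.  Positions are (x, y) in km
    on a local tangent plane (an illustrative simplification)."""
    motion = np.asarray(pos_0) - np.asarray(pos_m12)        # past 12-h motion
    unit_along = motion / np.linalg.norm(motion)            # along-track unit vector
    unit_cross = np.array([-unit_along[1], unit_along[0]])  # 90 deg left of track
    disp = np.asarray(pos_fcst) - np.asarray(pos_0)         # forecast displacement
    at = float(disp @ unit_along)   # apparent speed change along the track
    ct = float(disp @ unit_cross)   # turning motion across the track
    return ct, at

# Storm moving due east (240 km in 12 h); forecast 300 km east, 50 km north.
ct, at = ct_at_components((0.0, 0.0), (240.0, 0.0), (540.0, 50.0))
print(ct, at)  # 50.0 300.0 -> a left turn plus along-track acceleration
```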

Based on the results of the discriminant analysis, the sample of cases is stratified into five subgroups in terms of the past 12 h storm heading and speed. Separate regression equations are derived for the subgroups, which are taken to represent different environmental conditions. Those storms moving to the northwest are best predicted by the scheme with mean 72 h forecast errors of 490 and 506 km for the slow- and fast-moving categories, respectively. The weighted mean of the subgroup 72 h track errors is 566 km, which is smaller than the long-term mean JTWC error of 610 km. The ability to predict storms which deviate from their previous course while maintaining this level of forecast skill is a major advantage of these schemes.

Full access
James E. Peak and Russell L. Elsberry

Abstract

A number of tropical cyclone track forecast aids are available to the forecasters at the Joint Typhoon Warning Center (JTWC) at Guam. These aids typically provide conflicting guidance and no single aid provides consistently superior guidance in every situation.

The basic assumption in this study is that synoptic factors and storm-related parameters can be used to predict the performance of each objective aid under various situations. The algorithm of Breiman et al. is used to derive objectively a classification tree to select which of eight objective aids has the lowest 72-h forecast error. The path by which each case traverses the classification tree consists of a series of branches or “decisions.” These branches, which ultimately result in the selection of an objective aid to be utilized in each case, may be physically interpreted in most cases. The branches of the classification tree in this study are highly dependent upon empirical orthogonal function coefficient values of the environmental wind fields, especially those at 700 mb, which are used to represent the synoptic forcing. The tree correctly classifies 44% (23%) of the dependent (independent) sample cases compared to 13.5% by random chance. The mean 72-h forecast error is 537 km for the dependent sample and 592 km for the independent sample, whereas the corresponding CLIPER errors are 703 and 635 km, respectively. The JTWC errors for a nearly homogeneous sample are 721 and 654 km, respectively. Discriminant analysis is presented as an alternate classification method. The discriminant analysis functions correctly classify 37% (18%) of the dependent (independent) sample cases, with forecast errors of 559 km and 636 km for the dependent and independent samples, respectively.
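The kind of tree produced by the Breiman et al. algorithm can be pictured as a cascade of threshold tests on synoptic predictors whose leaves name an objective aid. A toy sketch with invented thresholds, predictor names, and aid labels (the actual tree branches largely on 700-mb EOF coefficients of the environmental wind fields):

```python
def select_aid(eof1_700mb, past_speed_kt):
    """Toy classification tree: each branch tests a synoptic or
    storm-related predictor, and each leaf names the objective aid
    expected to have the lowest 72-h forecast error.  All thresholds
    and aid assignments here are hypothetical."""
    if eof1_700mb < -0.5:               # strong midlatitude wind signature
        return "NTCM" if past_speed_kt > 10 else "OTCM"
    else:                               # weak synoptic forcing
        return "CLIPER" if past_speed_kt <= 10 else "TYAN78"

print(select_aid(-1.2, 15))  # NTCM
print(select_aid(0.3, 8))    # CLIPER
```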

Monte Carlo simulations of the process for selecting the aid with the minimum 72-h forecast error indicate that the selection process includes a large random contribution. Inclusion of more objective aids leads to greater reductions in forecast errors, but this does not provide an appropriate estimate of the potential accuracy of the optimal aid selection process. The classification tree provides an objective method by which conflicting guidance may be better utilized.

Full access
James E. Peak and Russell L. Elsberry

Abstract

The Navy Nested Tropical Cyclone Model (NTCM) is evaluated for performance on Southern Hemisphere storms near Australia. East of 135°E the model exhibits mean forecast errors of 246, 467, and 694 km at 24, 48, and 72 h, respectively. West of 135°E the mean forecast errors are 214, 511, and 745 km at 24, 48, and 72 h. The NTCM tends to have a poleward directional bias in the predicted tracks. This bias may be attributed to the lack of current data, which causes the analysis scheme to revert to climatological values. Storm tracks near the Australian coast also were not forecast well by the NTCM, especially in the western cases, presumably due to lack of consideration of land/sea effects.

In a homogeneous sample comparison with an operational analog prediction technique (TYAN78), and with persistence of the past 12-h motion, the NTCM performed worse in terms of forecast error at early forecast times and better at late forecast times east of 135°E. West of 135°E, the model performance was generally poorer than that of the other techniques at all forecast times.

The regression post-processing technique of Peak and Elsberry (1983), when applied to the NTCM forecasts, results in a reduction of the eastern region sample forecast errors by as much as 150 km at 72 h. The western region sample forecast improvement is even greater, such that the regression-modified NTCM forecasts are superior to both TYAN78 and persistence in both test regions.

Full access
James E. Peak and Paul M. Tag

The U.S. Navy has plans to develop an automated system to analyze satellite imagery aboard its ships at sea. Lack of time for training, in combination with frequent personnel rotations, precludes the building of extensive imagery interpretation expertise by shipboard personnel. A preliminary design starts from pixel data from which clouds are classified. An image segmentation is performed to assemble and isolate cloud groups, which are then identified (e.g., as a cold front) using neural networks. A combination of neural networks and expert systems is subsequently used to transform key information about the identified cloud patterns into inputs for an expert system that provides sensible weather information, the ultimate objective of the imagery analysis.

Full access
James E. Peak and Paul M. Tag

Abstract

An Expert system for Shipboard Obscuration Prediction (AESOP), an artificial intelligence approach to forecasting maritime visibility obscurations, has been designed, developed, and tested. The problem-solving model for AESOP, running within an IBM-PC environment, is rule-based, uses backward chaining, and has meta-rules; a user, in a consultation session, answers questions about certain atmospheric parameters. The current version, AESOP 2.0, has 232 rules and has been designed in terms of nowcasts (0–1 h) and forecasts (1–6 h). An extensive explanation feature allows the user to understand the reasoning process behind a particular forecast. AESOP has been evaluated against 83 test cases, in which clear, hazy, or foggy conditions are predicted. The overall performance of AESOP is 75% correct. This value indicates considerable forecast skill when compared to 47% for persistence and 41% for random chance. When the distinction between clear and haze is ignored, the expert system correctly forecasts 84% of the “Fog/No fog” situations.
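The backward chaining that drives AESOP can be illustrated with a few toy rules: to establish a goal such as fog, the engine recursively tries to prove the antecedents of any rule that concludes it, asking for (or looking up) facts as needed. The rules and facts below are invented for illustration and are not part of the AESOP 2.0 rule base:

```python
# Toy rule base: each goal maps to a list of antecedent sets, any one
# of which suffices to conclude the goal.  (Illustrative rules only.)
rules = {
    "fog": [["saturated_air", "light_wind"]],
    "saturated_air": [["dewpoint_near_airtemp"]],
}
# In AESOP these answers come from the user during a consultation session.
facts = {"dewpoint_near_airtemp": True, "light_wind": True}

def prove(goal):
    """Backward chaining: a goal holds if it is a known fact, or if all
    antecedents of some rule concluding that goal can themselves be proved."""
    if goal in facts:
        return facts[goal]
    return any(all(prove(a) for a in ants) for ants in rules.get(goal, []))

print(prove("fog"))  # True: saturated_air is derived, light_wind is a fact
```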

Full access
Paul M. Tag and James E. Peak

Abstract

In recent years, the field of artificial intelligence has contributed significantly to the science of meteorology, most notably in the now familiar form of expert systems. Expert systems have focused on rules or heuristics by establishing, in computer code, the reasoning process of a weather forecaster predicting, for example, thunderstorms or fog. In addition to the years of effort that go into developing such a knowledge base is the time-consuming task of extracting that knowledge and experience from experts. In this paper, the induction of rules directly from meteorological data is explored, a process called machine learning. A commercial machine learning program called C4.5 is applied to a meteorological problem, forecasting maritime fog, for which a reliable expert system has been previously developed. Two datasets are used: 1) weather ship observations originally used for testing and evaluating the expert system, and 2) buoy measurements taken off the coast of California. For both datasets, the rules produced by C4.5 are reasonable and make physical sense, thus demonstrating that an objective induction approach can reveal physical processes directly from data. For the ship database, the machine-generated rules are not as accurate as those from the expert system but are still significantly better than persistence forecasts. For the buoy data, the forecast accuracies are very high, but only slightly superior to persistence. The results indicate that the machine learning approach is a viable tool for developing meteorological expertise, but only when applied to reliable data with sufficient cases of known outcome. In those instances when such databases are available, the use of machine learning can provide useful insight that otherwise might take considerable human analysis to produce.
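The core of C4.5-style rule induction is choosing, at each node, the split that maximizes information gain over the training cases. A self-contained sketch of that criterion on an invented fog/no-fog dataset (the feature names, values, and threshold are hypothetical, and C4.5 itself adds refinements such as gain ratio and pruning):

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def info_gain(rows, labels, idx, thresh):
    """Information gain of splitting numeric feature idx at thresh --
    the criterion a C4.5-style learner uses to choose its rules."""
    left = [l for r, l in zip(rows, labels) if r[idx] <= thresh]
    right = [l for r, l in zip(rows, labels) if r[idx] > thresh]
    if not left or not right:
        return 0.0
    n = len(labels)
    return (entropy(labels)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))

# Toy cases: (air-sea temperature difference in deg C, relative humidity in %).
rows = [(-1.0, 98), (-0.5, 97), (1.5, 70), (2.0, 65)]
labels = ["fog", "fog", "clear", "clear"]
print(info_gain(rows, labels, 1, 90))  # 1.0: the humidity split is perfect
```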

Full access
James E. Peak and Paul M. Tag

Abstract

A significant task in the automated interpretation of cloud features on satellite imagery is the segmentation of the image into separate cloud features to be identified. A new technique, hierarchical threshold segmentation (HTS), is presented. In HTS, region boundaries are defined over a range of gray-shade thresholds. The hierarchy of the spatial relationships between collocated regions from different thresholds is represented in tree form. This tree is pruned, using a neural network, such that the regions of appropriate sizes and shapes are isolated. These various regions from the pruned tree are then collected to form the final segmentation of the entire image.
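The effect of scanning a range of gray-shade thresholds can be seen even in one dimension: a region found at a low threshold splits into child regions at higher thresholds, and that parent/child structure is exactly what the HTS tree records. A minimal 1-D sketch (the actual method operates on 2-D imagery, with a neural network pruning the tree; this brightness profile is invented):

```python
import numpy as np

def regions_above(profile, thresh):
    """Contiguous runs of pixels brighter than thresh, returned as
    (start, end) index pairs -- a 1-D stand-in for cloud regions."""
    mask = np.asarray(profile) > thresh
    starts = np.flatnonzero(mask & ~np.r_[False, mask[:-1]])  # False -> True edges
    ends = np.flatnonzero(mask & ~np.r_[mask[1:], False])     # True -> False edges
    return list(zip(starts.tolist(), ends.tolist()))

# One bright cloud mass containing two brighter cores.
profile = [10, 80, 90, 80, 60, 80, 95, 80, 10]
print(regions_above(profile, 50))  # [(1, 7)]: one parent region
print(regions_above(profile, 85))  # [(2, 2), (6, 6)]: two child regions
```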

In segmentation testing using Geostationary Operational Environmental Satellite data, HTS selected 94% of 101 dependent sample pruning points correctly, and 93% of 105 independent sample pruning points. Using Advanced Very High Resolution Radiometer data, HTS correctly selected 90% of both the 235-case dependent sample and the 253-case independent sample pruning points.

The strength of this approach is that artificial intelligence, that is, reasoning about the sizes and shapes of the emergent regions, is applied during the segmentation process. The neural network component can be trained to respond more favorably to shapes of interest to a particular analysis problem.

Full access
Russell L. Elsberry and James E. Peak

Abstract

Official and objective forecast aids for tropical cyclone tracks in the western North Pacific during 1979–83 are evaluated in terms of cross-track (CT) and along-track (AT) components relative to an extrapolated track based on warning positions (a persistence forecast). The focus of the study is the 72-h forecasts, which are essential for timely evasion planning. The CT and AT components are divided into three classes (terciles). A scoring system that assesses penalty points for forecasts in the incorrect tercile is used to rank the official and objective aids. The One-way Tropical Cyclone Model appears to be most skillful at 72 h based on the tercile score. When a finer resolution (five-class) distribution is tested using a nonlinear penalty point assessment, the Nested Tropical Cyclone Model is shown to provide the most skillful path forecasts at 72 h. Some of the less skillful forecasts are shown to have systematic biases within the quintile distributions. The past 12-h speed and direction are used to define five categories that approximately represent different synoptic forcing. Differences in forecast performance of the aids are indicated across the five categories. This study can provide guidance as to the relative merits of the different objective aids, and where improvements in the official forecast might be made.
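The tercile penalty scoring can be sketched directly: each forecast CT (or AT) component is binned against the observed distribution's tercile boundaries, and penalty points accumulate with the distance between the forecast and observed class. The boundaries and values below are illustrative, not those of the 1979–83 sample:

```python
def tercile(value, boundaries):
    """Class 0, 1, or 2 given the two tercile boundaries of the CT
    (or AT) distribution.  Boundaries here are hypothetical."""
    lo, hi = boundaries
    return 0 if value < lo else (1 if value <= hi else 2)

def penalty(forecast_ct, observed_ct, boundaries=(-50.0, 50.0)):
    """One penalty point per tercile separating the forecast class
    from the observed class; a skillful aid accumulates few points."""
    return abs(tercile(forecast_ct, boundaries)
               - tercile(observed_ct, boundaries))

print(penalty(-120.0, -80.0))  # 0: forecast and observed in the same tercile
print(penalty(120.0, -80.0))   # 2: forecast two terciles from observed
```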

Full access
Lester E. Carr III, Russell L. Elsberry, and James E. Peak

Abstract

The authors have developed error mechanism conceptual models with characteristic track departures and anomalous wind or sea level pressure patterns for dynamical tropical cyclone track predictions primarily occurring in tropical regions or those associated with midlatitude circulation patterns. These conceptual models were based on a retrospective study in which it was known that the 72-h track error exceeded 300 n mi (555 km). A knowledge-based expert system module named the Systematic Approach Forecast Aid (SAFA) has been developed to assist the forecaster in the information management, visualization, and proactive investigation of the frequently occurring error mechanisms.

A beta test of the SAFA module was carried out for all available track forecasts for the western North Pacific cyclones 19W–30W during 1999. The objective was to determine if the SAFA module could guide the team to apply the conceptual models in a real-time scenario to detect dynamical model tracks likely to have 72-h errors greater than 300 n mi. The metric was whether a selective consensus formed from the remaining model tracks had smaller errors than the nonselective consensus of all dynamical model tracks.

This beta test demonstrated that the prototype SAFA module with the large-error mechanism conceptual models could be effectively applied in real-time conditions. The beta-test team recognized 14 cases in which elimination of one or more dynamical model forecasts before calculating the consensus track resulted in a 10% improvement over the nonselective consensus track. A number of lessons learned from the beta test are described, including that rejecting a specific model track is not normally successful if the tracks are tightly clustered, that the availability of the model-predicted fields is critical to the error detection, and that at least three of the five model tracks need to be available.
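The selective-consensus idea is easy to sketch: drop any model track flagged by an error-mechanism conceptual model, then average the remainder. The model names, forecast positions, verifying position, and planar distance metric below are all invented for illustration (operationally the comparison uses great-circle distances):

```python
import numpy as np

# Hypothetical 72-h forecast positions (x, y in km) from five dynamical models.
tracks = {
    "model_a": (600.0, 120.0),
    "model_b": (620.0, 100.0),
    "model_c": (590.0, 140.0),
    "model_d": (980.0, -300.0),  # flagged by an error-mechanism conceptual model
    "model_e": (610.0, 110.0),
}

def consensus(tracks, exclude=()):
    """Mean forecast position; a 'selective' consensus drops flagged members."""
    pts = [p for name, p in tracks.items() if name not in exclude]
    return tuple(np.mean(pts, axis=0))

verifying = (605.0, 115.0)  # hypothetical verifying position

def error(p):
    """Planar distance to the verifying position (illustrative metric)."""
    return float(np.hypot(p[0] - verifying[0], p[1] - verifying[1]))

nonselective = error(consensus(tracks))
selective = error(consensus(tracks, exclude={"model_d"}))
print(selective < nonselective)  # True: dropping the flagged track helps
```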

Full access
Donald S. Frankel, James Stark Draper, James E. Peak, and J. Carr McLeod

The Artificial Intelligence (AI) Needs Analysis Workshop provided a forum for representatives from government agencies and private industry to explore ways to use AI to solve various problems. Past accomplishments using AI were presented, and areas where AI might be used in future efforts were identified. Each agency suggested their particular problem areas where AI might be expected to solve difficult problems. Reports from three working groups suggested future research areas in which AI has the potential for success.

Full access