Search Results

You are looking at items 11–20 of 36 for

  • Author or Editor: Charles R. Sampson
  • Refine by Access: Content accessible to me
Charles R. Sampson, John A. Knaff, and Edward M. Fukada

Abstract

The Systematic Approach Forecast Aid (SAFA) has been in use at the Joint Typhoon Warning Center (JTWC) since the 2000 western North Pacific season. SAFA is a system designed to identify erroneous 72-h track forecasts through predefined error mechanisms associated with numerical weather prediction models. A metric for the process is a selective consensus, in which model guidance suspected of having a 72-h error greater than 300 n mi (1 n mi = 1.85 km) is eliminated before the remaining model tracks are averaged. The resultant selective consensus should then provide improved forecasts over the nonselective consensus. In the 5 yr since its introduction into JTWC operations, forecasters have been unable to produce a selective consensus that provides consistently improved guidance over the nonselective consensus. Also, the rate at which forecasters exercised the selective consensus option dropped from approximately 45% of all forecasts in 2000 to 3% in 2004.
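As a minimal sketch of the selective consensus step, the snippet below drops the flagged aids and averages the 72-h positions of the rest. The aid names, positions, and the assumption that each aid supplies a single (lat, lon) pair are illustrative, not the operational SAFA implementation.

```python
# Minimal sketch of a selective track consensus: drop flagged aids, average the rest.
# Aid names and positions are illustrative; longitude averaging here is naive
# (no dateline handling).

def selective_consensus(forecasts_72h, flagged_models):
    """Average 72-h (lat, lon) positions, excluding aids suspected of >300 n mi error."""
    kept = {m: pos for m, pos in forecasts_72h.items() if m not in flagged_models}
    if not kept:                        # nothing survives the screening:
        return None                     # fall back to the nonselective consensus
    lats, lons = zip(*kept.values())
    return sum(lats) / len(kept), sum(lons) / len(kept)

forecasts = {"AID_A": (24.1, 131.0), "AID_B": (25.3, 129.7), "AID_C": (23.6, 132.2)}
print(selective_consensus(forecasts, flagged_models={"AID_B"}))
```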

Charles R. Sampson, Paul A. Wittmann, and Hendrik L. Tolman

Abstract

A new algorithm to generate wave heights consistent with tropical cyclone official forecasts from the Joint Typhoon Warning Center (JTWC) has been developed. The process involves generating synthetic observations from the forecast track and the 34-, 50-, and 64-kt wind radii. The JTWC estimate of the radius of maximum winds is used in the algorithm to generate observations for the forecast intensity (wind), and the JTWC-estimated radius of the outermost closed isobar is used to assign observations at the outermost extent of the tropical cyclone circulation. These observations are then interpolated to a high-resolution latitude–longitude grid covering the entire extent of the circulation. Finally, numerical weather prediction (NWP) model fields are obtained for each forecast time, the NWP model forecast tropical cyclone is removed from these fields, and the new JTWC vortex is inserted without blending zones between the vortex and the background. These modified fields are then used as input into a wave model to generate waves consistent with the JTWC forecasts. The algorithm is applied to Typhoon Yagi (2006), in anticipation of which U.S. Navy ships were moved from Tokyo Bay to an area off the southeastern coast of Kyushu. The decision to move (sortie) the ships was based on NWP model-driven long-range wave forecasts that indicated high seas impacting the coast in the vicinity of Tokyo Bay. The sortie decision was made approximately 84 h in advance of the high seas in order to give ships time to steam the approximately 500 n mi to safety. Results from the new algorithm indicate that the high seas would not affect the coast near Tokyo Bay within 84 h. This specific forecast verifies, but altimeter observations show that it does not outperform the NWP model-driven wave analysis and forecasts for this particular case. Overall, the performance of the new algorithm depends on the quality of the JTWC tropical cyclone forecasts, which have generally outperformed those of the NWP model over the last several years.
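A minimal sketch of the synthetic-observation idea: build a symmetric wind profile by interpolating between the radius of maximum winds, the 64-, 50-, and 34-kt wind radii, and the radius of the outermost closed isobar. The piecewise-linear profile shape and the example numbers are assumptions for illustration, not the operational algorithm.

```python
# Minimal sketch: a piecewise-linear symmetric wind profile from forecast intensity,
# the radius of maximum winds, the 64/50/34-kt radii, and the outermost closed isobar.
import numpy as np

def symmetric_wind_profile(r_nmi, vmax_kt, rmw_nmi, r64_nmi, r50_nmi, r34_nmi, router_nmi):
    """Wind speed (kt) at radius r (n mi); breakpoint radii must increase outward."""
    radii  = [0.0, rmw_nmi, r64_nmi, r50_nmi, r34_nmi, router_nmi]
    speeds = [0.0, vmax_kt, 64.0, 50.0, 34.0, 0.0]
    return np.interp(r_nmi, radii, speeds)

r = np.linspace(0.0, 300.0, 7)                       # sample radii (n mi)
print(symmetric_wind_profile(r, vmax_kt=95, rmw_nmi=20,
                             r64_nmi=40, r50_nmi=70, r34_nmi=120, router_nmi=300))
```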

John A. Knaff, Charles R. Sampson, and Mark DeMaria

Abstract

The current version of the Statistical Typhoon Intensity Prediction Scheme (STIPS) used operationally at the Joint Typhoon Warning Center (JTWC) to provide 12-hourly tropical cyclone intensity guidance through day 5 is documented. STIPS is a multiple linear regression model. It was developed under a “perfect prog” assumption within a statistical–dynamical framework, using environmental information from Navy Operational Global Atmospheric Prediction System (NOGAPS) analyses and the JTWC historical best tracks; NOGAPS forecast fields are used in real time. A separate version of the model (decay-STIPS) accounts for the effects of landfall by using an empirical inland decay model. Despite their simplicity, STIPS and decay-STIPS produce skillful intensity forecasts through 4 days, based on a 48-storm verification (July 2003–October 2004). Details of this model’s development and operational performance are presented.
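A minimal sketch of a statistical–dynamical intensity scheme in the multiple-linear-regression spirit of STIPS: regress 24-h intensity change on a few environmental predictors. The predictors, synthetic training data, and fitted coefficients below are placeholders, not the operational predictor set.

```python
# Minimal sketch: multiple linear regression of 24-h intensity change on a few
# environmental predictors, trained here on synthetic (placeholder) data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
sst   = rng.uniform(26.0, 30.0, n)      # sea surface temperature (deg C)
shear = rng.uniform(2.0, 20.0, n)       # deep-layer wind shear (kt)
vmax  = rng.uniform(30.0, 120.0, n)     # current intensity (kt)
dv24  = 1.5 * (sst - 27.0) - 0.8 * shear - 0.05 * vmax + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), sst, shear, vmax])     # design matrix
coef, *_ = np.linalg.lstsq(X, dv24, rcond=None)         # least-squares fit

def predict_dv24(sst, shear, vmax, b=coef):
    """Predicted 24-h intensity change (kt) from the fitted regression."""
    return b[0] + b[1] * sst + b[2] * shear + b[3] * vmax

print(predict_dv24(sst=29.0, shear=8.0, vmax=65.0))
```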

John A. Knaff, Charles R. Sampson, and Kate D. Musgrave

Abstract

This work describes tropical cyclone rapid intensification forecast aids designed for the western North Pacific tropical cyclone basin and for use at the Joint Typhoon Warning Center. Two statistical methods, linear discriminant analysis and logistic regression, are used to create probabilistic forecasts for seven intensification thresholds: 25-, 30-, 35-, and 40-kt changes in 24 h, 45- and 55-kt changes in 36 h, and a 70-kt change in 48 h (1 kt = 0.514 m s−1). These forecast probabilities are further used to create an equally weighted probability consensus that is then used to trigger deterministic forecasts equal to the intensification thresholds once the probability in the consensus reaches 40%. These deterministic forecasts are incorporated into an operational intensity consensus forecast as additional members, resulting in an improved intensity consensus for these important and difficult-to-predict cases. Development of these methods is based on the 2000–15 typhoon seasons, and independent performance is assessed using the 2016 and 2017 typhoon seasons. In many cases, the probabilities have skill relative to climatology, and adding the rapid intensification deterministic aids to the operational intensity consensus significantly reduces the negative forecast biases.
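A minimal sketch of the probability-consensus trigger described above, assuming the two statistical methods each supply a probability per threshold; the 40% trigger follows the text, while the threshold keys and probabilities are made up.

```python
# Minimal sketch: average two sets of rapid intensification probabilities and,
# where the consensus reaches 40%, emit a deterministic aid equal to that rate.

TRIGGER = 0.40

def ri_deterministic_aids(lda_probs, logistic_probs):
    """Return {(dV_kt, hours): rate} for thresholds whose consensus prob >= 40%."""
    aids = {}
    for thresh in lda_probs:
        consensus = 0.5 * (lda_probs[thresh] + logistic_probs[thresh])
        if consensus >= TRIGGER:
            dv_kt, _hours = thresh
            aids[thresh] = dv_kt          # deterministic forecast = threshold rate
    return aids

lda  = {(25, 24): 0.55, (30, 24): 0.42, (70, 48): 0.10}
logi = {(25, 24): 0.48, (30, 24): 0.35, (70, 48): 0.05}
print(ri_deterministic_aids(lda, logi))   # e.g. {(25, 24): 25}
```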

Kenneth R. Knapp, John A. Knaff, Charles R. Sampson, Gustavo M. Riggio, and Adam D. Schnapp

Abstract

The western North Pacific Ocean is the most active tropical cyclone (TC) basin. However, recent studies are not conclusive on whether the TC activity is increasing or decreasing, at least when calculations are based on maximum sustained winds. For this study, TC minimum central pressure data are analyzed in an effort to better understand historical typhoons. Best-track pressure reports are compared with aircraft reconnaissance observations; little bias is observed. An analysis of wind and pressure relationships suggests changes in data and practices at numerous agencies over the historical record. New estimates of maximum sustained winds are calculated using recent wind–pressure relationships and parameters from International Best Track Archive for Climate Stewardship (IBTrACS) data. The result suggests potential reclassification of numerous typhoons based on these pressure-based lifetime maximum intensities. Historical documentation supports these new intensities in many cases. In short, wind reports in older best-track data are likely of low quality. The annual activity based on pressure estimates is found to be consistent with aircraft reconnaissance and between agencies; however, reconnaissance ended in the western Pacific in 1987. Since then, interagency differences in maximum wind estimates noted here and by others also exist in the minimum central pressure reports. Reconciling these recent interagency differences is made more difficult by the lack of adequate ground truth. This study suggests that efforts be made to intercalibrate the interagency intensity estimation methods. Conducting an independent and homogeneous reanalysis of past typhoon activity is likely necessary to resolve the remaining discrepancies in typhoon intensity records.
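A minimal sketch of re-estimating maximum sustained wind from minimum central pressure with a power-law wind–pressure relationship. The coefficients shown are the classical Atkinson–Holliday values for the western North Pacific, used only as an illustration; the study above uses more recent wind–pressure relationships together with parameters from IBTrACS data.

```python
# Minimal sketch: power-law wind-pressure relationship V = a * (Penv - Pc)**b,
# with illustrative Atkinson-Holliday (1977) coefficients for the western Pacific.

def wpr_vmax_kt(pc_hpa, penv_hpa=1010.0, a=6.7, b=0.644):
    """Maximum sustained wind (kt) estimated from minimum central pressure (hPa)."""
    deficit = max(penv_hpa - pc_hpa, 0.0)
    return a * deficit ** b

for pc in (980, 950, 920):
    print(pc, round(wpr_vmax_kt(pc), 1))
```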

John A. Knaff, Christopher J. Slocum, Kate D. Musgrave, Charles R. Sampson, and Brian R. Strahl

Abstract

A relatively simple method to estimate tropical cyclone (TC) wind radii from routinely available information, including storm data (location, motion, and intensity) and TC size, is introduced. The method is based on a combination of techniques presented in previous works and assumes that TCs are largely symmetric and that asymmetries depend solely on storm motion and location. The method was applied to TC size estimates from two sources: infrared satellite imagery and global model analyses. The validation shows that the methodology is comparable with other objective methods in terms of error statistics. The technique has a variety of practical research and operational applications, some of which are also discussed.
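A minimal sketch of the idea of a symmetric vortex plus a motion-related wavenumber-1 asymmetry: a modified-Rankine-type decay outside the radius of maximum winds with an additive asymmetry aligned with storm motion, searched outward for the 34-kt radius in each quadrant. The decay exponent, asymmetry amplitude, and parameter values are illustrative assumptions, not the published ones.

```python
# Minimal sketch: symmetric modified-Rankine decay plus a motion-aligned
# wavenumber-1 asymmetry; wind radii found by outward search in each quadrant.
import numpy as np

def wind_at(r_nmi, azimuth_deg, vmax_kt, rmw_nmi, motion_dir_deg,
            x=0.5, asym_kt=10.0):
    """Wind (kt) outside the RMW: decaying symmetric part + motion asymmetry."""
    symmetric = vmax_kt * (rmw_nmi / r_nmi) ** x
    asym = asym_kt * np.cos(np.deg2rad(azimuth_deg - motion_dir_deg))
    return symmetric + asym

def radius_of_wind(threshold_kt, azimuth_deg, vmax_kt=90, rmw_nmi=20,
                   motion_dir_deg=45):
    """Outermost radius (n mi) at which the wind still meets the threshold."""
    radii = np.arange(rmw_nmi, 500.0, 1.0)
    winds = wind_at(radii, azimuth_deg, vmax_kt, rmw_nmi, motion_dir_deg)
    outside = radii[winds >= threshold_kt]
    return float(outside.max()) if outside.size else 0.0

# 34-kt radius in each quadrant (NE, SE, SW, NW), storm moving toward 45 deg
print([radius_of_wind(34.0, az) for az in (45, 135, 225, 315)])
```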

Charles R. Sampson, John Kaplan, John A. Knaff, Mark DeMaria, and Chris A. Sisko

Abstract

Rapid intensification (RI) is difficult to forecast, but some progress has been made in developing probabilistic guidance for predicting these events. One such method is the RI index. The RI index is a probabilistic text product available to National Hurricane Center (NHC) forecasters in real time. The RI index gives the probabilities of three intensification rates [25, 30, and 35 kt (24 h)−1; or 12.9, 15.4, and 18.0 m s−1 (24 h)−1] for the 24-h period commencing at the initial forecast time. In this study the authors attempt to develop a deterministic intensity forecast aid from the RI index and then implement it as part of a consensus intensity forecast (arithmetic mean of several deterministic intensity forecasts used in operations) that has been shown to generally have lower mean forecast errors than any of its members. The RI aid is constructed using the highest RI index intensification rate whose probability is at or above a given value (i.e., a probability threshold). Results indicate that the higher the probability threshold, the better the RI aid performs. The RI aid appears to outperform the consensus aids at about the 50% probability threshold. The RI aid also reduces the forecast errors of the operational consensus aids starting at a probability threshold of 30% and reduces negative biases in the forecasts. The authors suggest a 40% threshold for producing the RI aid initially. The 40% threshold is available for approximately 8% of all verifying forecasts, produces an approximately 4% reduction in mean forecast errors for the intensity consensus aids, and reduces the negative biases by approximately 15%–20%. In operations, the threshold could be moved up to maximize gains in skill (reducing availability) or moved down to maximize availability (reducing gains in skill).
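A minimal sketch of the RI aid construction and its use as an extra consensus member, following the recipe above with a 40% probability threshold; the probabilities and member forecasts are made up.

```python
# Minimal sketch: pick the highest RI index rate whose probability meets the
# threshold, turn it into a deterministic 24-h forecast, and add it to an
# intensity consensus.  Numbers below are illustrative.

def ri_aid(ri_index_probs, current_vmax_kt, prob_threshold=0.40):
    """Return a 24-h intensity forecast (kt) or None if no rate qualifies."""
    qualifying = [rate for rate, p in ri_index_probs.items() if p >= prob_threshold]
    if not qualifying:
        return None
    return current_vmax_kt + max(qualifying)    # highest qualifying rate

probs = {25: 0.62, 30: 0.45, 35: 0.20}          # kt per 24 h -> probability
members_24h = [78.0, 85.0, 74.0]                # existing consensus members (kt)

aid = ri_aid(probs, current_vmax_kt=65.0)
if aid is not None:
    members_24h.append(aid)                     # add the RI aid as an extra member
print(aid, sum(members_24h) / len(members_24h))
```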

Mark DeMaria, Charles R. Sampson, John A. Knaff, and Kate D. Musgrave

The mean absolute error of the official tropical cyclone (TC) intensity forecasts from the National Hurricane Center (NHC) and the Joint Typhoon Warning Center (JTWC) shows limited evidence of improvement over the past two decades. This result has sometimes erroneously been used to conclude that little or no progress has been made in the TC intensity guidance models. This article documents statistically significant improvements in operational TC intensity guidance over the past 24 years (1989–2012) in four tropical cyclone basins (Atlantic, eastern North Pacific, western North Pacific, and Southern Hemisphere). Errors from the best available model have decreased at rates of 1%–2% yr−1 at 24–72 h, with faster improvement rates at 96 and 120 h. Although these rates are only about one-third to one-half of the rates of reduction of the track forecast models, most are statistically significant at the 95% level. These error reductions resulted from improvements in statistical–dynamical intensity models and consensus techniques that combine information from statistical–dynamical and dynamical models. The reason that the official NHC and JTWC intensity forecast errors have decreased more slowly than the guidance errors is that, in the first half of the analyzed period, their subjective forecasts were more accurate than any of the available guidance. It is only in the last decade that the objective intensity guidance has become accurate enough to influence the NHC and JTWC forecast errors.
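As a sketch of how a percent-per-year improvement rate like the quoted 1%–2% yr−1 can be estimated, the snippet below fits a linear trend to the logarithm of annual mean intensity errors; the annual error values are synthetic, not the verification data behind the article.

```python
# Minimal sketch: estimate a percent-per-year error trend by fitting a line to
# the log of (synthetic) annual mean intensity errors.
import numpy as np

years  = np.arange(1989, 2013)
noise  = np.exp(np.random.default_rng(1).normal(0.0, 0.03, years.size))
errors = 18.0 * 0.985 ** (years - 1989) * noise      # kt, synthetic example

slope, _ = np.polyfit(years, np.log(errors), 1)      # trend in log error
print(f"error trend: {100 * (np.exp(slope) - 1):+.1f}% per year")
```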

Mark DeMaria, John A. Knaff, Richard Knabb, Chris Lauer, Charles R. Sampson, and Robert T. DeMaria

Abstract

The National Hurricane Center (NHC) Hurricane Probability Program (HPP) was implemented in 1983 to estimate the probability that the center of a tropical cyclone would pass within 60 n mi of a set of specified points out to 72 h. Other than periodic updates of the probability distributions, the HPP remained unchanged through 2005. Beginning in 2006, the HPP products were replaced by those from a new program that estimates probabilities of winds of at least 34, 50, and 64 kt, and incorporates uncertainties in the track, intensity, and wind structure forecasts. This paper describes the new probability model and a verification of the operational forecasts from the 2006–07 seasons.

The new probabilities extend to 120 h for all tropical cyclones in the Atlantic and eastern, central, and western North Pacific to 100°E. Because of the interdependence of the track, intensity, and structure forecasts, a Monte Carlo method is used to generate 1000 realizations by randomly sampling from the operational forecast center track and intensity forecast error distributions from the past 5 yr. The extents of the 34-, 50-, and 64-kt winds for the realizations are obtained from a simple wind radii model and its underlying error distributions.

Verification results show that the new probability model is relatively unbiased and skillful as measured by the Brier skill score, where the skill baseline is the deterministic forecast from the operational centers converted to a binary probabilistic forecast. The model probabilities are also well calibrated and have high confidence based on reliability diagrams.
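A minimal sketch of the Monte Carlo idea: perturb one deterministic forecast with randomly sampled track and intensity errors, assign each realization a simple 34-kt wind radius, and count how often a point of interest falls inside it. The error magnitudes and the radius model are placeholders, not the operational error distributions or wind radii model, and the sketch treats a single point and forecast time rather than the full grid of points and lead times.

```python
# Minimal sketch: Monte Carlo wind speed probability at one point from sampled
# track and intensity errors around a single deterministic forecast.
import numpy as np

rng = np.random.default_rng(42)
N = 1000

# Single deterministic 48-h forecast: position (deg) and intensity (kt)
lat_f, lon_f, vmax_f = 27.0, -75.0, 85.0

# Perturb with randomly sampled track (deg) and intensity (kt) errors
lat_r  = lat_f  + rng.normal(0.0, 1.2, N)
lon_r  = lon_f  + rng.normal(0.0, 1.5, N)
vmax_r = vmax_f + rng.normal(0.0, 12.0, N)

# Crude 34-kt wind radius (n mi) that grows with intensity (placeholder model)
r34_r = np.clip(2.0 * (vmax_r - 34.0), 0.0, None)

# Approximate distance (n mi) from each realization to a point of interest
lat_p, lon_p = 26.0, -74.0
dlat = lat_r - lat_p
dlon = (lon_r - lon_p) * np.cos(np.deg2rad(lat_p))
dist_nmi = 60.0 * np.sqrt(dlat**2 + dlon**2)

prob = np.mean((dist_nmi <= r34_r) & (vmax_r >= 34.0))
print(f"P(34-kt winds at the point) ~ {prob:.2f}")
```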

John A. Knaff, Mark DeMaria, Charles R. Sampson, James E. Peak, James Cummings, and Wayne H. Schubert

Abstract

The upper oceanic temporal response to tropical cyclone (TC) passage is investigated using a 6-yr daily record of data-driven analyses of two measures of upper ocean energy content based on the U.S. Navy’s Coupled Ocean Data Assimilation System and TC best-track records. Composite analyses of these data at points along the TC track are used to investigate the type, magnitude, and persistence of upper ocean response to TC passage, and to infer relationships between routinely available TC information and the upper ocean response. Upper oceanic energy decreases in these metrics are shown to persist for at least 30 days, long enough to possibly affect future TCs. Results also indicate that TC kinetic energy (KE) should be considered when assessing TC impacts on the upper ocean, and that existing TC best-track structure information, which is used here to estimate KE, is sufficient for such endeavors. Analyses also lead to recommendations concerning metrics of upper ocean energy. Finally, parameterizations for the lagged, along-track upper ocean response to TC passage are developed. These show that the sea surface temperature (SST) response is best related to the KE and the latitude, whereas the upper ocean energy is a function of KE, initial upper ocean energy conditions, and translation speed. These parameterizations imply that the 10-day lagged SST cooling is approximately 0.7°C for a “typical” TC at 30° latitude, whereas the same storm results in 10-day (30-day) lagged decreases of upper oceanic energy by about 12 (7) kJ cm−2 and a 0.5°C (0.3°C) cooling of the top 100 m of ocean.
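A minimal sketch of estimating a TC kinetic energy metric from best-track structure information (the 34-, 50-, and 64-kt wind radii), integrating 0.5*rho*v^2 over a 1-m-deep layer with a piecewise-linear wind profile; the profile shape and example radii are assumptions, not the exact KE definition used in the study.

```python
# Minimal sketch: integrate 0.5*rho*v^2 over area (1-m-deep layer) using a
# piecewise-linear wind profile built from best-track wind radii.
import numpy as np

KT_TO_MS = 0.514
NMI_TO_M = 1852.0
RHO_AIR  = 1.15          # kg m-3

def integrated_ke(vmax_kt, rmw_nmi, r64_nmi, r50_nmi, r34_nmi):
    """Kinetic energy (J) of a 1-m layer out to the 34-kt radius."""
    radii_m = np.array([0.0, rmw_nmi, r64_nmi, r50_nmi, r34_nmi]) * NMI_TO_M
    speeds  = np.array([0.0, vmax_kt, 64.0, 50.0, 34.0]) * KT_TO_MS
    r = np.linspace(0.0, radii_m[-1], 2000)
    v = np.interp(r, radii_m, speeds)
    return np.trapz(0.5 * RHO_AIR * v**2 * 2.0 * np.pi * r, r)

ke_j = integrated_ke(vmax_kt=95, rmw_nmi=20, r64_nmi=40, r50_nmi=70, r34_nmi=120)
print(f"integrated kinetic energy ~ {ke_j / 1e12:.1f} TJ")
```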
