Search Results

Showing items 31–37 of 37 for Author or Editor: James B. Elsner
Mark C. Bove, James B. Elsner, Chris W. Landsea, Xufeng Niu, and James J. O'Brien

Changes in the frequency of U.S. landfalling hurricanes with respect to the El Niño–Southern Oscillation (ENSO) cycle are assessed. Ninety-eight years (1900–97) of U.S. landfalling hurricanes are classified, using sea surface temperature anomaly data from the equatorial Pacific Ocean, as occurring during an El Niño (anomalously warm tropical Pacific waters), La Niña (anomalously cold tropical Pacific waters), or neither (neutral).

The mean and variance of U.S. landfalling hurricanes are determined for each ENSO phase. Each grouping is then tested against a Poisson distribution using a chi-squared test. Resampling with a “bootstrap” technique is then used to determine the 5% and 95% confidence limits of the results. Last, the frequency of major U.S. landfalling hurricanes (sustained winds of 96 kt or more) with respect to ENSO phase is assessed empirically.
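The Poisson goodness-of-fit test and bootstrap confidence limits described above can be sketched as follows. The annual landfall counts, the bin layout, and the number of resamples are all illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical annual U.S. landfall counts for one ENSO phase
# (illustrative values only, not the paper's record).
counts = np.array([0, 1, 2, 1, 0, 3, 1, 2, 0, 1, 1, 2, 0, 1, 2])

# Chi-squared goodness-of-fit test against a Poisson distribution
# whose rate is estimated from the sample mean.
lam = counts.mean()
observed = np.bincount(counts)                         # tallies of 0, 1, 2, 3+ hurricane years
expected = stats.poisson.pmf(np.arange(len(observed)), lam) * len(counts)
expected[-1] += stats.poisson.sf(len(observed) - 1, lam) * len(counts)  # fold the tail into the last bin
chi2 = ((observed - expected) ** 2 / expected).sum()
p_value = stats.chi2.sf(chi2, df=len(observed) - 2)    # one df lost for estimating lambda

# Bootstrap 5th/95th percentile limits on the mean annual count.
boot_means = [rng.choice(counts, size=len(counts), replace=True).mean()
              for _ in range(5000)]
lo, hi = np.percentile(boot_means, [5, 95])
```

A large `p_value` means the counts are consistent with a Poisson process; the bootstrap interval `(lo, hi)` brackets the phase's mean landfall rate.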

The results indicate that El Niño events reduce the probability of a U.S. landfalling hurricane, while La Niña events increase the chance of a U.S. hurricane strike. Quantitatively, the probability of two or more U.S. hurricane landfalls is 28% during an El Niño, 48% during neutral conditions, and 66% during La Niña. The frequencies of landfalling major hurricanes show similar results: the probability of one or more major hurricane landfalls is 23% during El Niño but 58% during neutral conditions and 63% during La Niña.

Full access
James B. Elsner, Shawn W. Lewers, Jill C. Malmstadt, and Thomas H. Jagger

Abstract

The strongest hurricanes over the North Atlantic Ocean are getting stronger, with the increase related to rising ocean temperature. Here, the authors develop a procedure for estimating future wind losses from hurricanes and apply it to Eglin Air Force Base along the northern coast of Florida. The method combines models of the statistical distributions for extreme wind speed and average sea surface temperature over the Gulf of Mexico with dynamical models for tropical cyclone wind fields and damage losses. Results show that the 1-in-100-yr hurricane from the twentieth century picked at random to occur in the year 2100 would result in wind damage that is 36% [(13%, 76%) = 90% confidence interval] greater solely as a consequence of the projected warmer waters in the Gulf of Mexico. The method can be applied elsewhere along the coast with modeling assumptions modified for regional conditions.
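The 1-in-100-yr wind at the heart of such a loss estimate can be sketched with a generalized extreme value (GEV) fit to annual maximum winds. The annual maxima below are synthetic, and the SST coupling and damage model of the paper are omitted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical annual-maximum wind speeds (m/s) at a coastal site.
annual_max = stats.genextreme.rvs(c=-0.1, loc=45, scale=8, size=100,
                                  random_state=rng)

# Fit a GEV to the annual maxima, then read off the 1-in-100-yr wind:
# the speed exceeded with probability 1/100 in any given year.
shape, loc, scale = stats.genextreme.fit(annual_max)
wind_100yr = stats.genextreme.isf(1 / 100, shape, loc, scale)
```

Shifting the fitted location/scale with projected SST, as the paper does, would then raise `wind_100yr` and the wind losses derived from it.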

Full access
Se-Hwan Yang, Nam-Young Kang, James B. Elsner, and Youngsin Chun

Abstract

The climate of 2015 was characterized by a strong El Niño, global warmth, and record-setting tropical cyclone (TC) intensity for western North Pacific typhoons. In this study, the highest TC intensity in 32 years (1984–2015) is shown to be a consequence of above normal TC activity—following natural internal variation—and greater efficiency of intensity. The efficiency of intensity (EINT) is termed the “blasting” effect and refers to typhoon intensification at the expense of occurrence. Statistical models show that the EINT is mostly due to the anomalous warmth in the environment indicated by global mean sea surface temperature. In comparison, the EINT due to El Niño is negligible. This implies that the record-setting intensity of 2015 might not have occurred without environmental warming and suggests that a year with even greater TC intensity is possible in the near future when above normal activity coincides with another record EINT due to continued multidecadal warming.

Open access
Sarah Strazzo, James B. Elsner, Timothy LaRow, Daniel J. Halperin, and Ming Zhao

Abstract

Of broad scientific and public interest is the reliability of global climate models (GCMs) to simulate future regional and local tropical cyclone (TC) occurrences. Atmospheric GCMs are now able to generate vortices resembling actual TCs, but questions remain about their fidelity to observed TCs. Here the authors demonstrate a spatial lattice approach for comparing actual with simulated TC occurrences regionally using observed TCs from the International Best Track Archive for Climate Stewardship (IBTrACS) dataset and GCM-generated TCs from the Geophysical Fluid Dynamics Laboratory (GFDL) High Resolution Atmospheric Model (HiRAM) and Florida State University (FSU) Center for Ocean–Atmospheric Prediction Studies (COAPS) model over the common period 1982–2008. Results show that the spatial distribution of TCs generated by the GFDL model compares well with observations globally, although there are areas of over- and underprediction, particularly in parts of the Pacific Ocean. Difference maps using the spatial lattice highlight these discrepancies. Additionally, comparisons focusing on the North Atlantic Ocean basin are made. Results confirm a large area of overprediction by the FSU COAPS model in the south-central portion of the basin. Relevant to projections of future U.S. hurricane activity is the fact that both models underpredict TC activity in the Gulf of Mexico.
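The lattice comparison can be sketched with a rectangular latitude–longitude grid and random stand-in positions for observed and model-generated TCs (the paper uses an equal-area hexagonal lattice and real track data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical TC positions (lon, lat): an "observed" set and a "model" set.
obs = rng.uniform([-100, 10], [-20, 50], size=(300, 2))
sim = rng.uniform([-100, 10], [-20, 50], size=(300, 2))

# A regular 10-degree lattice over the domain.
lon_edges = np.arange(-100, -19, 10.0)
lat_edges = np.arange(10, 51, 10.0)

obs_counts, _, _ = np.histogram2d(obs[:, 0], obs[:, 1], bins=[lon_edges, lat_edges])
sim_counts, _, _ = np.histogram2d(sim[:, 0], sim[:, 1], bins=[lon_edges, lat_edges])

# Difference map: positive cells mark model overprediction,
# negative cells mark underprediction.
diff = sim_counts - obs_counts
```

Mapping `diff` cell by cell is what exposes regional discrepancies, such as the over- and underprediction the abstract notes in parts of the Pacific and the Gulf of Mexico.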

Full access
James B. Elsner, Sarah E. Strazzo, Thomas H. Jagger, Timothy LaRow, and Ming Zhao

Abstract

A statistical model for the intensity of the strongest hurricanes has been developed and a new methodology introduced for estimating the sensitivity of the strongest hurricanes to changes in sea surface temperature. Here, the authors use this methodology on observed hurricanes and hurricanes generated from two global climate models (GCMs). Hurricanes over the North Atlantic Ocean during the period 1981–2010 show a sensitivity of 7.9 ± 1.19 m s⁻¹ K⁻¹ (standard error; SE) when over seas warmer than 25°C. In contrast, hurricanes over the same region and period generated from the GFDL High Resolution Atmospheric Model (HiRAM) show a significantly lower sensitivity, with the highest at 1.8 ± 0.42 m s⁻¹ K⁻¹ (SE). A similarly weak sensitivity is found using hurricanes generated from the Florida State University Center for Ocean–Atmospheric Prediction Studies (FSU-COAPS) model, with the highest at 2.9 ± 2.64 m s⁻¹ K⁻¹ (SE). A statistical refinement of HiRAM-generated hurricane intensities raises the sensitivity to a maximum of 6.9 ± 3.33 m s⁻¹ K⁻¹ (SE), but the increase is offset by additional uncertainty associated with the refinement. Results suggest that caution should be exercised when interpreting GCM scenarios of future hurricane intensity, given the low sensitivity of limiting GCM-generated hurricane intensity to ocean temperature.
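One simple way to estimate such a sensitivity, in the spirit of the paper's approach but not identical to its methodology, is to bin storms by the SST beneath them, take each bin's maximum intensity, and fit a line through the bin maxima; the slope is a sensitivity in m s⁻¹ K⁻¹. All numbers below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-storm data: SST (deg C) under the storm and
# lifetime-maximum wind (m/s), with a built-in slope of about 6 m/s per K.
sst = rng.uniform(25, 30, size=500)
wind = 20 + 6 * (sst - 25) + rng.gamma(2, 5, size=500)

# Sensitivity of the *strongest* storms: bin by SST, take each bin's
# maximum intensity, and fit a line through the bin maxima.
edges = np.arange(25, 30.5, 0.5)
centers = 0.5 * (edges[:-1] + edges[1:])
bin_max = np.array([wind[(sst >= lo) & (sst < hi)].max()
                    for lo, hi in zip(edges[:-1], edges[1:])])
sensitivity, intercept = np.polyfit(centers, bin_max, 1)
```

Running the same estimator on observed storms and on GCM-generated storms is what allows the side-by-side comparison the abstract reports.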

Full access
Nam-Young Kang, Myeong-Soon Lim, James B. Elsner, and Dong-Hyun Shin

Abstract

The accuracy of track forecasts for tropical cyclones (TCs) is well studied, but less attention has been paid to the representation of track-forecast uncertainty. Here, Bayesian updating is employed on the radius of the 70% probability circle using 72-h operational forecasts with comparisons made to the classical approach based on the empirical cumulative density (ECD). Despite an intuitive and efficient way of treating track errors, the ECD approach is statistically less informative than Bayesian updating. Built on a solid statistical foundation, Bayesian updating is shown to be a useful technique that can serve as a substitute for the classical approach in representing operational TC track-forecast uncertainty.
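The contrast between the two estimators can be sketched under an exponential model for track errors with a conjugate gamma prior on the rate; the error values and prior parameters are assumed for illustration, and the ECD estimate is simply the empirical 70th percentile:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 72-h track errors (km) for a season of forecasts.
errors = rng.exponential(scale=150, size=60)

# Classical (ECD) estimate: the empirical 70th percentile of the errors.
r70_ecd = np.percentile(errors, 70)

# Bayesian updating: errors ~ Exponential(rate) with a Gamma(a0, b0)
# prior on the rate gives a Gamma(a0 + n, b0 + sum of errors) posterior.
a0, b0 = 2.0, 200.0                     # weakly informative prior (assumed)
a_post = a0 + errors.size
b_post = b0 + errors.sum()
rate_post_mean = a_post / b_post

# Radius of the 70% probability circle under the posterior mean rate.
r70_bayes = -np.log(0.3) / rate_post_mean
```

The Bayesian radius borrows strength from the model and the prior, so it can be updated forecast by forecast, whereas the ECD percentile uses the sample alone.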

Full access
James B. Elsner, Tyler Fricker, Holly M. Widen, Carla M. Castillo, John Humphreys, Jihoon Jung, Shoumik Rahman, Amanda Richard, Thomas H. Jagger, Tachanat Bhatrasataponkul, Christian Gredzens, and P. Grady Dixon

Abstract

The statistical relationship between elevation roughness and tornado activity is quantified using a spatial model that controls for the effect of population on the availability of reports. Across a large portion of the central Great Plains the model shows that areas with uniform elevation tend to have more tornadoes on average than areas with variable elevation. The effect amounts to a 2.3% [(1.6%, 3.0%) = 95% credible interval] increase in the rate of a tornado occurrence per meter of decrease in elevation roughness, defined as the highest minus the lowest elevation locally. The effect remains unchanged if the model is fit to the data starting with the year 1995. The effect strengthens for the set of intense tornadoes and is stronger using an alternative definition of roughness. The elevation-roughness effect appears to be strongest over Kansas, but it is statistically significant over a broad domain that extends from Texas to South Dakota. The research is important for developing a local climatological description of tornado occurrence rates across the tornado-prone region of the Great Plains.
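A Poisson regression of cell-level tornado counts on elevation roughness, controlling for population, can be sketched as follows. The grid cells and coefficients are synthetic, and the spatial-correlation structure of the paper's model is omitted:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Synthetic grid cells: elevation roughness (m), log population, tornado counts.
n = 400
roughness = rng.uniform(0.0, 200.0, n)
log_pop = rng.normal(10.0, 1.0, n)
counts = rng.poisson(np.exp(1.0 - 0.004 * roughness + 0.1 * (log_pop - 10.0)))

# Design matrix with standardized predictors for a well-conditioned fit.
z_rough = (roughness - roughness.mean()) / roughness.std()
z_pop = (log_pop - log_pop.mean()) / log_pop.std()
X = np.column_stack([np.ones(n), z_rough, z_pop])

# Fit log(rate) = X @ beta by maximizing the Poisson log-likelihood.
def neg_loglik(beta):
    eta = X @ beta
    return np.sum(np.exp(eta) - counts * eta)

beta = minimize(neg_loglik, x0=np.zeros(3), method="BFGS").x

# Back-transform to a per-meter coefficient, then to the percent increase
# in tornado rate per meter *decrease* in roughness.
b_per_meter = beta[1] / roughness.std()
pct_per_meter = (np.exp(-b_per_meter) - 1.0) * 100.0
```

With roughness entering the log rate negatively, `pct_per_meter` is the quantity reported in the abstract (a 2.3% increase per meter of decreased roughness in the paper; the synthetic value here is smaller by construction).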

Full access