Search Results
You are looking at 11–20 of 37 items for Author or Editor: James B. Elsner
Abstract
A hierarchical Bayesian strategy for modeling annual U.S. hurricane counts from the period 1851–2000 is illustrated. The approach is based on a separation of the reliable twentieth-century records from the less precise nineteenth-century records and makes use of Poisson regression. The work extends a recent climatological analysis of U.S. hurricanes by including predictors (covariates) in the form of indices for the El Niño–Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). Model integration is achieved through a Markov chain Monte Carlo algorithm. A Bayesian strategy that uses only hurricane counts from the twentieth century together with noninformative priors compares favorably to a traditional (frequentist) approach and confirms a statistical relationship between climate patterns and coastal hurricane activity. Coinciding La Niña and negative NAO conditions significantly increase the probability of a U.S. hurricane. Hurricane counts from the nineteenth century are bootstrapped to obtain informative priors on the model parameters. The earlier records, though less reliable, allow for a more precise description of U.S. hurricane activity. This translates to a greater certainty in the authors' belief about the effects of ENSO and NAO on coastal hurricane activity. Similar conclusions are drawn when annual U.S. hurricane counts are disaggregated into regional counts. Contingent on the availability of values for the covariates, the models can be used to make predictive inferences about the hurricane season.
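The core of the approach described above is a Poisson regression on annual counts, with the log rate a linear function of the ENSO and NAO covariates, sampled by MCMC. A minimal sketch of that idea follows, using a random-walk Metropolis sampler with flat (noninformative) priors; the counts and index values are made-up illustrative numbers, not the paper's data, and the paper's actual hierarchical model and sampler are more elaborate.

```python
import math
import random

# Hypothetical annual data: U.S. hurricane counts with standardized
# ENSO and NAO index values (illustrative only, not the paper's data).
counts = [2, 0, 1, 3, 1, 0, 2, 4, 1, 2]
enso   = [-1.2, 0.8, 0.3, -0.9, 0.1, 1.1, -0.4, -1.5, 0.6, -0.2]
nao    = [-0.5, 0.4, 0.0, -1.1, 0.3, 0.9, -0.2, -0.8, 0.1, -0.6]

def log_likelihood(beta):
    """Poisson regression: log(lambda_i) = b0 + b1*ENSO_i + b2*NAO_i."""
    b0, b1, b2 = beta
    ll = 0.0
    for y, x1, x2 in zip(counts, enso, nao):
        lam = math.exp(b0 + b1 * x1 + b2 * x2)
        ll += y * math.log(lam) - lam - math.lgamma(y + 1)
    return ll

def metropolis(n_iter=5000, step=0.1, seed=42):
    """Random-walk Metropolis with flat priors on the coefficients."""
    random.seed(seed)
    beta = [0.0, 0.0, 0.0]
    ll = log_likelihood(beta)
    samples = []
    for _ in range(n_iter):
        prop = [b + random.gauss(0, step) for b in beta]
        ll_prop = log_likelihood(prop)
        if math.log(random.random()) < ll_prop - ll:
            beta, ll = prop, ll_prop
        samples.append(list(beta))
    return samples

samples = metropolis()
post_mean = [sum(s[j] for s in samples) / len(samples) for j in range(3)]
print(post_mean)
```

A negative posterior mean on the ENSO coefficient would correspond to the abstract's finding that La Niña (negative index) conditions raise the hurricane rate; informative priors from the nineteenth-century record would replace the flat priors here.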
Abstract
The authors build on their efforts to understand and predict coastal hurricane activity by developing statistical seasonal forecast models that can be used operationally. The modeling strategy uses May–June averaged values representing the North Atlantic Oscillation (NAO), the Southern Oscillation index (SOI), and the Atlantic multidecadal oscillation to predict the probabilities of observing U.S. hurricanes in the months ahead (July–November). The models are developed using a Bayesian approach and make use of data that extend back to 1851 with the earlier hurricane counts (prior to 1899) treated as less certain relative to the later counts. Out-of-sample hindcast skill is assessed using the mean-squared prediction error within a hold-one-out cross-validation exercise. Skill levels are compared to climatology. Predictions show skill above climatology, especially using the NAO + SOI and the NAO-only models. When the springtime NAO values are below normal, there is a heightened risk of U.S. hurricane activity relative to climatology. The preliminary NAO value for 2005 is −0.565 standard deviations so the NAO-only model predicts a 13% increase over climatology of observing three or more U.S. hurricanes.
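The skill assessment above removes one year at a time, predicts it from the remaining years, and compares mean-squared prediction error against climatology. A small sketch of that hold-one-out loop, with made-up counts; the paper's covariate models would be swapped in for the `predict` function being evaluated.

```python
# Hold-one-out cross-validation: each year is predicted from a model
# fit to all the other years, and the mean-squared prediction error
# (MSE) benchmarks skill. Counts are illustrative, not the paper's.
counts = [1, 0, 2, 1, 3, 0, 1, 2, 0, 1, 4, 1]

def loo_mse(predict):
    """MSE when year i is predicted from all years except year i."""
    errs = []
    for i, y in enumerate(counts):
        train = counts[:i] + counts[i + 1:]
        errs.append((y - predict(train)) ** 2)
    return sum(errs) / len(errs)

# Climatology predictor: the mean count of the training years.
climatology = lambda train: sum(train) / len(train)
mse_clim = loo_mse(climatology)
print(round(mse_clim, 3))
```

A covariate model (e.g., NAO-only) "shows skill above climatology" when its hold-one-out MSE is lower than `mse_clim`.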
Abstract
Predictive climate distributions of U.S. landfalling hurricanes are estimated from observational records over the period 1851–2000. The approach is Bayesian, combining the reliable records of hurricane activity during the twentieth century with the less precise accounts of activity during the nineteenth century to produce a best estimate of the posterior distribution on the annual rates. The methodology provides a predictive distribution of future activity that serves as a climatological benchmark. Results are presented for the entire coast as well as for the Gulf Coast, Florida, and the East Coast. Statistics on the observed annual counts of U.S. hurricanes, both for the entire coast and by region, are similar within each of the three consecutive 50-yr periods beginning in 1851. However, evidence indicates that the records during the nineteenth century are less precise. Bayesian theory provides a rational approach for defining hurricane climate that uses all available information and that makes no assumption about whether the 150-yr record of hurricanes has been adequately or uniformly monitored. The analysis shows that the number of major hurricanes expected to reach the U.S. coast over the next 30 yr is 18 and the number of hurricanes expected to hit Florida is 20.
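The Bayesian benchmark described above can be illustrated with the conjugate gamma-Poisson update: a gamma prior on the annual rate combined with Poisson counts gives a gamma posterior, and expected activity over a future horizon is the horizon length times the posterior mean rate. All numbers below are illustrative, and the paper's treatment of the less reliable nineteenth-century counts is more nuanced than this sketch.

```python
# Conjugate gamma-Poisson sketch: Gamma(a, b) prior (shape a, rate b)
# on the annual hurricane rate, updated with observed annual counts.
def posterior_rate(a, b, counts):
    a_post = a + sum(counts)          # shape grows with total count
    b_post = b + len(counts)          # rate grows with years observed
    return a_post, b_post

# Vague prior plus 100 hypothetical years containing 50 hurricanes:
a_post, b_post = posterior_rate(0.5, 0.01, [0, 1] * 50)
mean_rate = a_post / b_post           # posterior mean annual rate
expected_30yr = 30 * mean_rate        # expected count over 30 years
print(round(expected_30yr, 1))
```

The predictive distribution for a future count is negative binomial under this conjugate setup, which is what makes the climatological benchmark a full distribution rather than a single expected number.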
Abstract
The rarity of severe coastal hurricanes implies that empirical estimates of extreme wind speed return levels will be unreliable. Here climatology models derived from extreme value theory are estimated using data from the best-track [Hurricane Database (HURDAT)] record. The occurrence of a hurricane above a specified threshold intensity level is assumed to follow a Poisson distribution, and the distribution of the maximum wind is assumed to follow a generalized Pareto distribution. The likelihood function is the product of the generalized Pareto probabilities for each wind speed estimate. A geographic region encompassing the entire U.S. coast vulnerable to Atlantic hurricanes is of primary interest, but the Gulf Coast, Florida, and the East Coast regions are also considered. Model parameters are first estimated using a maximum likelihood (ML) procedure. Results estimate the 100-yr return level for the entire coast at 157 kt (±10 kt), but at 117 kt (±4 kt) for the East Coast region (1 kt = 0.514 m s−1). Highest wind speed return levels are noted along the Gulf Coast from Texas to Alabama. The study also examines how the extreme wind return levels change depending on climate conditions including El Niño–Southern Oscillation, the Atlantic Multidecadal Oscillation, the North Atlantic Oscillation, and global temperature. The mean 5-yr return level during La Niña (El Niño) conditions is 125 (116) kt, but is 140 (164) kt for the 100-yr return level. This indicates that La Niña years are the most active for the occurrence of strong hurricanes, but that extreme hurricanes are more likely during El Niño years. Although El Niño inhibits hurricane formation in part through wind shear, the accompanying cooler lower stratosphere appears to increase the potential intensity of hurricanes that do form. To take advantage of older, less reliable data, the models are reformulated using Bayesian methods. 
Gibbs sampling is used to integrate the prior over the likelihood to obtain the posterior distributions for the model parameters conditional on global temperature. Higher temperatures are conditionally associated with more strong hurricanes and higher return levels for the strongest hurricane winds. Results compare favorably with an ML approach as well as with recent modeling and observational studies. The maximum possible near-coastal wind speed is estimated to be 208 kt (183 kt) using the Bayesian (ML) approach.
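The return levels discussed above follow from the standard peaks-over-threshold result: with exceedances of a threshold u occurring at a given annual rate and excesses following a generalized Pareto distribution with scale sigma and shape xi, the N-year return level has a closed form. A sketch with hypothetical parameter values (not the paper's fitted estimates):

```python
import math

def return_level(N, u, sigma, xi, rate):
    """N-year return level for winds above threshold u, where
    exceedances occur at `rate` per year and excesses follow a
    generalized Pareto distribution with scale sigma and shape xi."""
    m = rate * N                      # expected exceedances in N years
    if abs(xi) < 1e-12:               # exponential (xi -> 0) limit
        return u + sigma * math.log(m)
    return u + (sigma / xi) * (m ** xi - 1.0)

# Hypothetical coastal parameters: threshold 96 kt, scale 20 kt,
# slightly negative shape (bounded tail), ~1.7 exceedances per year.
print(round(return_level(100, 96.0, 20.0, -0.2, 1.7), 1))
```

A negative shape parameter bounds the tail, which is what allows a finite "maximum possible near-coastal wind speed" of the kind reported in the abstract; conditioning the parameters on ENSO or temperature covariates shifts these return levels.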
Abstract
The authors illustrate a statistical model for predicting tornado activity in the central Great Plains by 1 March. The model predicts the number of tornado reports during April–June using February sea surface temperature (SST) data from the Gulf of Alaska (GAK) and the western Caribbean Sea (WCA). The model uses a Bayesian formulation where the likelihood on the counts is a negative binomial distribution and where the nonstationarity in tornado reporting is included as a trend term plus first-order autocorrelation. Posterior densities for the model parameters are generated using the method of integrated nested Laplacian approximation (INLA). The model yields a 51% increase in the number of tornado reports per degree Celsius increase in SST over the WCA and a 15% decrease in the number of reports per degree Celsius increase in SST over the GAK. These significant relationships are broadly consistent with a physical understanding of large-scale atmospheric patterns conducive to severe convective storms across the Great Plains. The SST covariates explain 11% of the out-of-sample variability in observed F1–F5 tornado reports. The paper demonstrates the utility of INLA for fitting Bayesian models to tornado climate data.
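The percentage effects quoted above are how coefficients on a log link are read: a coefficient beta multiplies the expected count by exp(beta) per unit (one degree Celsius) increase in the covariate. A short sketch converting between the two, using the abstract's reported effect sizes:

```python
import math

def percent_change(beta):
    """Percent change in the expected count per unit covariate
    increase, for a coefficient beta on a log link."""
    return 100.0 * (math.exp(beta) - 1.0)

# Coefficients implied by the reported effects:
beta_wca = math.log(1.51)   # +51% reports per degree C over the WCA
beta_gak = math.log(0.85)   # -15% reports per degree C over the GAK
print(round(percent_change(beta_wca)), round(percent_change(beta_gak)))
```

The same reading applies whether the count likelihood is Poisson or, as here, negative binomial to accommodate overdispersion in the tornado reports.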
Abstract
Property losses from tornadoes in Florida are estimated by combining a 1-km spatial grid of structural values from the Department of Revenue’s 2014 cadastral database with historical tornado events since 1950. There are 91 180 grid cells in the state with at least some structural value. Total and residential structural values total $942 billion and $619 billion, respectively. Over the period 1950 through 2015 there were 3233 individual tornado reports in the state with a peak frequency during June. The property value exposed to tornadoes is estimated using a geometric model for the path. Annual statewide total and residential structural property exposure to tornadoes is estimated at $171 million and $103 million, respectively. Property exposure to tornadoes peaks in February. A regression model quantifies the relationship between actual losses since 2007 and exposures. A doubling of the residential exposure increases actual recorded losses by 26% since 2007, and a doubling of nonresidential exposure increases losses by 21%, controlling for changes over time. Randomization of the historical tornado paths provides alternative exposure scenarios that are used to determine the probability of extreme loss years. Results from the Monte Carlo algorithm indicate a 1% chance that the annual loss will exceed $430 million and a 0.1% chance that it will exceed $1 billion. These findings, and the procedure to obtain them, should help property insurance and reinsurance companies gauge their risk of losses and prioritize their management actions.
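The exceedance probabilities above come from a Monte Carlo exercise over randomized loss scenarios. A toy version of that idea: simulate many years by drawing a Poisson number of events and an exponential loss per event, then count how often the annual total crosses a threshold. The event rate and loss distribution here are illustrative stand-ins chosen so the mean annual total is near the abstract's $171 million exposure; the paper instead randomizes historical tornado paths over the property grid, which produces heavier tails than this sketch.

```python
import math
import random

def poisson_draw(rng, lam):
    """Knuth's Poisson sampler; fine for moderate lam."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def exceedance_probability(threshold, n_years=20000, seed=7):
    """Fraction of simulated years whose total loss exceeds threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_years):
        n = poisson_draw(rng, 49.0)                 # events in one year
        loss = sum(rng.expovariate(1.0 / 3.5e6) for _ in range(n))
        if loss > threshold:
            hits += 1
    return hits / n_years

# Estimated probability that annual losses exceed $250 million:
p_250m = exceedance_probability(250e6)
print(p_250m)
```

Reading off the simulated annual-loss distribution at the 99% and 99.9% quantiles gives exactly the kind of 1-in-100 and 1-in-1000 loss-year figures reported in the abstract.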
Abstract
The authors develop and apply a model that uses hurricane-experience data in counties along the U.S. hurricane coast to give annual exceedance probabilities for maximum tropical cyclone wind events. The model uses a maximum likelihood estimator to determine a linear regression for the scale and shape parameters of the Weibull distribution for maximum wind speed. Model simulations provide quantiles for the probabilities at prescribed hurricane intensities. When the model is run in the raw climatological mode, median probabilities compare favorably with probabilities from the National Hurricane Center’s risk analysis program “HURISK” model. When the model is run in the conditional climatological mode, covariate information in the form of regression equations for the distributional parameters allows probabilities to be estimated that are conditioned on climate factors. Changes to annual hurricane probabilities with respect to a combined effect of a La Niña event and a negative phase of the North Atlantic Oscillation mapped from Texas to North Carolina indicate an increased likelihood of hurricanes along much of the coastline. The largest increases are noted along the central Gulf coast.
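The exceedance probabilities above rest on the Weibull survival function: given fitted scale and shape parameters for a county's maximum wind, the chance of exceeding a given speed in a year with an event is a closed-form expression. A sketch with hypothetical parameters (the paper fits scale and shape by maximum likelihood, possibly regressed on climate covariates):

```python
import math

def exceedance_prob(w, scale, shape, p_event=1.0):
    """Annual probability that maximum wind exceeds w (kt), given a
    Weibull(scale, shape) for event maximum wind and an annual
    probability p_event of any qualifying event at the location."""
    return p_event * math.exp(-((w / scale) ** shape))

# Hypothetical county: scale 90 kt, shape 4, and a 30% annual chance
# of a hurricane-force event (illustrative values only).
p100 = exceedance_prob(100.0, 90.0, 4.0, p_event=0.3)
print(round(p100, 4))
```

Conditioning simply replaces the fixed scale and shape with values predicted from the La Niña and NAO regression equations, shifting these probabilities up or down.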
Abstract
The authors apply a procedure called Bayesian model averaging (BMA) for examining the utility of a set of covariates for predicting the distribution of U.S. hurricane counts and demonstrating a consensus model for seasonal prediction. Hurricane counts are derived from near-coastal tropical cyclones over the period 1866–2008. The covariate set consists of the May–October monthly averages of the Atlantic SST, North Atlantic Oscillation (NAO) index, Southern Oscillation index (SOI), and sunspot number (SSN). BMA produces posterior probabilities indicating the likelihood of the model given the set of annual hurricane counts and covariates. The September SSN covariate appears most often in the higher-probability models. The sign of the September SSN parameter is negative indicating that the probability of a U.S. hurricane decreases with more sunspots. A consensus hindcast for the 2007 and 2008 season is made by averaging forecasts from a large subset of models weighted by their corresponding posterior probability. A cross-validation exercise confirms that BMA can provide more accurate forecasts compared to methods that select a single “best” model. More importantly, the BMA procedure incorporates more of the uncertainty associated with making a prediction of this year’s hurricane activity from data.
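The consensus step above is a weighted average: each candidate model's forecast is weighted by its posterior model probability. A minimal sketch with fabricated model names, probabilities, and forecasts (chosen only to illustrate the mechanics, not taken from the paper):

```python
# Bayesian model averaging sketch: combine forecasts from candidate
# covariate models, weighting each by its posterior model probability.
# All model names, probabilities, and forecasts are hypothetical.
models = {
    "SSN only":       {"post_prob": 0.45, "forecast": 1.4},
    "SSN + SST":      {"post_prob": 0.30, "forecast": 1.7},
    "NAO + SOI":      {"post_prob": 0.15, "forecast": 2.1},
    "intercept only": {"post_prob": 0.10, "forecast": 1.8},
}

def bma_forecast(models):
    """Consensus expected hurricane count: forecasts weighted by
    posterior model probabilities (normalized to sum to 1)."""
    total = sum(m["post_prob"] for m in models.values())
    return sum(m["post_prob"] * m["forecast"]
               for m in models.values()) / total

consensus = bma_forecast(models)
print(round(consensus, 2))
```

Because the average spans many models rather than committing to a single "best" one, the resulting predictive distribution carries the model-selection uncertainty that the abstract highlights.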
Abstract
A flash flood occurred in Milwaukee, Wisconsin, on 6 August 1986 as a result of more than 6 in. (15.2 cm) of rain, much of it falling over a 2-h period. Several possible contributing factors to the excessive rainfall are addressed, and a brief overview is given of the radar imagery and the local National Weather Service (NWS) forecasts issued during the event.
Conventional weather analyses and infrared satellite imagery are used to describe the synoptic-scale weather patterns and cloud features associated with the flash flood. The synoptic patterns are compared with a meteorological composite for heavy rain-producing weather systems associated with relatively warm-topped cloud signatures embedded in comma-shaped cloud features, as described by Spayd (1982). This composite is referred to as a cyclonic circulation system (CCS). A comparison between the observed synoptic patterns and those predicted by the operational numerical model forecasts is also discussed. A climatological survey is performed to document the frequency of heavy rainfall events associated with weather systems similar to the CCS composite during seven warm seasons.
Results show that the synoptic weather patterns attending the Milwaukee flood were similar in many respects to the CCS composite. While the numerical models were deficient in accurately predicting rainfall amounts, they were more than adequate in forecasting some of the features of the CCS composite. The climatology shows that weather systems resembling the composite appear infrequently on a given day during the warm season. However, rainfall in excess of 5 in. (12.7 cm) occurred in a preferred location of nearly 60% of the cases in which these systems were identified.
This article lends support to the value of pattern recognition from satellite imagery, conventional weather analysis, and forecast model output to alert forecasters to the potential for heavy rainfall.
Abstract
The return flow of low-level air from the Gulf of Mexico over the southeast United States during the cool season is studied using numerical models. The key models are a newly developed airmass transformation (AMT) model and a one-dimensional planetary boundary layer (PBL) model. Both are employed to examine the thermodynamic structure over and to the north of the Gulf. Model errors for predicting minimum, maximum, and dewpoint temperatures at the surface during both offshore and onshore phases of the return-flow cycle are analyzed. PBL model forecasts indicate that soil moisture values obtained from the Eta Model improve accuracy. It is shown that forecasts of maximum temperature for coastal locations are sensitive to the soil moisture used in the PBL model. The AMT model performs well in determining boundary layer parameters since it includes horizontal advective processes. The AMT model is also able to predict the regional differences caused by different surface forcing while passing over land or sea. Results lead to a strategy for making predictions during cool-season return-flow events over and around the Gulf of Mexico.