Abstract
The Ganges–Brahmaputra–Meghna (GBM) river basins exhibit extremes in surface water availability at seasonal to annual time scales. However, because of a lack of basinwide hydrological data from in situ platforms, whether real time or historical, water management has been quite challenging for the 630 million inhabitants. Under such circumstances, a large-scale and spatially distributed hydrological model, forced with more widely available satellite meteorological data, can be useful for generating high-resolution basinwide hydrological state variable data [streamflow, runoff, and evapotranspiration (ET)] and for decision making on water management. The Variable Infiltration Capacity (VIC) hydrological model was therefore set up for the entire GBM basin at spatial scales ranging from 12.5 to 25 km to generate daily fluxes of surface water availability (runoff and streamflow). Results indicate that, with the selection of representative gridcell size and application of correction factors to the evapotranspiration calculation, it is possible to significantly improve streamflow simulation and overcome some of the insufficient sampling and data quality issues in the ungauged basins. Assessment of the skill of satellite precipitation forcing datasets revealed that the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42RT product fared comparatively better than the Climate Prediction Center (CPC) morphing technique (CMORPH) product for simulation of streamflow. The general conclusion that emerges from this study is that spatially distributed hydrologic modeling for water management is feasible for the GBM basins under the scenario of inadequate in situ data availability. Satellite precipitation forcing datasets provide the necessary skill for water balance studies at interannual and interseasonal scales. However, further improvement in skill may be required if these datasets are to be used for flood management at daily to weekly time scales and within a data assimilation framework.
Abstract
This study considers eastern Antilles (11°–18°N, 64°–57°W) weather and climate interactions in the context of the 2013 Christmas storm. This unseasonal event caused flash flooding in Grenada, St. Vincent, St. Lucia, Martinique, and Dominica from 24 to 25 December 2013, despite having winds <15 m s−1. The meteorological scenario and short-term forecasts are analyzed. At the low level, a convective wave propagated westward while near-equatorial upper westerly winds surged with eastward passage of a trough. The combination of tropical moisture, cyclonic vorticity, and uplift resulted in rain rates greater than 30 mm h−1, with many stations reporting 200 mm. Although forecast rainfall was low and a few hours late, weather services posted flood warnings in advance. At the climate scale, the fresh Orinoco River plume brought into the region by the North Brazil Current, together with solar radiation greater than 200 W m−2, enabled sea temperatures to reach 28°C and supplied convective available potential energy greater than 1800 J kg−1. Climate change model simulations are compared with reference fields and trends are analyzed in the eastern Antilles. While temperatures are set to increase, the frequency of flood events appears to decline in the future.
Abstract
As carbon modeling tools become more comprehensive, spatial data are needed to improve quantitative maps of carbon emissions from fire. The Wildland Fire Emissions Information System (WFEIS) provides mapped estimates of carbon emissions from historical forest fires in the United States through a web browser. WFEIS improves access to data and provides a consistent approach to estimating emissions at landscape, regional, and continental scales. The system taps into data and tools developed by the U.S. Forest Service to describe fuels, fuel loadings, and fuel consumption and merges information from the U.S. Geological Survey (USGS) and National Aeronautics and Space Administration on fire location and timing. Currently, WFEIS provides web access to Moderate Resolution Imaging Spectroradiometer (MODIS) burned area for North America and U.S. fire-perimeter maps from the Monitoring Trends in Burn Severity products from the USGS, overlays them on 1-km fuel maps for the United States, and calculates fuel consumption and emissions with an open-source version of the Consume model. Mapped fuel moisture is derived from daily meteorological data from remote automated weather stations. In addition to tabular output results, WFEIS produces multiple vector and raster formats. This paper provides an overview of the WFEIS system, including the web-based system functionality and datasets used for emissions estimates. WFEIS operates on the web and is built using open-source software components that work with open international standards such as keyhole markup language (KML). Examples of emissions outputs from WFEIS are presented showing that the system provides results that vary widely across the many ecosystems of North America and are consistent with previous emissions modeling estimates and products.
Abstract
As a conveyor belt transferring inland ice to ocean, ice shelves shed mass through large, systematic tabular calving, which also plays a major role in the fluctuation of the buttressing forces. Tabular iceberg calving involves two stages: first is systematic cracking, which develops after the forward-slanting front reaches a limiting extension length determined by gravity–buoyancy imbalance; second is fatigue separation. The latter has greater variability, producing calving irregularity. Whereas ice flow vertical shear determines the timing of the systematic cracking, wave actions are decisive for ensuing viscoplastic fatigue. Because the frontal section has its own resonance frequency, it reverberates only to waves of similar frequency. With a flow-dependent, nonlocal attrition scheme, the present ice model [Scalable Extensible Geoflow Model for Environmental Research-Ice flow submodel (SEGMENT-Ice)] describes an entire ice-shelf life cycle. It is found that most East Antarctic ice shelves have higher resonance frequencies, and the fatigue of viscoplastic ice is significantly enhanced by shoaling waves from both storm surges and infragravity waves (~5 × 10−3 Hz). The two largest embayed ice shelves have resonance frequencies within the range of tsunami waves. When approaching critical extension lengths, perturbations from about four consecutive tsunami events can cause complete separation of tabular icebergs from shelves. For shelves with resonance frequencies matching storm surge waves, future reduction of sea ice may impose much larger deflections from shoaling, storm-generated ocean waves. Although the Ross Ice Shelf (RIS) total mass varies little in the twenty-first century, the mass turnover quickens and the ice conveyor belt is ~40% more efficient by the late twenty-first century, reaching 70 km3 yr−1. The mass distribution shifts oceanward, favoring future tabular calving.
Abstract
This study uses empirical models to examine the potential impact of climate change, based on a range of 100-yr phase 5 of the Coupled Model Intercomparison Project (CMIP5) projections, on crop water need in Jamaica. As expected, crop water need increases with rising temperature and decreasing precipitation, especially in May–July. Comparing the temperature and precipitation impacts on crop water need indicates that the 25th percentile of CMIP5 temperature change (moderate warming) yields a larger crop water deficit than the 75th percentile of CMIP5 precipitation change (wet winter and dry summer), but the 25th percentile of CMIP5 precipitation change (substantial drying) dominates the 75th percentile of CMIP5 temperature change (extreme warming). Over the annual cycle, the warming contributes to larger crop water deficits from November to April, while the drying has a greater influence from May to October. All experiments decrease crop suitability, with the largest impact from March to August.
Abstract
Land surface heterogeneity affects mesoscale interactions, including the evolution of severe convection. However, its contribution to tornadogenesis is not well known. Indiana is selected as an example to present an assessment of documented tornadoes and land surface heterogeneity to better understand the spatial distribution of tornadoes. This assessment is developed using a GIS framework taking data from 1950 to 2012 and investigates the following topics: temporal analysis, effect of ENSO, antecedent rainfall linkages, population density, land use/land cover, and topography, placing them in the context of land surface heterogeneity.
Spatial analysis of tornado touchdown locations reveals several spatial relationships with regard to cities, population density, land-use classification, and topography. In Indiana, 61% of F0–F5 tornadoes touched down within 1 km of urban land use, and 43% touched down within 1 km of land classified as forest, suggesting a possible role of land-use surface roughness in tornado occurrences. The correlation of tornado touchdown points to population density suggests a moderate to strong relationship. A temporal analysis of tornado days shows favored times of day, months, seasons, and active tornado years. Tornado days for 1950–2012 are compared to antecedent rainfall and ENSO phases, both of which show no discernible relationship with the average number of annual tornado days. Analysis of tornado touchdowns and topography does not indicate any strong relationship between tornado touchdowns and elevation. Results suggest a possible signature of land surface heterogeneity—particularly that around urban and forested land cover—in tornado climatology.
Abstract
The accuracy of statistically downscaled general circulation model (GCM) simulations of daily surface climate for historical conditions (1961–99) and the implications when they are used to drive hydrologic and stream temperature models were assessed for the Apalachicola–Chattahoochee–Flint River basin (ACFB). The ACFB is a 50 000 km2 basin located in the southeastern United States. Three GCMs were statistically downscaled, using an asynchronous regional regression model (ARRM), to ⅛° grids of daily precipitation and minimum and maximum air temperature. These ARRM-based climate datasets were used as input to the Precipitation-Runoff Modeling System (PRMS), a deterministic, distributed-parameter, physical-process watershed model used to simulate and evaluate the effects of various combinations of climate and land use on watershed response. The ACFB was divided into 258 hydrologic response units (HRUs) in which the components of flow (groundwater, subsurface, and surface) are computed in response to climate, land surface, and subsurface characteristics of the basin. Daily simulations of flow components from PRMS were used with the climate to simulate in-stream water temperatures using the Stream Network Temperature (SNTemp) model, a mechanistic, one-dimensional heat transport model for branched stream networks.
The climate, hydrology, and stream temperature for historical conditions were evaluated by comparing model outputs produced from historical climate forcings developed from gridded station data (GSD) versus those produced from the three statistically downscaled GCMs using the ARRM methodology. The PRMS and SNTemp models were forced with the GSD and the outputs produced were treated as “truth.” This allowed for a spatial comparison by HRU of the GSD-based output with ARRM-based output. Distributional similarities between GSD- and ARRM-based model outputs were compared using the two-sample Kolmogorov–Smirnov (KS) test in combination with descriptive metrics such as the mean and variance and an evaluation of rare and sustained events. In general, precipitation and streamflow quantities were negatively biased in the downscaled GCM outputs, and results indicate that the downscaled GCM simulations consistently underestimate the largest precipitation events relative to the GSD. The KS test results indicate that ARRM-based air temperatures are similar to GSD at the daily time step for the majority of the ACFB, with perhaps subweekly averaging for stream temperature. Depending on GCM and spatial location, ARRM-based precipitation and streamflow requires averaging of up to 30 days to become similar to the GSD-based output.
Evaluation of the model skill for historical conditions suggests some guidelines for use of future projections; while it seems correct to place greater confidence in evaluation metrics which perform well historically, this does not necessarily mean those metrics will accurately reflect model outputs for future climatic conditions. Results from this study indicate no “best” overall model, but the breadth of analysis can be used to give the product users an indication of the applicability of the results to address their particular problem. Since results for historical conditions indicate that model outputs can have significant biases associated with them, the range in future projections examined in terms of change relative to historical conditions for each individual GCM may be more appropriate.
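The distributional comparison described above centers on the two-sample Kolmogorov–Smirnov test. A minimal sketch of that procedure, using synthetic gamma-distributed precipitation in place of the study's actual GSD- and ARRM-based outputs (which are not reproduced here), might look like this:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical daily precipitation samples: a "truth" (GSD-like) series
# and a downscaled-GCM-like series with a low (negative) bias, mirroring
# the bias direction reported in the study.
gsd_precip = rng.gamma(shape=2.0, scale=3.0, size=1000)
gcm_precip = rng.gamma(shape=2.0, scale=2.5, size=1000)

# Two-sample KS test: the statistic is the maximum distance between the
# two empirical CDFs; a small p-value indicates dissimilar distributions.
stat, p_value = ks_2samp(gsd_precip, gcm_precip)
print(f"KS statistic: {stat:.3f}, p-value: {p_value:.3g}")
```

In the study this comparison was made per hydrologic response unit and repeated at longer averaging windows (up to 30 days) to find the time scale at which the distributions become similar.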
Abstract
Corn is the most widely grown crop in the Americas, with annual production in the United States of approximately 332 million metric tons. Improved climate forecasts, together with climate-related decision tools for corn producers based on these improved forecasts, could substantially reduce uncertainty and increase profitability for corn producers. The purpose of this paper is to acquaint climate information developers, climate information users, and climate researchers with an overview of weather conditions throughout the year that affect corn production as well as forecast content and timing needed by producers. The authors provide a graphic depicting the climate-informed decision cycle, which they call the climate forecast–decision cycle calendar for corn.
Abstract
This paper investigates relationships between storm surge heights and tropical cyclone wind speeds at 3-h increments preceding landfall. A unique dataset containing hourly tropical cyclone position and wind speed is used in conjunction with a comprehensive storm surge dataset that provides maximum water levels for 189 surge events along the U.S. Gulf Coast from 1880 to 2011. A landfall/surge classification was developed for analyzing the relationship between surge magnitudes and prelandfall winds. Ten of the landfall/surge event types provided usable data, producing 117 wind–surge events that were incorporated into this study. Statistical analysis indicates that storm surge heights correlate better with prelandfall tropical cyclone winds than with wind speeds at landfall. Wind speeds 18 h before landfall correlated best with surge heights. Raising wind speeds to exponential powers produced the best wind–surge fit. Higher wind–surge correlations were found when testing a more recent sample of data that contained 63 wind–surge events since 1960. The highest correlation for these data was found when wind speeds 18 h before landfall were raised to a power of 2.2, which provided R2 values that approached 0.70. The R2 values at landfall for these same data were only 0.44. Such results will be useful to storm surge modelers, coastal scientists, and emergency management personnel, especially when tropical cyclones rapidly strengthen or weaken while approaching the coast.
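The fitting procedure the abstract describes—regressing surge height against wind speed raised to a trial power and comparing R2 values—can be sketched as follows. The data here are synthetic (generated to scale roughly as wind^2.2); the study's actual 63-event sample is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 18-h-prelandfall wind speeds (m/s) and surge heights (m),
# with surge scaling roughly as wind^2.2 plus multiplicative noise.
wind = rng.uniform(20, 70, size=63)
surge = 1.5e-3 * wind**2.2 * rng.lognormal(0.0, 0.15, size=63)

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals**2)
    ss_tot = np.sum((y - y.mean())**2)
    return 1.0 - ss_res / ss_tot

# Compare the fit of surge against wind raised to candidate powers.
for power in (1.0, 2.2):
    print(f"wind^{power}: R2 = {r_squared(wind**power, surge):.2f}")
```

Sweeping the exponent and the lead time (0–24 h before landfall in 3-h steps) over the observed events would reproduce the kind of search that identified 18 h and a power of 2.2 as the best combination.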
Abstract
In the past decade, several large tropical cyclones have generated catastrophic storm surges along the U.S. Gulf and Atlantic Coasts. These storms include Hurricanes Katrina, Ike, Isaac, and Sandy. This study uses empirical analysis of tropical cyclone data and maximum storm surge observations to investigate the role of tropical cyclone size in storm surge generation. Storm surge data are provided by the Storm Surge Database (SURGEDAT), a global storm surge database, while a unique tropical cyclone size dataset built from nine different data sources provides the size of the radius of maximum winds (Rmax) and the radii of 63 (34 kt), 93 (50 kt), and 119 km h−1 (64 kt) winds. Statistical analysis reveals an inverse correlation between storm surge magnitudes and Rmax sizes, while positive correlations exist between storm surge heights and the radius of 63 (34 kt), 93 (50 kt), and 119 km h−1 (64 kt) winds. Storm surge heights correlate best with the prelandfall radius of 93 km h−1 (50 kt) winds, with a Spearman correlation coefficient value of 0.82, significant at the 99.9% confidence level. Many historical examples support these statistical results. For example, the 1900 Galveston hurricane, the 1935 Labor Day hurricane, and Hurricane Camille all had small Rmax sizes but generated catastrophic surges. Hurricane Katrina provides an example of the importance of large wind fields, as hurricane-force winds extending 167 km [90 nautical miles (n mi)] from the center of circulation enabled this large storm to generate a higher storm surge level than Hurricane Camille along the same stretch of coast, even though Camille’s prelandfall winds were slightly stronger than Katrina’s. These results may be useful to the storm surge modeling community, as well as disaster science and emergency management professionals, who will benefit from better understanding the role of tropical cyclone size for storm surge generation.
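The rank-based correlation underlying the headline result (Spearman coefficient of 0.82 between surge height and the prelandfall radius of 50-kt winds) can be illustrated with a short sketch. The values below are invented for illustration; the SURGEDAT observations and the nine-source size dataset are not reproduced:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)

# Hypothetical storm sample: radius of 50-kt (93 km/h) winds in km,
# and peak surge in m, with surge increasing with wind-field size.
r50 = rng.uniform(50, 300, size=40)
surge = 0.02 * r50 + rng.normal(0.0, 0.8, size=40)

# Spearman correlation works on ranks, so it captures any monotonic
# surge-size relationship without assuming linearity.
rho, p_value = spearmanr(r50, surge)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")
```

A rank correlation is a sensible choice here because surge magnitudes are heavy-tailed and the surge–size relationship need not be linear; a positive rho with a small p-value is the analogue of the study's reported result.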