Browse

Showing items 141–150 of 415 for: Earth Interactions
Nancy H. F. French, Donald McKenzie, Tyler Erickson, Benjamin Koziol, Michael Billmire, K. Arthur Endsley, Naomi K. Yager Scheinerman, Liza Jenkins, Mary Ellen Miller, Roger Ottmar, and Susan Prichard

Abstract

As carbon modeling tools become more comprehensive, spatial data are needed to improve quantitative maps of carbon emissions from fire. The Wildland Fire Emissions Information System (WFEIS) provides mapped estimates of carbon emissions from historical forest fires in the United States through a web browser. WFEIS improves access to data and provides a consistent approach to estimating emissions at landscape, regional, and continental scales. The system taps into data and tools developed by the U.S. Forest Service to describe fuels, fuel loadings, and fuel consumption and merges information from the U.S. Geological Survey (USGS) and National Aeronautics and Space Administration on fire location and timing. Currently, WFEIS provides web access to Moderate Resolution Imaging Spectroradiometer (MODIS) burned-area products for North America and U.S. fire-perimeter maps from the USGS Monitoring Trends in Burn Severity project, overlays them on 1-km fuel maps for the United States, and calculates fuel consumption and emissions with an open-source version of the Consume model. Mapped fuel moisture is derived from daily meteorological data from remote automated weather stations. In addition to tabular output, WFEIS produces results in multiple vector and raster formats. This paper provides an overview of the WFEIS system, including its web-based functionality and the datasets used for emissions estimates. WFEIS operates on the web and is built from open-source software components that work with open international standards such as Keyhole Markup Language (KML). Example emissions outputs from WFEIS show that results vary widely across the many ecosystems of North America and are consistent with previous emissions modeling estimates and products.
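
As a rough illustration of the bookkeeping that an emissions system like WFEIS automates, the sketch below applies the standard first-order fire-emissions relation (burned area × fuel load × consumption fraction × emission factor). The function and every number here are illustrative assumptions, not WFEIS data, parameters, or API calls.

```python
# Illustrative first-order fire-emissions bookkeeping (Seiler-Crutzen style).
# All values are hypothetical placeholders, not WFEIS data or API calls.

def fire_emissions(burned_area_ha, fuel_load_t_ha, consumption_frac, ef_g_kg):
    """Emitted mass (tonnes) of one species for one burned polygon."""
    fuel_consumed_t = burned_area_ha * fuel_load_t_ha * consumption_frac
    return fuel_consumed_t * ef_g_kg / 1000.0  # g emitted per kg burned -> t/t

# Example: 1,200 ha burn, 25 t/ha fuel load, 40% consumed, CO2 EF ~1,600 g/kg
print(fire_emissions(1200, 25.0, 0.40, 1600.0))  # ~19,200 t CO2
```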

Full access
Diandong Ren and Lance M. Leslie

Abstract

As a conveyor belt transferring inland ice to the ocean, ice shelves shed mass through large, systematic tabular calving, which also plays a major role in the fluctuation of buttressing forces. Tabular iceberg calving involves two stages: first is systematic cracking, which develops after the forward-slanting front reaches a limiting extension length determined by gravity–buoyancy imbalance; second is fatigue separation. The latter has greater variability, producing calving irregularity. Whereas vertical shear in the ice flow determines the timing of the systematic cracking, wave action is decisive for the ensuing viscoplastic fatigue. Because the frontal section has its own resonance frequency, it responds strongly only to waves of similar frequency. With a flow-dependent, nonlocal attrition scheme, the present ice model [Scalable Extensible Geoflow Model for Environmental Research-Ice flow submodel (SEGMENT-Ice)] describes an entire ice-shelf life cycle. It is found that most East Antarctic ice shelves have higher resonance frequencies, and the fatigue of viscoplastic ice is significantly enhanced by shoaling waves from both storm surges and infragravity waves (~5 × 10⁻³ Hz). The two largest embayed ice shelves have resonance frequencies within the range of tsunami waves. When shelves approach their critical extension lengths, perturbations from about four consecutive tsunami events can cause complete separation of tabular icebergs. For shelves with resonance frequencies matching storm surge waves, future reduction of sea ice may permit much larger deflections from shoaling, storm-generated ocean waves. Although the total mass of the Ross Ice Shelf (RIS) varies little in the twenty-first century, the mass turnover quickens and the ice conveyor belt is ~40% more efficient by the late twenty-first century, reaching 70 km³ yr⁻¹. The mass distribution shifts oceanward, favoring future tabular calving.
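
For a sense of where an ice-shelf resonance frequency comes from, the back-of-envelope sketch below treats the frontal section as a uniform Euler–Bernoulli cantilever beam. This is a textbook toy model chosen here for illustration only, not SEGMENT-Ice, and the material constants and geometry are assumed values.

```python
import math

# Toy Euler-Bernoulli estimate of the first bending-mode frequency of an
# ice-shelf frontal section treated as a rectangular cantilever beam.
# Material values and geometry are illustrative assumptions.
E_ICE = 9.0e9    # Young's modulus of ice (Pa), assumed
RHO_ICE = 917.0  # ice density (kg m^-3)

def first_mode_hz(thickness_m, length_m):
    """f1 = (lambda1^2 / (2*pi)) * (h / L^2) * sqrt(E / (12*rho)),
    with lambda1 ~ 1.875 for the first cantilever mode."""
    lam1 = 1.875
    return (lam1**2 / (2 * math.pi)) * (thickness_m / length_m**2) \
        * math.sqrt(E_ICE / (12 * RHO_ICE))

# A 300 m thick, 10 km long frontal section comes out near 1.5e-3 Hz,
# the same order as the ~5e-3 Hz infragravity band cited above.
print(f"{first_mode_hz(300.0, 10_000.0):.2e} Hz")
```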

Full access
Scott Curtis, Douglas W. Gamble, and Jeff Popke

Abstract

This study uses empirical models to examine the potential impact of climate change on crop water need in Jamaica, based on a range of 100-yr projections from phase 5 of the Coupled Model Intercomparison Project (CMIP5). As expected, crop water need increases with rising temperature and decreasing precipitation, especially in May–July. Comparing the temperature and precipitation impacts on crop water need indicates that the 25th percentile of CMIP5 temperature change (moderate warming) yields a larger crop water deficit than the 75th percentile of CMIP5 precipitation change (wet winter and dry summer), but the 25th percentile of CMIP5 precipitation change (substantial drying) dominates the 75th percentile of CMIP5 temperature change (extreme warming). Over the annual cycle, warming contributes to larger crop water deficits from November to April, while drying has a greater influence from May to October. All experiments decrease crop suitability, with the largest impact from March to August.
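
A minimal sketch of the kind of crop-water-need calculation such empirical models perform is given below, using the standard Hargreaves–Samani reference-evapotranspiration formula. The study's actual models and coefficients may differ, and all numeric inputs are invented placeholders.

```python
import math

# Hedged sketch of a crop-water-deficit bookkeeping. The Hargreaves-Samani
# ET0 formula is standard; the paper's actual method may differ.

def hargreaves_et0(tmean_c, tmax_c, tmin_c, ra_mm_day):
    """Reference evapotranspiration (mm/day), Hargreaves-Samani (1985)."""
    return 0.0023 * ra_mm_day * (tmean_c + 17.8) * math.sqrt(tmax_c - tmin_c)

def crop_water_deficit(kc, et0_mm_day, precip_mm_day):
    """Daily deficit (mm): crop demand minus rainfall, floored at zero."""
    return max(kc * et0_mm_day - precip_mm_day, 0.0)

# Warmer/drier conditions deepen the deficit (all numbers illustrative):
et0 = hargreaves_et0(28.0, 32.0, 24.0, 15.0)  # roughly a May day in Jamaica
print(crop_water_deficit(1.05, et0, 3.0))     # ~1.7 mm/day deficit
```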

Full access
M. P. Maneta and N. Silverman
Full access
Olivia Kellner and Dev Niyogi

Abstract

Land surface heterogeneity affects mesoscale interactions, including the evolution of severe convection, but its contribution to tornadogenesis is not well known. Indiana is selected as an example for an assessment of documented tornadoes against land surface heterogeneity, to better understand the spatial distribution of tornadoes. The assessment is developed in a GIS framework using data from 1950 to 2012 and investigates temporal patterns, the effect of ENSO, antecedent rainfall linkages, population density, land use/land cover, and topography, placing each in the context of land surface heterogeneity.

Spatial analysis of tornado touchdown locations reveals several relationships with regard to cities, population density, land-use classification, and topography. A total of 61% of F0–F5 tornadoes in Indiana touched down within 1 km of urban land use, and 43% within 1 km of land classified as forest, suggesting a possible role of land-use surface roughness in tornado occurrence. The correlation of tornado touchdown points with population density suggests a moderate to strong relationship. A temporal analysis of tornado days shows favored times of day, months, seasons, and active tornado years. Tornado days for 1950–2012 are compared to antecedent rainfall and ENSO phase, neither of which shows a discernible relationship with the average number of annual tornado days. Analysis of tornado touchdowns and topography does not indicate any strong relationship between touchdowns and elevation. Results suggest a possible signature of land surface heterogeneity, particularly that around urban and forested land cover, in tornado climatology.
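
A hedged sketch of the GIS proximity query behind statistics like the 61% figure is shown below. The geometries are invented stand-ins; a real analysis would use the tornado touchdown database and a land-use layer in a projected, metric coordinate system.

```python
# Share of touchdown points within 1 km of urban land-use polygons.
# Geometries below are invented placeholders, not Indiana data.
from shapely.geometry import Point, box

urban = [box(0, 0, 5_000, 5_000)]  # stand-in urban patch (coordinates in m)
touchdowns = [Point(500, 500), Point(5_800, 200), Point(20_000, 20_000)]

near = sum(
    1 for pt in touchdowns
    if any(pt.distance(poly) <= 1_000 for poly in urban)  # within 1 km
)
print(f"{near / len(touchdowns):.0%} of touchdowns within 1 km of urban land")
```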

Full access
Lauren E. Hay, Jacob LaFontaine, and Steven L. Markstrom

Abstract

The accuracy of statistically downscaled general circulation model (GCM) simulations of daily surface climate for historical conditions (1961–99) and the implications when they are used to drive hydrologic and stream temperature models were assessed for the Apalachicola–Chattahoochee–Flint River basin (ACFB). The ACFB is a 50 000 km² basin located in the southeastern United States. Three GCMs were statistically downscaled, using an asynchronous regional regression model (ARRM), to ⅛° grids of daily precipitation and minimum and maximum air temperature. These ARRM-based climate datasets were used as input to the Precipitation-Runoff Modeling System (PRMS), a deterministic, distributed-parameter, physical-process watershed model used to simulate and evaluate the effects of various combinations of climate and land use on watershed response. The ACFB was divided into 258 hydrologic response units (HRUs) in which the components of flow (groundwater, subsurface, and surface) are computed in response to climate, land surface, and subsurface characteristics of the basin. Daily simulations of flow components from PRMS were used with the climate to simulate in-stream water temperatures using the Stream Network Temperature (SNTemp) model, a mechanistic, one-dimensional heat transport model for branched stream networks.

The climate, hydrology, and stream temperature for historical conditions were evaluated by comparing model outputs produced from historical climate forcings developed from gridded station data (GSD) versus those produced from the three statistically downscaled GCMs using the ARRM methodology. The PRMS and SNTemp models were forced with the GSD, and the outputs produced were treated as "truth." This allowed for a spatial comparison by HRU of the GSD-based output with ARRM-based output. Distributional similarities between GSD- and ARRM-based model outputs were assessed using the two-sample Kolmogorov–Smirnov (KS) test in combination with descriptive metrics such as the mean and variance and an evaluation of rare and sustained events. In general, precipitation and streamflow quantities were negatively biased in the downscaled GCM outputs, and results indicate that the downscaled GCM simulations consistently underestimate the largest precipitation events relative to the GSD. The KS test results indicate that ARRM-based air temperatures are similar to GSD at the daily time step for the majority of the ACFB, with subweekly averaging likely needed for stream temperature. Depending on GCM and spatial location, ARRM-based precipitation and streamflow require averaging of up to 30 days to become similar to the GSD-based output.
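
The following sketch illustrates the comparison strategy described above: a two-sample KS test applied to "truth" and downscaled series at successively longer averaging windows. The synthetic gamma-distributed series stand in for one HRU's daily precipitation output and are not the study's data.

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins: "gsd" plays the gridded-station truth, "arrm" a
# slightly dry-biased downscaled simulation, mimicking the bias noted above.
rng = np.random.default_rng(0)
gsd = rng.gamma(2.0, 2.0, 365 * 39)    # 1961-99 daily values
arrm = rng.gamma(2.0, 1.8, 365 * 39)

def window_avg(x, n):
    """Means over non-overlapping n-day windows."""
    return x[: len(x) // n * n].reshape(-1, n).mean(axis=1)

# Widen the averaging window, as the study does, and re-test similarity.
for n in (1, 7, 30):
    stat, p = ks_2samp(window_avg(gsd, n), window_avg(arrm, n))
    print(f"{n:2d}-day means: KS D = {stat:.3f}, p = {p:.3g}")
```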

Evaluation of model skill for historical conditions suggests some guidelines for the use of future projections: while it seems reasonable to place greater confidence in evaluation metrics that perform well historically, this does not guarantee that those metrics will accurately reflect model outputs under future climatic conditions. Results from this study indicate no single "best" model, but the breadth of the analysis can give product users an indication of how applicable the results are to their particular problem. Because model outputs for historical conditions can carry significant biases, it may be more appropriate to examine future projections in terms of each individual GCM's change relative to its own historical conditions.

Full access
Eugene S. Takle, Christopher J. Anderson, Jeffrey Andresen, James Angel, Roger W. Elmore, Benjamin M. Gramig, Patrick Guinan, Steven Hilberg, Doug Kluck, Raymond Massey, Dev Niyogi, Jeanne M. Schneider, Martha D. Shulski, Dennis Todey, and Melissa Widhalm

Abstract

Corn is the most widely grown crop in the Americas, with annual U.S. production of approximately 332 million metric tons. Improved climate forecasts, together with climate-related decision tools based on those forecasts, could substantially reduce uncertainty and increase profitability for corn producers. The purpose of this paper is to acquaint climate information developers, climate information users, and climate researchers with an overview of the weather conditions throughout the year that affect corn production, as well as the forecast content and timing needed by producers. The authors provide a graphic depicting the climate-informed decision cycle, which they call the climate forecast–decision cycle calendar for corn.

Full access
Hal F. Needham and Barry D. Keim

Abstract

This paper investigates relationships between storm surge heights and tropical cyclone wind speeds at 3-h increments preceding landfall. A unique dataset containing hourly tropical cyclone position and wind speed is used in conjunction with a comprehensive storm surge dataset that provides maximum water levels for 189 surge events along the U.S. Gulf Coast from 1880 to 2011. A landfall/surge classification was developed for analyzing the relationship between surge magnitudes and prelandfall winds. Ten of the landfall/surge event types provided usable data, producing 117 wind–surge events that were incorporated into this study. Statistical analysis indicates that storm surge heights correlate better with prelandfall tropical cyclone winds than with wind speeds at landfall; wind speeds 18 h before landfall correlated best with surge heights. Raising wind speeds to powers greater than 1 produced the best wind–surge fit. Higher wind–surge correlations were found for a more recent sample of 63 wind–surge events since 1960. The highest correlation for these data was found when wind speeds 18 h before landfall were raised to a power of 2.2, which provided R² values approaching 0.70; the R² values at landfall for the same data were only 0.44. Such results will be useful to storm surge modelers, coastal scientists, and emergency management personnel, especially when tropical cyclones rapidly strengthen or weaken while approaching the coast.
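
A small sketch of the exponent search described above: regress surge height on wind speed raised to a candidate power and keep the power with the highest R². The data are synthetic stand-ins for the 63 post-1960 wind–surge events, with a "true" exponent planted at 2.2 purely for illustration.

```python
import numpy as np

# Synthetic stand-ins for the post-1960 wind-surge events.
rng = np.random.default_rng(1)
wind = rng.uniform(25, 70, 63)                      # 18-h prelandfall wind (m/s)
surge = 5e-4 * wind**2.2 + rng.normal(0, 0.3, 63)   # surge height (m)

def r_squared(x, y):
    """R^2 of the ordinary least-squares line y ~ a + b*x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    return 1 - resid.var() / y.var()

# Scan candidate powers and keep the best-fitting one.
powers = np.arange(1.0, 3.01, 0.1)
scores = [r_squared(wind**p, surge) for p in powers]
best = powers[int(np.argmax(scores))]
print(f"best exponent ~ {best:.1f}, R^2 = {max(scores):.2f}")
```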

Full access
Hal F. Needham and Barry D. Keim

Abstract

In the past decade, several large tropical cyclones have generated catastrophic storm surges along the U.S. Gulf and Atlantic Coasts. These storms include Hurricanes Katrina, Ike, Isaac, and Sandy. This study uses empirical analysis of tropical cyclone data and maximum storm surge observations to investigate the role of tropical cyclone size in storm surge generation. Storm surge data are provided by the Storm Surge Database (SURGEDAT), a global storm surge database, while a unique tropical cyclone size dataset built from nine different data sources provides the size of the radius of maximum winds (Rmax) and the radii of 63 (34 kt), 93 (50 kt), and 119 km h⁻¹ (64 kt) winds. Statistical analysis reveals an inverse correlation between storm surge magnitudes and Rmax sizes, while positive correlations exist between storm surge heights and the radius of 63 (34 kt), 93 (50 kt), and 119 km h⁻¹ (64 kt) winds. Storm surge heights correlate best with the prelandfall radius of 93 km h⁻¹ (50 kt) winds, with a Spearman correlation coefficient value of 0.82, significant at the 99.9% confidence level. Many historical examples support these statistical results. For example, the 1900 Galveston hurricane, the 1935 Labor Day hurricane, and Hurricane Camille all had small Rmax sizes but generated catastrophic surges. Hurricane Katrina provides an example of the importance of large wind fields, as hurricane-force winds extending 167 km [90 nautical miles (n mi)] from the center of circulation enabled this large storm to generate a higher storm surge level than Hurricane Camille along the same stretch of coast, even though Camille's prelandfall winds were slightly stronger than Katrina's. These results may be useful to the storm surge modeling community, as well as disaster science and emergency management professionals, who will benefit from better understanding the role of tropical cyclone size for storm surge generation.
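
A minimal sketch of the rank-correlation calculation behind the 0.82 figure is given below; the radius and surge values are synthetic placeholders, not SURGEDAT or wind-radii records.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic placeholders: Spearman's rho between surge height and the
# prelandfall radius of 50-kt winds.
rng = np.random.default_rng(2)
r50 = rng.uniform(50, 300, 40)                 # radius of 50-kt winds (km)
surge = 0.02 * r50 + rng.normal(0, 1.0, 40)    # toy monotone relation (m)

rho, p = spearmanr(r50, surge)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
```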

Full access
Anthony E. Akpan, Mahesh Narayanan, and T. Harinarayana

Abstract

A constructive back-propagation code designed to run as a single-hidden-layer, feed-forward neural network (SLFFNN) has been adapted and used to estimate subsurface temperature from a small volume of magnetotelluric (MT)-derived electrical resistivity data and borehole thermograms. The code was adapted to use a looping procedure to search for better initialization conditions that can optimally solve nonlinear problems using the random weight initialization approach. Available one-dimensional (1D) MT-derived resistivity data and borehole temperature records from the Tattapani geothermal field, central India, were collated and digitized at 10-m intervals. The two datasets were paired to form a set of input–output pairs, and the paired data were randomized, standardized, and partitioned into three mutually exclusive subsets. The subsets comprised 52%, 30%, and 18% of the data for training, validation, and testing, respectively, in the first training phase; in the second phase, which was meant to assess the influence of training data volume on network performance, the training share was increased to 61% and the testing share reduced to 9%. Standard statistical techniques, including the adjusted coefficient of determination (adjusted R²), relative error (ɛ), absolute average deviation (AAD), root-mean-square error (RMSE), and regression analysis, were used to quantitatively rate network performance. A manually designed two-hidden-layer, feed-forward network with 20 and 15 neurons in the first and second layers was also used to solve the same problem. Performance ratings for the SLFFNN were 0.97, 3.75, 4.09, 1.41, 1.18, and 1.08 for adjusted R², AAD, ɛ, RMSE, slope, and intercept, respectively, compared with an ɛ of 20.33 for the manually designed network. The SLFFNN is thus a structurally flexible network that performs well despite the small volume of data used to test it, although further testing is needed.
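
The sketch below illustrates the random-restart idea the authors describe, looping over weight initializations of a single-hidden-layer network and keeping the one that validates best. It uses scikit-learn's MLPRegressor rather than the authors' constructive back-propagation code, and the resistivity–temperature data are synthetic stand-ins.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the paired resistivity-temperature profiles.
rng = np.random.default_rng(3)
X = rng.uniform(0, 3, (400, 1))                # e.g., log10 resistivity
y = 40 + 25 * X[:, 0] + rng.normal(0, 2, 400)  # temperature (deg C)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Loop over random weight initializations; keep the best validation score.
best_score, best_net = -np.inf, None
for seed in range(20):
    net = MLPRegressor(hidden_layer_sizes=(15,), max_iter=2000, random_state=seed)
    net.fit(X_tr, y_tr)
    score = net.score(X_val, y_val)            # R^2 on the validation split
    if score > best_score:
        best_score, best_net = score, net
print(f"best validation R^2 = {best_score:.3f}")
```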

Full access