Search Results

Showing 1–5 of 5 items for Author or Editor: J. Jared Rennie (all content)
Matthew J. Menne, Claude N. Williams, Byron E. Gleason, J. Jared Rennie, and Jay H. Lawrimore

Abstract

We describe a fourth version of the Global Historical Climatology Network (GHCN)-monthly (GHCNm) temperature dataset. Version 4 (v4) fulfills the goal of aligning GHCNm temperature values with the GHCN-daily dataset and makes use of data from previous versions of GHCNm as well as data collated under the auspices of the International Surface Temperature Initiative. GHCNm v4 has many thousands of additional stations compared to version 3 (v3) both historically and with short time-delay updates. The greater number of stations as well as the use of records with incomplete data during the base period provides for greater global coverage throughout the record compared to earlier versions. Like v3, the monthly averages are screened for random errors and homogenized to address systematic errors. New to v4, uncertainties are calculated for each station series, and regional uncertainties scale directly from the station uncertainties. Correlated errors in the station series are quantified by running the homogenization algorithm as an ensemble. Additional uncertainties associated with incomplete homogenization and use of anomalies are then incorporated into the station ensemble. Further uncertainties are quantified at the regional level, the most important of which is for incomplete spatial coverage. Overall, homogenization has a smaller impact on the v4 global trend compared to v3, though adjustments lead to much greater consistency than between the unadjusted versions. The adjusted v3 global mean therefore falls within the range of uncertainty for v4 adjusted data. Likewise, annual anomaly uncertainties for the other major independent land surface air temperature datasets overlap with GHCNm v4 uncertainties.
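The ensemble approach to quantifying correlated station errors can be illustrated with a minimal sketch. The data and ensemble size here are hypothetical stand-ins; the real product reruns the pairwise homogenization algorithm with varied choices to generate its ensemble, which this toy does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: 100 ensemble members, each a 50-year annual
# anomaly series for one station. In GHCNm v4 each member comes from
# rerunning the homogenization algorithm with different settings.
n_members, n_years = 100, 50
underlying = np.linspace(-0.2, 0.8, n_years)            # assumed trend
ensemble = underlying + rng.normal(0.0, 0.1, (n_members, n_years))

# Best estimate is the ensemble mean; the spread across members
# quantifies the correlated (homogenization-related) error.
best_estimate = ensemble.mean(axis=0)
station_sigma = ensemble.std(axis=0, ddof=1)

# A 95% uncertainty range for each year, scaling directly from the
# station-level spread as the abstract describes.
lower = best_estimate - 1.96 * station_sigma
upper = best_estimate + 1.96 * station_sigma
```

Regional uncertainties then follow by propagating these station-level spreads through the same averaging used to form regional means.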

Open access
J. Jared Rennie, Michael A. Palecki, Sean P. Heuser, and Howard J. Diamond

Abstract

Extreme heat is one of the most pressing climate risks in the United States and is exacerbated by a warming climate and aging population. Much work in heat health has focused only on temperature-based metrics, which do not fully measure the physiological impact of heat stress on the human body. The U.S. Climate Reference Network (USCRN) consists of 139 sites across the United States and includes meteorological parameters that fully encompass human tolerance to heat, including relative humidity, wind, and solar radiation. Hourly and 5-min observations from USCRN are used to develop heat exposure products, including heat index (HI), apparent temperature (AT), and wet-bulb globe temperature (WBGT). Validation of this product is conducted with nearby airport and mesonet stations, with reanalysis data used to fill in data gaps. Using these derived heat products, two separate analyses are conducted. The first is based on standardized anomalies, which place current heat state in the context of a long-term climate record. In the second analysis, heat events are classified by time spent at various levels of severity of conditions. There is no consensus as to what defines a heat event, so absolute thresholds (i.e., ≥30.0°, 35.0°, and 40.0°C) and relative thresholds (≥90th, 95th, and 98th percentiles) are compared. The efficacy of the product set is demonstrated using an extreme heat case study in the southeastern United States. While no heat exposure metric is deemed superior, each has its own advantages and caveats, especially in the context of public communication.
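The standardized-anomaly and relative-threshold ideas above can be sketched briefly. The data here are synthetic placeholders for a site's derived heat metric (e.g., daily maximum heat index); the station names, record lengths, and thresholds are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily maximum heat index values (deg C) at one site:
# a 30-year warm-season baseline and one recent summer to evaluate.
baseline = rng.normal(30.0, 4.0, 30 * 92)   # 92 summer days x 30 years
recent = rng.normal(34.0, 4.0, 92)          # one anomalously hot summer

# Standardized anomaly: how unusual each day is relative to the
# site's own long-term record (first analysis in the abstract).
z = (recent - baseline.mean()) / baseline.std(ddof=1)

# Relative thresholds drawn from the baseline distribution
# (second analysis): days at or above the site's 95th percentile.
p90, p95, p98 = np.percentile(baseline, [90, 95, 98])
severe_days = int((recent >= p95).sum())
```

Because the relative thresholds are computed per site, the same definition of a "heat event" adapts automatically to local climatology, unlike the fixed 30/35/40°C cutoffs.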

Restricted access
Boyin Huang, Matthew J. Menne, Tim Boyer, Eric Freeman, Byron E. Gleason, Jay H. Lawrimore, Chunying Liu, J. Jared Rennie, Carl J. Schreck III, Fengying Sun, Russell Vose, Claude N. Williams, Xungang Yin, and Huai-Min Zhang

Abstract

This analysis estimates uncertainty in the NOAA global surface temperature (GST) version 5 (NOAAGlobalTemp v5) product, which consists of sea surface temperature (SST) from the Extended Reconstructed SST version 5 (ERSSTv5) and land surface air temperature (LSAT) from the Global Historical Climatology Network monthly version 4 (GHCNm v4). Total uncertainty in SST and LSAT consists of parametric and reconstruction uncertainties. The parametric uncertainty represents the dependence of SST/LSAT reconstructions on selecting 28 (6) internal parameters of SST (LSAT), and is estimated by a 1000-member ensemble from 1854 to 2016. The reconstruction uncertainty represents the residual error of using a limited number of 140 (65) modes for SST (LSAT). Uncertainty is quantified at the global scale as well as the local grid scale. Uncertainties in SST and LSAT at the local grid scale are larger in the earlier period (1880s–1910s) and during the two world wars due to sparse observations, then decrease in the modern period (1950s–2010s) due to increased data coverage. Uncertainties in SST and LSAT at the global scale are much smaller than those at the local grid scale due to error cancellations by averaging. Uncertainties are smaller in SST than in LSAT due to smaller SST variabilities. Comparisons show that GST and its uncertainty in NOAAGlobalTemp v5 are comparable to those in other internationally recognized GST products. The differences between NOAAGlobalTemp v5 and other GST products are within their uncertainties at the 95% confidence level.
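The error-cancellation point above — global-scale uncertainty being much smaller than grid-scale uncertainty — can be shown with a toy parametric ensemble. The field below is random stand-in data, not an ERSSTv5 or GHCNm reconstruction, and the grid is unweighted for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1000-member parametric ensemble of gridded anomalies
# (members x grid cells). The real ensemble perturbs the 28 SST and
# 6 LSAT internal reconstruction parameters.
n_members, n_cells = 1000, 500
field = rng.normal(0.5, 0.3, (n_members, n_cells))

# Local (grid-scale) uncertainty: member spread at each cell.
local_sigma = field.std(axis=0, ddof=1)

# Global-scale uncertainty: spread of the area mean across members.
# Averaging cancels uncorrelated errors, so this is far smaller.
global_means = field.mean(axis=1)
global_sigma = global_means.std(ddof=1)

assert global_sigma < local_sigma.mean()
```

With uncorrelated cell errors the global spread shrinks roughly as 1/√N relative to the typical local spread; correlated errors in a real reconstruction cancel less completely, which is why the ensemble is needed rather than this analytic shortcut.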

Open access
Carl J. Schreck III, Stephen Bennett, Jason M. Cordeira, Jake Crouch, Jenny Dissen, Andrea L. Lang, David Margolin, Adam O’Shay, Jared Rennie, Thomas Ian Schneider, and Michael J. Ventrice

Abstract

Day-to-day volatility in natural gas markets is driven largely by variability in heating demand, which is in turn dominated by cool-season temperature anomalies over the northeastern quadrant of the United States (“Midwest–East”). Energy traders rely on temperature forecasts at horizons of 2–4 weeks to anticipate those fluctuations in demand. Forecasts from dynamical models are widely available, so the markets react quickly to changes in the model predictions. Traders often work with meteorologists who leverage teleconnections from the tropics and the Arctic to improve upon the model forecasts. This study demonstrates how natural gas prices react to Midwest–East temperatures using the anomalous winters of 2011/12 and 2013/14. These examples also illustrate how energy meteorologists use teleconnections from the Arctic and the tropics to forecast heating demand.

Winter 2011/12 was exceptionally warm, consistent with the positive Arctic Oscillation (AO). March 2012 was a fitting exclamation point on the winter as it featured the largest warm anomaly for the United States above the twentieth-century climatology of any month since 1895. The resulting lack of heating demand led to record surpluses of natural gas storage and spurred prices downward to an 11-yr low in April 2012. In sharp contrast, winter 2013/14 was unusually cold. An anomalous Alaskan ridge led to cold air being transported from Siberia into the United States, despite the AO generally being positive. The ensuing swell in heating demand exhausted the surplus natural gas inventory, and prices rose to their highest levels since the beginning of the global recession in 2008.
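The demand side of the story above is conventionally measured with heating degree days (HDD), where each day contributes max(0, 65°F − daily mean temperature). The weekly values below are invented round numbers meant only to contrast a warm winter with a cold one, not observed Midwest–East data.

```python
# Heating degree days: the standard proxy for heating demand.
def hdd(daily_mean_f):
    """Sum of max(0, 65 - T) over daily mean temperatures in deg F."""
    return sum(max(0.0, 65.0 - t) for t in daily_mean_f)

# Hypothetical Midwest-East daily means for the same January week in
# a warm winter (2011/12-like) vs. a cold winter (2013/14-like).
warm_week = [48, 45, 50, 52, 47, 44, 49]
cold_week = [12, 8, 15, 5, 10, 18, 14]

print(hdd(warm_week))   # 120.0 degree days: weak demand
print(hdd(cold_week))   # 373.0 degree days: roughly triple the demand
```

Natural gas traders track cumulative HDD anomalies against climatology for exactly this reason: a sustained HDD deficit (2011/12) builds storage surpluses and depresses prices, while a surplus (2013/14) drains inventories and lifts them.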

Full access
Christopher C. Hennon, Kenneth R. Knapp, Carl J. Schreck III, Scott E. Stevens, James P. Kossin, Peter W. Thorne, Paula A. Hennon, Michael C. Kruk, Jared Rennie, Jean-Maurice Gadéa, Maximilian Striegl, and Ian Carley

Abstract

The global tropical cyclone (TC) intensity record, even in modern times, is uncertain because the vast majority of storms are only observed remotely. Forecasters determine the maximum wind speed using a patchwork of sporadic observations and remotely sensed data. A popular tool that aids forecasters is the Dvorak technique—a procedural system that estimates the maximum wind based on cloud features in IR and/or visible satellite imagery. Inherently, the application of the Dvorak procedure is open to subjectivity. Heterogeneities are also introduced into the historical record with the evolution of operational procedures, personnel, and observing platforms. These uncertainties impede our ability to identify the relationship between tropical cyclone intensities and, for example, recent climate change.

A global reanalysis of TC intensity using experts is difficult because of the large number of storms. We will show that it is possible to effectively reanalyze the global record using crowdsourcing. Through modifying the Dvorak technique into a series of simple questions that amateurs (“citizen scientists”) can answer on a website, we are working toward developing a new TC dataset that resolves intensity discrepancies in several recent TCs. Preliminary results suggest that the performance of human classifiers in some cases exceeds that of an automated Dvorak technique applied to the same data for times when the storm is transitioning into a hurricane.
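Turning many amateur answers into one classification requires an aggregation rule. The consensus scheme below is a generic majority-vote sketch with a made-up agreement threshold and category names, not the project's actual retrieval algorithm.

```python
from collections import Counter

def consensus(classifications, min_agreement=0.5):
    """Aggregate citizen-scientist answers for one satellite image.

    Returns the plurality category if it reaches the (hypothetical)
    agreement threshold, otherwise None to flag the image for review.
    """
    counts = Counter(classifications)
    category, votes = counts.most_common(1)[0]
    if votes / len(classifications) >= min_agreement:
        return category
    return None

# Ten volunteers classify a storm's cloud pattern from IR imagery.
answers = ["curved band", "curved band", "eye", "curved band",
           "shear", "curved band", "curved band", "eye",
           "curved band", "curved band"]
print(consensus(answers))   # "curved band" (7 of 10 agree)
```

Ambiguous images, where no category clears the threshold, are the interesting cases: they tend to coincide with transitional stages such as a storm intensifying to hurricane strength, where human classifiers showed an edge over the automated Dvorak technique.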

Full access