Search Results

You are looking at 1–8 of 8 items for

  • Author or Editor: Jared Rennie
  • All content
Stephen Strader, Jared Rennie, and Becky DePodwin
Open access
Ronald D. Leeper, Jared Rennie, and Michael A. Palecki

Abstract

The U.S. Cooperative Observer Program (COOP) network was formed in the early 1890s to provide daily observations of temperature and precipitation. However, manual observations from naturally aspirated temperature sensors and unshielded precipitation gauges often led to uncertainties in atmospheric measurements. Advancements in observational technology (ventilated temperature sensors, well-shielded precipitation gauges) and measurement techniques (automation and redundant sensors), which improve observation quality, were adopted by NOAA’s National Climatic Data Center (NCDC) in establishing the U.S. Climate Reference Network (USCRN). USCRN was designed to provide high-quality and continuous observations to monitor long-term temperature and precipitation trends, and to provide an independent reference against which other networks can be compared. The purpose of this study is to evaluate how diverse technological and operational choices between the USCRN and COOP programs impact temperature and precipitation observations. Naturally aspirated COOP sensors generally had warmer (+0.48°C) daily maximum and cooler (−0.36°C) minimum temperatures than USCRN, with considerable variability among stations. For precipitation, COOP reported slightly more precipitation overall (1.5%), with network differences varying seasonally. COOP gauges were sensitive to wind biases (no shielding), which are enhanced over winter, when COOP observed 10.7% less precipitation than USCRN. Conversely, wetting factor and gauge evaporation, which dominate in summer, were sources of bias for USCRN, leading to wetter COOP observations over warmer months. Inconsistencies in COOP observations (e.g., multiday observations, time shifts, recording errors) complicated network comparisons and led to unique bias profiles that evolved over time with changes in instrumentation and primary observer.
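The network comparison above boils down to differencing paired daily observations from co-located stations. A minimal sketch in Python, using made-up synthetic series rather than real COOP/USCRN data (the bias magnitudes are seeded to match the abstract's figures, not derived from it):

```python
import numpy as np

rng = np.random.default_rng(42)
n_days = 365

# Hypothetical paired daily series (°C) for one co-located station pair.
uscrn_tmax = 15 + 10 * np.sin(2 * np.pi * np.arange(n_days) / 365) + rng.normal(0, 3, n_days)
coop_tmax = uscrn_tmax + 0.48 + rng.normal(0, 0.5, n_days)   # COOP runs warm at tmax
uscrn_tmin = uscrn_tmax - 10 + rng.normal(0, 2, n_days)
coop_tmin = uscrn_tmin - 0.36 + rng.normal(0, 0.5, n_days)   # COOP runs cool at tmin

# Network bias = mean of daily COOP-minus-USCRN differences.
tmax_bias = np.mean(coop_tmax - uscrn_tmax)
tmin_bias = np.mean(coop_tmin - uscrn_tmin)
print(f"COOP - USCRN tmax bias: {tmax_bias:+.2f} °C")
print(f"COOP - USCRN tmin bias: {tmin_bias:+.2f} °C")
```

Grouping the same differences by month or season would expose the seasonal precipitation biases the study reports.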

Full access
Anuradha P. Hewaarachchi, Yingbo Li, Robert Lund, and Jared Rennie

Abstract

This paper develops a method for homogenizing daily temperature series. While daily temperatures are statistically more complex than annual or monthly temperatures, techniques and computational methods have been accumulating that can now model and analyze all salient statistical characteristics of daily temperature series. The goal here is to combine these techniques in an efficient manner for multiple changepoint identification in daily series; computational speed is critical as a century of daily data has over 36 500 data points. The method developed here takes into account 1) metadata, 2) reference series, 3) seasonal cycles, and 4) autocorrelation. Autocorrelation is especially important: ignoring it can degrade changepoint techniques, and sample autocorrelations of day-to-day temperature anomalies are often as large as 0.7. While daily homogenization is not conducted as commonly as monthly or annual homogenization, daily analyses provide greater detection precision as they are roughly 30 times as long as monthly records. For example, it is relatively easy to detect two changepoints less than two years apart with daily data, but virtually impossible to flag these in corresponding annually averaged data. The developed methods are shown to work in simulation studies and applied in the analysis of 46 years of daily temperatures from South Haven, Michigan.

Full access
Matthew J. Menne, Claude N. Williams, Byron E. Gleason, J. Jared Rennie, and Jay H. Lawrimore

Abstract

We describe a fourth version of the Global Historical Climatology Network (GHCN)-monthly (GHCNm) temperature dataset. Version 4 (v4) fulfills the goal of aligning GHCNm temperature values with the GHCN-daily dataset and makes use of data from previous versions of GHCNm as well as data collated under the auspices of the International Surface Temperature Initiative. GHCNm v4 has many thousands of additional stations compared to version 3 (v3) both historically and with short time-delay updates. The greater number of stations as well as the use of records with incomplete data during the base period provides for greater global coverage throughout the record compared to earlier versions. Like v3, the monthly averages are screened for random errors and homogenized to address systematic errors. New to v4, uncertainties are calculated for each station series, and regional uncertainties scale directly from the station uncertainties. Correlated errors in the station series are quantified by running the homogenization algorithm as an ensemble. Additional uncertainties associated with incomplete homogenization and use of anomalies are then incorporated into the station ensemble. Further uncertainties are quantified at the regional level, the most important of which is for incomplete spatial coverage. Overall, homogenization has a smaller impact on the v4 global trend compared to v3, though adjustments lead to much greater consistency than between the unadjusted versions. The adjusted v3 global mean therefore falls within the range of uncertainty for v4 adjusted data. Likewise, annual anomaly uncertainties for the other major independent land surface air temperature datasets overlap with GHCNm v4 uncertainties.
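The ensemble approach to quantifying correlated homogenization error can be sketched in a few lines: run the adjustment many times with perturbed choices and take the spread across members as the station uncertainty. The numbers and perturbation model below are invented for illustration, not taken from GHCNm v4:

```python
import numpy as np

rng = np.random.default_rng(1)
years = 120
true_anom = 0.01 * np.arange(years) + rng.normal(0, 0.2, years)  # synthetic station anomalies (°C)

# Hypothetical ensemble: each member applies a slightly different adjustment,
# mimicking running a homogenization algorithm with perturbed parameters.
n_members = 100
ensemble = np.array([
    true_anom + rng.normal(0, 0.05) + rng.normal(0, 0.02, years)
    for _ in range(n_members)
])

# Ensemble spread at each year is the station-level uncertainty.
station_unc = ensemble.std(axis=0)
print(f"median station uncertainty: {np.median(station_unc):.3f} °C")
```

In the real product, regional uncertainties are then built up from these station-level spreads, with extra terms for incomplete homogenization and spatial coverage.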

Open access
Jared Rennie, Jesse E. Bell, Kenneth E. Kunkel, Stephanie Herring, Heidi Cullen, and Azar M. Abadi

Abstract

Land surface air temperature products have been essential for monitoring the evolution of the climate system. Before a temperature dataset is included in such analyses, it is important that nonclimatic influences be removed or changed so that the dataset is considered homogeneous. These inhomogeneities include changes in station location, instrumentation, and observing practices. Many homogenized products exist on the monthly time scale, but few daily and weekly products exist. Recently, a submonthly homogenized dataset has been developed using data and software from NOAA’s National Centers for Environmental Information. Homogeneous daily data are useful for identification and attribution of extreme heat events. Projected temperature increases are expected to result in corresponding increases in the frequency, duration, and intensity of such events. It is also established that heat events can have significant public health impacts, including increases in mortality and morbidity. The method to identify extreme heat events using daily homogeneous temperature data is described and used to develop a climatology of heat event onset, length, and severity. This climatology encompasses nearly 3000 extreme maximum and minimum temperature events across the United States since 1901. A sizeable number of events occurred during the Dust Bowl period of the 1930s; however, trend analysis shows an increase in heat event number and length since 1951. Overnight extreme minimum temperature events are increasing more than daytime extreme maximum temperature events, and regional analysis shows that events are becoming much more prevalent in the western and southeastern parts of the United States.
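Identifying heat events in daily data typically means flagging runs of consecutive days above a threshold. The recipe below (90th percentile, runs of at least three days) is a common convention assumed here for illustration, not necessarily the paper's exact criteria, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
tmax = 28 + rng.normal(0, 3, 365)        # hypothetical daily max temperatures (°C)
tmax[190:197] += 8                       # embed a week-long hot spell

# An event here is a run of >= 3 consecutive days above the 90th percentile.
thresh = np.percentile(tmax, 90)
hot = tmax > thresh

events = []
start = None
for day, flag in enumerate(np.append(hot, False)):   # trailing False closes any open run
    if flag and start is None:
        start = day
    elif not flag and start is not None:
        if day - start >= 3:
            events.append((start, day - start))      # (onset, length)
        start = None

print("events (onset, length):", events)
```

Severity could be added by summing each event's exceedance above the threshold, giving the onset/length/severity climatology the abstract describes.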

Free access
Christopher C. Hennon, Kenneth R. Knapp, Carl J. Schreck III, Scott E. Stevens, James P. Kossin, Peter W. Thorne, Paula A. Hennon, Michael C. Kruk, Jared Rennie, Jean-Maurice Gadéa, Maximilian Striegl, and Ian Carley

Abstract

The global tropical cyclone (TC) intensity record, even in modern times, is uncertain because the vast majority of storms are only observed remotely. Forecasters determine the maximum wind speed using a patchwork of sporadic observations and remotely sensed data. A popular tool that aids forecasters is the Dvorak technique—a procedural system that estimates the maximum wind based on cloud features in IR and/or visible satellite imagery. Inherently, the application of the Dvorak procedure is open to subjectivity. Heterogeneities are also introduced into the historical record with the evolution of operational procedures, personnel, and observing platforms. These uncertainties impede our ability to identify the relationship between tropical cyclone intensities and, for example, recent climate change.

A global reanalysis of TC intensity using experts is difficult because of the large number of storms. We will show that it is possible to effectively reanalyze the global record using crowdsourcing. Through modifying the Dvorak technique into a series of simple questions that amateurs (“citizen scientists”) can answer on a website, we are working toward developing a new TC dataset that resolves intensity discrepancies in several recent TCs. Preliminary results suggest that the performance of human classifiers in some cases exceeds that of an automated Dvorak technique applied to the same data for times when the storm is transitioning into a hurricane.
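Crowdsourced reanalysis ultimately reduces to aggregating many amateur answers per satellite image into a consensus estimate. A minimal sketch with made-up image identifiers and T-numbers (the aggregation rule is an assumption, not the project's published method):

```python
from statistics import median

# Hypothetical: each storm image receives Dvorak-style intensity estimates
# from several citizen-scientist classifiers; aggregate by taking the median,
# which is robust to a stray outlier answer.
classifications = {
    "image_001": [4.0, 4.5, 4.0, 5.0, 4.5],
    "image_002": [2.5, 3.0, 2.5, 2.5, 3.5],
}
consensus = {img: median(vals) for img, vals in classifications.items()}
print(consensus)
```

Comparing such consensus values against an automated Dvorak estimate on the same imagery is how classifier skill can be benchmarked.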

Full access
Boyin Huang, Matthew J. Menne, Tim Boyer, Eric Freeman, Byron E. Gleason, Jay H. Lawrimore, Chunying Liu, J. Jared Rennie, Carl J. Schreck III, Fengying Sun, Russell Vose, Claude N. Williams, Xungang Yin, and Huai-Min Zhang

Abstract

This analysis estimates uncertainty in the NOAA global surface temperature (GST) version 5 (NOAAGlobalTemp v5) product, which consists of sea surface temperature (SST) from the Extended Reconstructed SST version 5 (ERSSTv5) and land surface air temperature (LSAT) from the Global Historical Climatology Network monthly version 4 (GHCNm v4). Total uncertainty in SST and LSAT consists of parametric and reconstruction uncertainties. The parametric uncertainty represents the dependence of SST/LSAT reconstructions on selecting 28 (6) internal parameters of SST (LSAT), and is estimated by a 1000-member ensemble from 1854 to 2016. The reconstruction uncertainty represents the residual error of using a limited number of 140 (65) modes for SST (LSAT). Uncertainty is quantified at the global scale as well as the local grid scale. Uncertainties in SST and LSAT at the local grid scale are larger in the earlier period (1880s–1910s) and during the two world wars due to sparse observations, then decrease in the modern period (1950s–2010s) due to increased data coverage. Uncertainties in SST and LSAT at the global scale are much smaller than those at the local grid scale due to error cancellations by averaging. Uncertainties are smaller in SST than in LSAT due to smaller SST variabilities. Comparisons show that GST and its uncertainty in NOAAGlobalTemp v5 are comparable to those in other internationally recognized GST products. The differences between NOAAGlobalTemp v5 and other GST products are within their uncertainties at the 95% confidence level.
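The abstract's claim that global-scale uncertainty is much smaller than grid-scale uncertainty follows from error cancellation under averaging. A toy demonstration with independent grid-cell errors (real errors are partly correlated, so the cancellation is weaker in practice; all numbers here are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n_members, n_grid = 200, 500

# Hypothetical ensemble of gridded temperature errors (°C): each member
# perturbs every grid cell independently.
grid_errors = rng.normal(0, 0.5, size=(n_members, n_grid))

grid_unc = grid_errors.std(axis=0).mean()      # typical local (grid scale) uncertainty
global_unc = grid_errors.mean(axis=1).std()    # uncertainty of the area mean
print(f"local: {grid_unc:.3f} °C   global mean: {global_unc:.3f} °C")
```

With independent errors the global-mean uncertainty shrinks roughly as 1/√N of the local value; spatial correlation in real reconstructions limits how far this reduction goes.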

Open access
Carl J. Schreck III, Stephen Bennett, Jason M. Cordeira, Jake Crouch, Jenny Dissen, Andrea L. Lang, David Margolin, Adam O’Shay, Jared Rennie, Thomas Ian Schneider, and Michael J. Ventrice

Abstract

Day-to-day volatility in natural gas markets is driven largely by variability in heating demand, which is in turn dominated by cool-season temperature anomalies over the northeastern quadrant of the United States (“Midwest–East”). Energy traders rely on temperature forecasts at horizons of 2–4 weeks to anticipate those fluctuations in demand. Forecasts from dynamical models are widely available, so the markets react quickly to changes in the model predictions. Traders often work with meteorologists who leverage teleconnections from the tropics and the Arctic to improve upon the model forecasts. This study demonstrates how natural gas prices react to Midwest–East temperatures using the anomalous winters of 2011/12 and 2013/14. These examples also illustrate how energy meteorologists use teleconnections from the Arctic and the tropics to forecast heating demand.

Winter 2011/12 was exceptionally warm, consistent with the positive Arctic Oscillation (AO). March 2012 was a fitting exclamation point on the winter as it featured the largest warm anomaly for the United States above the twentieth-century climatology of any month since 1895. The resulting lack of heating demand led to record surpluses of natural gas storage and spurred prices downward to an 11-yr low in April 2012. In sharp contrast, winter 2013/14 was unusually cold. An anomalous Alaskan ridge led to cold air being transported from Siberia into the United States, despite the AO generally being positive. The ensuing swell in heating demand exhausted the surplus natural gas inventory, and prices rose to their highest levels since the beginning of the global recession in 2008.

Full access