Search Results

Showing 1–10 of 21 items for: Author or Editor: Russell S. Vose
Russell S. Vose

Abstract

Set cover models are used to develop two reference station networks that can serve as near-term substitutes (as well as long-term backups) for the recently established Climate Reference Network (CRN) in the United States. The first network contains 135 stations distributed in a relatively uniform fashion in order to match the recommended spatial density for CRN. The second network contains 157 well-distributed stations that are generally not in urban areas in order to minimize the impact of future changes in land use. Both networks accurately reproduce the historical temperature and precipitation variations of the twentieth century.
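
As a rough illustration of the set-cover framing the abstract alludes to, the sketch below applies a greedy set-cover heuristic to hypothetical station and grid-cell data; the paper's actual optimization model and station list are not reproduced here.

```python
# Minimal greedy set-cover sketch (illustrative only). Each candidate
# station "covers" the grid cells whose climate signal it reproduces well;
# the goal is to cover all cells with as few stations as possible.

def greedy_set_cover(universe, coverage):
    """universe: set of grid cells; coverage: {station: set of cells}."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the station covering the most still-uncovered cells.
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            break  # remaining cells cannot be covered by any station
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

# Hypothetical toy data: 6 grid cells, 4 candidate stations.
cells = {1, 2, 3, 4, 5, 6}
stations = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}}
picked, missed = greedy_set_cover(cells, stations)
print(picked, missed)  # e.g. ['A', 'C'] and an empty set
```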

Full access
Russell S. Vose and Matthew J. Menne

Abstract

A procedure is described that provides guidance in determining the number of stations required in a climate observing system deployed to capture temporal variability in the spatial mean of a climate parameter. The method entails reducing the density of an existing station network in a step-by-step fashion and quantifying subnetwork performance at each iteration. Under the assumption that the full network for the study area provides a reasonable estimate of the true spatial mean, this degradation process can be used to quantify the relationship between station density and network performance. The result is a systematic “cost–benefit” relationship that can be used in conjunction with practical constraints to determine the number of stations to deploy.

The approach is demonstrated using temperature and precipitation anomaly data from 4012 stations in the conterminous United States over the period 1971–2000. Results indicate that a U.S. climate observing system should consist of at least 25 quasi-uniformly distributed stations in order to reproduce interannual variability in temperature and precipitation because gains in the calculated performance measures begin to level off with higher station numbers. If trend detection is a high priority, then a higher density network of 135 evenly spaced stations is recommended. Through an analysis of long-term observations from the U.S. Historical Climatology Network, the 135-station solution is shown to exceed the climate monitoring goals of the U.S. Climate Reference Network.
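
A minimal sketch of the degradation idea, using synthetic station anomalies and random (rather than quasi-uniform) subsampling; all numbers are illustrative, not the paper's 4012-station dataset.

```python
# Thin the network step by step and measure how well each subnetwork's
# spatial-mean anomaly series tracks the full network's "true" mean.
import numpy as np

rng = np.random.default_rng(0)
n_stations, n_years = 200, 30
common = rng.normal(0, 1, n_years)                          # shared signal
anoms = common + rng.normal(0, 0.8, (n_stations, n_years))  # station noise

full_mean = anoms.mean(axis=0)  # assumed "truth" for the study area

for n_keep in (200, 100, 50, 25, 10, 5):
    keep = rng.choice(n_stations, size=n_keep, replace=False)
    sub_mean = anoms[keep].mean(axis=0)
    r = np.corrcoef(full_mean, sub_mean)[0, 1]
    rmse = np.sqrt(np.mean((full_mean - sub_mean) ** 2))
    print(f"{n_keep:4d} stations: r={r:.3f}  rmse={rmse:.3f}")
```

Plotting these scores against station count traces out the "cost–benefit" curve described above: performance gains level off once the subnetwork is dense enough.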

Full access
Laurence S. Kalkstein, Paul C. Dunne, and Russell S. Vose

Abstract

Studies that utilize a long-term temperature record in determining the possibility of a global warming have led to conflicting results. We suggest that a time-series evaluation of mean annual temperatures is not sufficiently robust to determine the existence of a long-term warming. We propose the utilization of an air mass–based synoptic climatological approach, as it is possible that local changes within particular air masses have been obscured by the gross scale of temperature time-series evaluations used in previous studies of this type. An automated synoptic index was constructed for the winter months at four western North American Arctic locations to determine whether the frequency of occurrence of the coldest and mildest air masses has changed and whether the physical character of these air masses has shown signs of modification over the past 40 years. It appears that the frequencies of the majority of the coldest air masses have tended to decrease, while those of the warmest air masses have increased. In addition, the very coldest air masses at each site have warmed by between 1°C and almost 4°C over the same time interval. A technique is suggested to determine whether these changes are possibly attributable to anthropogenic influences.
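
The frequency analysis can be illustrated with a toy trend computation on synthetic winter air-mass counts; the paper's synoptic index itself is not reproduced here.

```python
# Fit a linear trend to hypothetical winter-day counts of one "coldest"
# air-mass type at a single site over ~40 winters.
import numpy as np

years = np.arange(1948, 1988)
rng = np.random.default_rng(1)
cold_days = 25 - 0.15 * (years - years[0]) + rng.normal(0, 3, years.size)

slope, intercept = np.polyfit(years, cold_days, 1)
print(f"trend: {10 * slope:+.1f} days per decade")  # negative => declining
```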

Full access
Imke Durre, Thomas C. Peterson, and Russell S. Vose

Abstract

The effect of the Luers–Eskridge adjustments on the homogeneity of archived radiosonde temperature observations is evaluated. Using unadjusted and adjusted radiosonde data from the Comprehensive Aerological Reference Dataset (CARDS) as well as microwave sounding unit (MSU) version-d monthly temperature anomalies, the discontinuities in differences between radiosonde and MSU temperature anomalies across times of documented changes in radiosonde type are computed for the lower to midtroposphere, mid- to upper troposphere, and lower stratosphere. For this purpose, a discontinuity is defined as a statistically significant difference between means of radiosonde–MSU differences for the 30-month periods immediately prior to and following a documented change in radiosonde type. The magnitude and number of discontinuities based on unadjusted and adjusted radiosonde data are then compared. Since the Luers–Eskridge adjustments have been designed to remove radiation and lag errors from radiosonde temperature measurements, the homogeneity of the data should improve whenever these types of errors dominate.

It is found that even though stratospheric radiosonde temperatures appear to be somewhat more homogeneous after the Luers–Eskridge adjustments have been applied, transition-related discontinuities in the troposphere are frequently amplified by the adjustments. Significant discontinuities remain in the adjusted data in all three atmospheric layers. Based on the findings of this study, it appears that the Luers–Eskridge adjustments do not render upper-air temperature records sufficiently homogeneous for climate change analyses. Given that the method was designed to adjust only for radiation and lag errors in radiosonde temperature measurements, its relative ineffectiveness at producing homogeneous time series is likely to be caused by 1) an inaccurate calculation of the radiation or lag errors and/or 2) the presence of other errors in the data that contribute significantly to observed discontinuities in the time series.
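
A minimal sketch of the discontinuity test as defined above, applied to synthetic radiosonde-minus-MSU difference series with an artificial 0.4-K step; window lengths follow the 30-month definition, but the data and threshold are illustrative.

```python
# Compare the means of the 30-month windows before and after a documented
# radiosonde change using a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
before = rng.normal(0.0, 0.3, 30)   # 30 months before the change
after = rng.normal(0.4, 0.3, 30)    # 30 months after (0.4-K shift injected)

t, p = stats.ttest_ind(before, after, equal_var=False)
step = after.mean() - before.mean()
if p < 0.05:
    print(f"discontinuity: {step:+.2f} K (p={p:.3g})")
```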

Full access
Matthew J. Menne, Claude N. Williams Jr., and Russell S. Vose

In support of climate monitoring and assessments, the National Oceanic and Atmospheric Administration's (NOAA's) National Climatic Data Center has developed an improved version of the U.S. Historical Climatology Network temperature dataset (HCN version 2). In this paper, the HCN version 2 temperature data are described in detail, with a focus on the quality-assured data sources and the systematic bias adjustments. The bias adjustments are discussed in the context of their effect on U.S. temperature trends over the period 1895–2007 and in terms of the differences between version 2 and its widely used predecessor (now referred to as HCN version 1). Evidence suggests that the collective effect of changes in observation practice at U.S. HCN stations is systematic and of the same order of magnitude as the background climate signal. For this reason, bias adjustments are essential to reducing the uncertainty in U.S. climate trends. The largest biases in the HCN are shown to be associated with changes to the time of observation and with the widespread changeover from liquid-in-glass thermometers to the maximum–minimum temperature system (MMTS). With respect to HCN version 1, HCN version 2 trends in maximum temperatures are similar, while minimum temperature trends are somewhat smaller because of 1) an apparent overcorrection in HCN version 1 for the MMTS instrument change and 2) the systematic effect of undocumented station changes, which were not addressed in HCN version 1.
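
The basic step-adjustment idea can be sketched as follows. This is not NOAA's pairwise homogenization algorithm, only a single-series toy with a synthetic bias injected at a documented break.

```python
# Estimate the offset across a known changepoint (e.g., an MMTS changeover)
# as the difference of segment means, then remove it.
import numpy as np

rng = np.random.default_rng(3)
temps = rng.normal(12.0, 0.5, 120)   # 120 months of synthetic values
break_idx = 60
temps[break_idx:] -= 0.3             # instrument change introduces a bias

offset = temps[break_idx:].mean() - temps[:break_idx].mean()
adjusted = temps.copy()
adjusted[break_idx:] -= offset       # align the later segment to the earlier
# (Operational homogenization typically adjusts the earlier segment toward
# present-day practice instead, and estimates offsets from neighbor pairs.)
print(f"estimated offset: {offset:+.2f} degC")
```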

Full access
Imke Durre, Matthew J. Menne, and Russell S. Vose

Abstract

The evaluation strategies outlined in this paper constitute a set of tools beneficial to the development and documentation of robust automated quality assurance (QA) procedures. Traditionally, thresholds for the QA of climate data have been based on target flag rates or statistical confidence limits. However, these approaches do not necessarily quantify a procedure’s effectiveness at detecting true errors in the data. Rather, as illustrated by way of an “extremes check” for daily precipitation totals, information on the performance of a QA test is best obtained through a systematic manual inspection of samples of flagged values combined with a careful analysis of geographical and seasonal patterns of flagged observations. Such an evaluation process not only helps to document the effectiveness of each individual test, but, when applied repeatedly throughout the development process, it also aids in choosing the optimal combination of QA procedures and associated thresholds. In addition, the approach described here constitutes a mechanism for reassessing system performance whenever revisions are made following initial development.
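
A toy version of this evaluation loop, with simulated expert verdicts standing in for the manual review of flagged samples; all counts are hypothetical.

```python
# Manually label a random sample of flagged values, then estimate the
# check's false-positive rate (fraction of flagged values that were valid).
import random

random.seed(4)
flagged = list(range(5000))              # ids of values flagged by a check
sample = random.sample(flagged, 100)     # subset sent for expert review
# Hypothetical expert verdicts: True = value was actually valid.
verdicts = [random.random() < 0.08 for _ in sample]

fp_rate = sum(verdicts) / len(verdicts)
print(f"estimated false-positive rate: {fp_rate:.0%}")
# If fp_rate is too high, loosen the threshold and repeat the evaluation.
```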

Full access
Imke Durre, Russell S. Vose, and David B. Wuertz

Abstract

This paper presents a description of the fully automated quality-assurance (QA) procedures that are being applied to temperatures in the Integrated Global Radiosonde Archive (IGRA). Because these data are routinely used for monitoring variations in tropospheric temperature, it is of critical importance that the system be able to detect as many errors as possible without falsely identifying true meteorological events as erroneous. Three steps were taken to achieve such robust performance. First, 14 tests for excessive persistence, climatological outliers, and vertical and temporal inconsistencies were developed and arranged into a deliberate sequence so as to render the system capable of detecting a variety of data errors. Second, manual review of random samples of flagged values was used to set the “thresholds” for each individual check so as to minimize the number of valid values that are mistakenly identified as errors. The performance of the system as a whole was also assessed through manual inspection of random samples of the quality-assured data. As a result of these efforts, the IGRA temperature QA procedures effectively remove the grossest errors while maintaining a false-positive rate of approximately 10%.
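
A minimal sketch of such an ordered sequence, with a persistence check feeding a climatological-outlier check so that stuck values do not contaminate the outlier statistics. Thresholds and data are illustrative, not IGRA's actual settings.

```python
# Two checks run in a deliberate order over a synthetic temperature series.
import numpy as np

def persistence_check(x, run_len=10):
    """Flag runs of run_len or more identical values ("flatliners")."""
    flags = np.zeros(x.size, dtype=bool)
    for i in range(x.size - run_len + 1):
        window = x[i:i + run_len]
        if np.all(window == window[0]):
            flags[i:i + run_len] = True
    return flags

def outlier_check(x, valid, n_sigma=5.0):
    """Flag values far from the mean of the not-yet-flagged data."""
    mu, sd = x[valid].mean(), x[valid].std()
    return np.abs(x - mu) > n_sigma * sd

rng = np.random.default_rng(5)
temps = rng.normal(-50.0, 5.0, 500)   # synthetic 100-hPa temperatures
temps[100:115] = -50.0                # a stuck sensor
temps[300] = 30.0                     # a gross outlier

p_flags = persistence_check(temps)
o_flags = outlier_check(temps, valid=~p_flags)
print(p_flags.sum(), o_flags.sum(), "values flagged")
```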

Full access
Imke Durre, Russell S. Vose, and David B. Wuertz

Abstract

This paper provides a general description of the Integrated Global Radiosonde Archive (IGRA), a new radiosonde dataset from the National Climatic Data Center (NCDC). IGRA consists of radiosonde and pilot balloon observations at more than 1500 globally distributed stations with varying periods of record, many of which extend from the 1960s to present. Observations include pressure, temperature, geopotential height, dewpoint depression, wind direction, and wind speed at standard, surface, tropopause, and significant levels.

IGRA contains quality-assured data from 11 different sources. Rigorous procedures are employed to ensure proper station identification, eliminate duplicate levels within soundings, and select one sounding for every station, date, and time. The quality assurance algorithms check for format problems, physically implausible values, internal inconsistencies among variables, runs of values across soundings and levels, climatological outliers, and temporal and vertical inconsistencies in temperature. The performance of the various checks was evaluated by careful inspection of selected soundings and time series.

In its final form, IGRA is the largest and most comprehensive dataset of quality-assured radiosonde observations freely available. Its temporal and spatial coverage is most complete over the United States, western Europe, Russia, and Australia. The vertical resolution and extent of soundings improve significantly over time, with nearly three-quarters of all soundings reaching up to at least 100 hPa by 2003. IGRA data are updated on a daily basis and are available online from NCDC as both individual soundings and monthly means.
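
The duplicate-level elimination described above might look roughly like the following; the record layout is hypothetical, not IGRA's actual format.

```python
# Keep a single record per pressure level within one sounding.
def dedupe_levels(levels):
    """levels: list of dicts with 'pressure' (hPa) and other variables."""
    seen = {}
    for lev in levels:
        p = lev["pressure"]
        if p not in seen:
            seen[p] = lev          # keep the first occurrence of each level
    return sorted(seen.values(), key=lambda l: -l["pressure"])

sounding = [
    {"pressure": 1000, "temp": 15.2},
    {"pressure": 850, "temp": 8.1},
    {"pressure": 850, "temp": 8.1},   # duplicate level
    {"pressure": 500, "temp": -21.3},
]
print(dedupe_levels(sounding))        # three levels, surface first
```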

Full access
Eugene R. Wahl, Henry F. Diaz, Russell S. Vose, and Wendy S. Gross

Abstract

The recent dryness in California was unprecedented in the instrumental record. This article employs spatially explicit precipitation reconstructions for California in combination with instrumental data to provide perspective on this event since 1571. The period 2012–15 stands out as particularly extreme in the southern Central Valley and south coast regions, which likely experienced unprecedented precipitation deficits over this time, apart from considerations of increasing temperatures and drought metrics that combine temperature and moisture information. Some areas lost more than two years’ average moisture delivery during these four years, and full recovery to long-term average moisture delivery could typically take up to several decades in the hardest-hit areas. These results highlight the value of the additional centuries of information provided by the paleo record, which indicates the shorter instrumental record may underestimate the statewide recovery time by over 30%. The extreme El Niño that occurred in 2015/16 ameliorated recovery in much of the northern half of the state, and in the record since 1571, very-strong-to-extreme El Niños during the first year after a 2012–15-type event have reduced statewide recovery times by approximately half. The southern part of California did not experience the high precipitation anticipated, and the multicentury analysis suggests the north-wet–south-dry pattern for such an El Niño was a low-likelihood anomaly. Recent wetness in California motivated evaluation of recovery times when the first two years are relatively wet, suggesting the state is benefiting from a one-in-five (or lower) likelihood situation: the likelihood of full recovery within two years is ~1% in the instrumental data and even lower in the reconstruction data.
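
For a back-of-envelope sense of the recovery-time arithmetic, the sketch below draws synthetic annual precipitation anomalies until a two-year deficit is repaid; all numbers are hypothetical, not the paper's reconstruction values.

```python
# Simulate how many years of random annual anomalies it takes to repay a
# deficit equal to two years' average moisture delivery.
import numpy as np

rng = np.random.default_rng(6)
mean_precip, sd = 500.0, 250.0   # mm per year; hypothetical climate
deficit = 2.0 * mean_precip      # "two years' average moisture" lost

years_needed = []
for _ in range(2000):
    balance, years = -deficit, 0
    while balance < 0 and years < 500:   # cap runaway random walks
        balance += rng.normal(mean_precip, sd) - mean_precip  # anomaly
        years += 1
    years_needed.append(years)
print(f"median recovery time: {np.median(years_needed):.0f} years")
```

Even this crude setup yields median recovery times of a few decades, which is the qualitative point the abstract makes about the hardest-hit areas.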

Full access
Imke Durre, Matthew J. Menne, Byron E. Gleason, Tamara G. Houston, and Russell S. Vose

Abstract

This paper describes a comprehensive set of fully automated quality assurance (QA) procedures for observations of daily surface temperature, precipitation, snowfall, and snow depth. The QA procedures are being applied operationally to the Global Historical Climatology Network (GHCN)-Daily dataset. Since these data are used for analyzing and monitoring variations in extremes, the QA system is designed to detect as many errors as possible while maintaining a low probability of falsely identifying true meteorological events as erroneous. The system consists of 19 carefully evaluated tests that detect duplicate data, climatological outliers, and various inconsistencies (internal, temporal, and spatial). Manual review of random samples of the values flagged as errors is used to set the threshold for each procedure such that its false-positive rate, or fraction of valid values identified as errors, is minimized. In addition, the tests are arranged in a deliberate sequence in which the performance of the later checks is enhanced by the error detection capabilities of the earlier tests. Based on an assessment of each individual check and a final evaluation for each element, the system identifies 3.6 million (0.24%) of the more than 1.5 billion maximum/minimum temperature, precipitation, snowfall, and snow depth values in GHCN-Daily as errors, has a false-positive rate of 1%−2%, and is effective at detecting both the grossest errors as well as more subtle inconsistencies among elements.
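
One of the internal-consistency tests can be sketched as below: flag days whose reported maximum temperature falls below the minimum. The record layout is illustrative, not GHCN-Daily's actual format.

```python
# Flag internally inconsistent daily records (tmax < tmin).
records = [
    {"date": "2009-07-01", "tmax": 31.2, "tmin": 18.4},
    {"date": "2009-07-02", "tmax": 12.0, "tmin": 19.1},  # tmax < tmin
    {"date": "2009-07-03", "tmax": 29.8, "tmin": 17.6},
]

flagged = [r for r in records if r["tmax"] < r["tmin"]]
for r in flagged:
    print(f'{r["date"]}: tmax {r["tmax"]} < tmin {r["tmin"]} -> flag both')
```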

Full access