Search Results

You are looking at 1–10 of 30 items for

  • Author or Editor: Russell S. Vose
Russell S. Vose

Abstract

Set cover models are used to develop two reference station networks that can serve as near-term substitutes (as well as long-term backups) for the recently established Climate Reference Network (CRN) in the United States. The first network contains 135 stations distributed in a relatively uniform fashion in order to match the recommended spatial density for CRN. The second network contains 157 well-distributed stations that are generally not in urban areas in order to minimize the impact of future changes in land use. Both networks accurately reproduce the historical temperature and precipitation variations of the twentieth century.
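The station-selection idea behind such networks can be illustrated with a minimal greedy set-cover sketch. Note that this is only a toy heuristic under assumed data: the stations, grid cells, and coverage sets below are hypothetical, and the paper's actual set cover models are formal optimization formulations.

```python
# Greedy set-cover sketch: choose a small set of stations whose coverage
# areas together span every target grid cell. All names and sets are
# hypothetical; a real model would derive coverage from station coordinates
# and a spatial-density requirement.

def greedy_set_cover(universe, coverage):
    """universe: set of cell ids; coverage: dict station -> set of covered cells."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the station that covers the most still-uncovered cells.
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            raise ValueError("remaining cells cannot be covered")
        chosen.append(best)
        uncovered -= gained
    return chosen

# Toy example: 6 grid cells, 4 candidate stations.
cells = {1, 2, 3, 4, 5, 6}
cover = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
    "D": {1, 6},
}
network = greedy_set_cover(cells, cover)
print(network)  # stations "A" and "C" together cover all six cells
```

The greedy heuristic is a standard approximation for set cover; an exact model (as in the paper) would instead minimize station count subject to coverage constraints.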

Full access
Anthony Arguez and Russell S. Vose

No Abstract available.

Full access
Russell S. Vose and Matthew J. Menne

Abstract

A procedure is described that provides guidance in determining the number of stations required in a climate observing system deployed to capture temporal variability in the spatial mean of a climate parameter. The method entails reducing the density of an existing station network in a step-by-step fashion and quantifying subnetwork performance at each iteration. Under the assumption that the full network for the study area provides a reasonable estimate of the true spatial mean, this degradation process can be used to quantify the relationship between station density and network performance. The result is a systematic “cost–benefit” relationship that can be used in conjunction with practical constraints to determine the number of stations to deploy.

The approach is demonstrated using temperature and precipitation anomaly data from 4012 stations in the conterminous United States over the period 1971–2000. Results indicate that a U.S. climate observing system should consist of at least 25 quasi-uniformly distributed stations in order to reproduce interannual variability in temperature and precipitation because gains in the calculated performance measures begin to level off with higher station numbers. If trend detection is a high priority, then a higher density network of 135 evenly spaced stations is recommended. Through an analysis of long-term observations from the U.S. Historical Climatology Network, the 135-station solution is shown to exceed the climate monitoring goals of the U.S. Climate Reference Network.
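The degradation procedure can be sketched in a few lines: treat the full-network mean anomaly as "truth," thin the network step by step, and record how well each subnetwork's mean tracks it. The data below are synthetic and the correlation metric is only one plausible performance measure, not necessarily the authors' exact choice.

```python
# Density-degradation sketch: a shared regional signal plus station-level
# noise stands in for real anomaly data (the paper uses 4012 U.S. stations,
# 1971-2000). Each iteration halves the network and scores the subnetwork
# mean against the full-network mean.
import random
import statistics

random.seed(42)

n_stations, n_years = 200, 30
signal = [random.gauss(0, 1) for _ in range(n_years)]           # regional signal
stations = [[s + random.gauss(0, 0.8) for s in signal]          # station noise
            for _ in range(n_stations)]

def network_mean(subset):
    return [statistics.mean(stations[i][t] for i in subset) for t in range(n_years)]

truth = network_mean(range(n_stations))  # full network = assumed "true" mean

def correlation(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Step-by-step degradation: halve the network at each iteration.
size = n_stations
while size >= 5:
    subset = random.sample(range(n_stations), size)
    r = correlation(network_mean(subset), truth)
    print(f"{size:4d} stations: r = {r:.3f}")
    size //= 2
```

Plotting r against station count yields the "cost-benefit" curve described above, whose leveling-off point suggests a minimum network size.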

Full access
Anthony Arguez, Russell S. Vose, and Jenny Dissen
Full access
Thomas C. Peterson and Russell S. Vose

The Global Historical Climatology Network version 2 temperature database was released in May 1997. This century-scale dataset consists of monthly surface observations from ~7000 stations from around the world. This archive breaks considerable new ground in the field of global climate databases. The enhancements include 1) data for additional stations to improve regional-scale analyses, particularly in previously data-sparse areas; 2) the addition of maximum–minimum temperature data to provide climate information not available in mean temperature data alone; 3) detailed assessments of data quality to increase the confidence in research results; 4) rigorous and objective homogeneity adjustments to decrease the effect of nonclimatic factors on the time series; 5) detailed metadata (e.g., population, vegetation, topography) that allow more detailed analyses to be conducted; and 6) an infrastructure for updating the archive at regular intervals so that current climatic conditions can constantly be put into historical perspective. This paper describes these enhancements in detail.

Full access
Laurence S. Kalkstein, Paul C. Dunne, and Russell S. Vose

Abstract

Studies that utilize a long-term temperature record in determining the possibility of a global warming have led to conflicting results. We suggest that a time-series evaluation of mean annual temperatures is not sufficiently robust to determine the existence of a long-term warming. We propose the use of an air mass-based synoptic climatological approach, as it is possible that local changes within particular air masses have been obscured by the gross scale of the temperature time-series evaluations used in previous studies of this type. An automated synoptic index was constructed for the winter months at four western North American Arctic locations to determine whether the frequency of occurrence of the coldest and mildest air masses has changed and whether the physical character of these air masses has shown signs of modification over the past 40 years. It appears that the frequencies of the majority of the coldest air masses have tended to decrease, while those of the warmest air masses have increased. In addition, the very coldest air masses at each site have warmed by between 1°C and almost 4°C over the same time interval. A technique is suggested to determine whether these changes are possibly attributable to anthropogenic influences.

Full access
Imke Durre, Thomas C. Peterson, and Russell S. Vose

Abstract

The effect of the Luers–Eskridge adjustments on the homogeneity of archived radiosonde temperature observations is evaluated. Using unadjusted and adjusted radiosonde data from the Comprehensive Aerological Reference Dataset (CARDS) as well as microwave sounding unit (MSU) version-d monthly temperature anomalies, the discontinuities in differences between radiosonde and MSU temperature anomalies across times of documented changes in radiosonde type are computed for the lower to midtroposphere, mid- to upper troposphere, and lower stratosphere. For this purpose, a discontinuity is defined as a statistically significant difference between the means of radiosonde–MSU differences for the 30-month periods immediately prior to and following a documented change in radiosonde type. The magnitude and number of discontinuities based on unadjusted and adjusted radiosonde data are then compared. Since the Luers–Eskridge adjustments were designed to remove radiation and lag errors from radiosonde temperature measurements, the homogeneity of the data should improve whenever these types of errors dominate.

It is found that even though stratospheric radiosonde temperatures appear to be somewhat more homogeneous after the Luers–Eskridge adjustments have been applied, transition-related discontinuities in the troposphere are frequently amplified by the adjustments. Significant discontinuities remain in the adjusted data in all three atmospheric layers. Based on the findings of this study, it appears that the Luers–Eskridge adjustments do not render upper-air temperature records sufficiently homogeneous for climate change analyses. Given that the method was designed to adjust only for radiation and lag errors in radiosonde temperature measurements, its relative ineffectiveness at producing homogeneous time series is likely to be caused by 1) an inaccurate calculation of the radiation or lag errors and/or 2) the presence of other errors in the data that contribute significantly to observed discontinuities in the time series.
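The discontinuity test defined above, a difference of means across the 30-month windows on either side of a documented change, can be sketched as a pooled two-sample t-test. The difference series below is synthetic, with a deliberate 0.5° step; the step size and noise level are illustrative only.

```python
# Sketch of the windowed discontinuity test: compare the mean of the
# radiosonde-minus-MSU difference series in the 30 months before a documented
# instrument change with the 30 months after it.
import math
import random

random.seed(0)

before = [random.gauss(0.0, 0.2) for _ in range(30)]
after = [random.gauss(0.5, 0.2) for _ in range(30)]  # step from the change

def t_statistic(a, b):
    """Pooled-variance two-sample t statistic for a mean shift from a to b."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (mb - ma) / math.sqrt(sp2 * (1 / na + 1 / nb))

t = t_statistic(before, after)
# With 58 degrees of freedom, |t| greater than about 2.0 is significant
# at the 5% level.
print(f"t = {t:.2f}, discontinuity detected: {abs(t) > 2.0}")
```

Running the same test on the adjusted series shows whether an adjustment reduced or, as the paper finds for the troposphere, amplified the discontinuity.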

Full access
Matthew J. Menne, Claude N. Williams Jr., and Russell S. Vose

In support of climate monitoring and assessments, the National Oceanic and Atmospheric Administration's (NOAA's) National Climatic Data Center has developed an improved version of the U.S. Historical Climatology Network temperature dataset (HCN version 2). In this paper, the HCN version 2 temperature data are described in detail, with a focus on the quality-assured data sources and the systematic bias adjustments. The bias adjustments are discussed in the context of their effect on U.S. temperature trends from the period 1895–2007 and in terms of the differences between version 2 and its widely used predecessor (now referred to as HCN version 1). Evidence suggests that the collective effect of changes in observation practice at U.S. HCN stations is systematic and of the same order of magnitude as the background climate signal. For this reason, bias adjustments are essential to reducing the uncertainty in U.S. climate trends. The largest biases in the HCN are shown to be associated with changes to the time of observation and with the widespread changeover from liquid-in-glass thermometers to the maximum–minimum temperature system (MMTS). With respect to HCN version 1, HCN version 2 trends in maximum temperatures are similar, while minimum temperature trends are somewhat smaller because of 1) an apparent overcorrection in HCN version 1 for the MMTS instrument change and 2) the systematic effect of undocumented station changes, which were not addressed in HCN version 1.

Full access
Imke Durre, Matthew J. Menne, and Russell S. Vose

Abstract

The evaluation strategies outlined in this paper constitute a set of tools beneficial to the development and documentation of robust automated quality assurance (QA) procedures. Traditionally, thresholds for the QA of climate data have been based on target flag rates or statistical confidence limits. However, these approaches do not necessarily quantify a procedure’s effectiveness at detecting true errors in the data. Rather, as illustrated by way of an “extremes check” for daily precipitation totals, information on the performance of a QA test is best obtained through a systematic manual inspection of samples of flagged values combined with a careful analysis of geographical and seasonal patterns of flagged observations. Such an evaluation process not only helps to document the effectiveness of each individual test, but, when applied repeatedly throughout the development process, it also aids in choosing the optimal combination of QA procedures and associated thresholds. In addition, the approach described here constitutes a mechanism for reassessing system performance whenever revisions are made following initial development.
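A minimal sketch of this evaluation style for an extremes check follows. Both the synthetic precipitation data and the 254 mm threshold are illustrative assumptions, not values from the paper; the point is the workflow of computing a flag rate, summarizing flags by month to expose seasonal patterns, and drawing a random sample for manual review.

```python
# QA-evaluation sketch: apply an extremes check, then inspect the flags
# rather than trusting the flag rate alone.
import random
from collections import Counter

random.seed(1)

# Synthetic daily precipitation totals (mm), each tagged with its month.
obs = [(random.randrange(1, 13), random.expovariate(1 / 50)) for _ in range(5000)]

THRESHOLD = 254.0  # hypothetical extremes-check limit
flagged = [(m, p) for m, p in obs if p > THRESHOLD]

flag_rate = len(flagged) / len(obs)
by_month = Counter(m for m, _ in flagged)           # seasonal pattern of flags
sample_for_review = random.sample(flagged, min(5, len(flagged)))

print(f"flag rate: {flag_rate:.4%}")
print("flags by month:", dict(sorted(by_month.items())))
print("sample for manual review:", sample_for_review)
```

Manually classifying each sampled flag as a true error or a valid extreme, and repeating the exercise across candidate thresholds, is the systematic evaluation the abstract describes.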

Full access
Imke Durre, Russell S. Vose, and David B. Wuertz

Abstract

This paper presents a description of the fully automated quality-assurance (QA) procedures that are being applied to temperatures in the Integrated Global Radiosonde Archive (IGRA). Because these data are routinely used for monitoring variations in tropospheric temperature, it is of critical importance that the system be able to detect as many errors as possible without falsely identifying true meteorological events as erroneous. Three steps were taken to achieve such robust performance. First, 14 tests for excessive persistence, climatological outliers, and vertical and temporal inconsistencies were developed and arranged into a deliberate sequence so as to render the system capable of detecting a variety of data errors. Second, manual review of random samples of flagged values was used to set the “thresholds” for each individual check so as to minimize the number of valid values that are mistakenly identified as errors. The performance of the system as a whole was also assessed through manual inspection of random samples of the quality-assured data. As a result of these efforts, the IGRA temperature QA procedures effectively remove the grossest errors while maintaining a false-positive rate of approximately 10%.
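One of the simpler checks in such a sequence, a climatological outlier test, can be sketched as below. The station values and the z-score threshold are illustrative assumptions, not IGRA's actual configuration.

```python
# Climatological outlier sketch: flag a value that falls too many standard
# deviations from the historical values for the same month and pressure level.
import statistics

def outlier_check(history, value, max_z=5.0):
    """Return True if `value` is more than `max_z` standard deviations
    from the mean of the historical values (the climatology)."""
    mean = statistics.mean(history)
    sd = statistics.pstdev(history)
    if sd == 0:
        return False  # no spread in the climatology; cannot judge
    return abs(value - mean) / sd > max_z

# January 500-hPa temperatures (degrees C) at a hypothetical station.
january_500hpa = [-22.1, -21.5, -23.0, -20.8, -22.7, -21.9, -23.4, -22.2]
print(outlier_check(january_500hpa, -22.5))  # ordinary value -> False
print(outlier_check(january_500hpa, 5.0))    # gross error -> True
```

Tuning `max_z` by manually reviewing samples of flagged values, as the abstract describes, is how such a check trades error detection against falsely flagging valid extremes.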

Full access