Search Results

You are looking at 1–10 of 20 items for Author or Editor: Anthony Argüez
Anthony Arguez and Scott Applequist

Abstract

NOAA released the new 1981–2010 climate normals in July 2011. These included monthly and daily normals of minimum and maximum temperature. Monthly normals were computed from monthly temperature values that were corrected for biases (i.e., homogenized) due to changes in observing practices over the course of the normals period (station moves, changes in observation time, and changes in instrumentation). Daily temperature observations, however, are not homogenized, which could lead to inconsistencies between the daily and monthly normals. This study offers a constrained harmonic technique that forces the daily temperature normals to be consistent with the monthly temperature normals. This approach replaces the cubic spline interpolation of monthly temperature normals that was used to compute earlier versions of NOAA's daily temperature normals. It effectively passes the homogenization applied at the monthly scale down to the daily scale, resulting in a smooth annual cycle devoid of day-to-day sampling variability and intermonth discontinuities.
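
A minimal sketch of the general idea of a constrained harmonic fit is given below; it is not NOAA's operational code, and the number of harmonics, the fixed 365-day calendar, and the synthetic inputs are assumptions made for illustration. The fit minimizes the misfit to the raw daily normals while forcing the monthly averages of the fitted curve to reproduce the homogenized monthly normals.

```python
# Hedged sketch of a constrained harmonic fit of daily temperature normals.
import numpy as np

days_in_month = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
t = np.arange(365)

# Design matrix: a mean term plus annual harmonics evaluated on each day of year.
n_harm = 8                                     # illustrative choice
cols = [np.ones(365)]
for k in range(1, n_harm + 1):
    cols += [np.cos(2 * np.pi * k * t / 365), np.sin(2 * np.pi * k * t / 365)]
A = np.column_stack(cols)                      # shape (365, 2*n_harm + 1)

# Constraint: monthly averages of the fitted daily curve must equal the
# (homogenized) monthly normals.
C = np.zeros((12, 365))
start = 0
for m, ndays in enumerate(days_in_month):
    C[m, start:start + ndays] = 1.0 / ndays
    start += ndays
CA = C @ A

def constrained_harmonic_fit(daily_raw, monthly_norm):
    """Least squares harmonic fit to daily_raw subject to CA @ beta = monthly_norm."""
    n = A.shape[1]
    # Equality-constrained least squares solved through its KKT system.
    K = np.block([[A.T @ A, CA.T], [CA, np.zeros((12, 12))]])
    rhs = np.concatenate([A.T @ daily_raw, monthly_norm])
    beta = np.linalg.solve(K, rhs)[:n]
    return A @ beta                            # smooth daily curve, consistent with monthly values

# Synthetic example: noisy raw daily normals and stand-in homogenized monthly normals.
rng = np.random.default_rng(0)
daily_raw = 15 - 12 * np.cos(2 * np.pi * (t - 20) / 365) + rng.normal(0, 1.5, 365)
monthly_norm = C @ daily_raw + rng.normal(0, 0.2, 12)
daily_normals = constrained_harmonic_fit(daily_raw, monthly_norm)
```

Handling the monthly constraints exactly (rather than as penalties) is what passes the monthly-scale homogenization down to the daily scale in this formulation.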

Full access
Anthony Arguez and Russell S. Vose

No abstract available.

Full access
Richard P. James and Anthony Arguez

Abstract

The climatological daily variance of temperature is sometimes estimated from observed temperatures within a centered window of dates. This method overestimates the true variance of daily temperature when the rate of seasonal temperature change is large, because the seasonal change within the date window introduces additional variance. The contribution of the seasonal change may be removed by performing the variance calculation using daily temperature anomalies, leading to a bias-free estimate of variance.

The difference between the variance estimation methods is illustrated using both idealized simulations of temperature variability and observed historical temperature data. The simulation results confirm that removing the climatological temperature cycle eliminates bias in the variance estimates. For several U.S. midlatitude locations, the difference in estimated standard deviation of daily mean temperature is on the order of a few percent near the seasonal peaks in climatological temperature change, but the maximum difference is larger in highly continental climates. These differences are shown to be significant when estimating the probability of temperature extremes under the assumption of a Gaussian distribution.
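
The contrast between the two estimators can be illustrated with a small synthetic experiment; the window width, noise level, annual cycle, and chosen date below are assumptions, not the authors' configuration.

```python
# Hedged sketch: windowed variance from raw temperatures versus from anomalies.
import numpy as np

rng = np.random.default_rng(1)
n_years, half_width = 50, 7                    # +/- 7-day centered window of dates
t = np.arange(365)
clim = 10 - 15 * np.cos(2 * np.pi * (t + 10) / 365)          # climatological cycle
temps = clim + rng.normal(0, 3.0, size=(n_years, 365))       # true daily std = 3.0

def windowed_std(data, center):
    """Standard deviation of all values whose day of year falls in the centered window."""
    days = np.arange(center - half_width, center + half_width + 1) % 365
    return data[:, days].std(ddof=1)

doy = 90                                        # spring, when the seasonal change is rapid
raw_std = windowed_std(temps, doy)              # inflated by the within-window seasonal change
anom_std = windowed_std(temps - clim, doy)      # bias-free estimate from anomalies
print(f"raw-window std: {raw_std:.2f}  anomaly-based std: {anom_std:.2f}")
```

With these settings the raw-window estimate exceeds the anomaly-based estimate by a few percent, consistent in character with the midlatitude results described above.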

Full access
Anthony Arguez, Peng Yu, and James J. O’Brien

Abstract

Time series filtering (e.g., smoothing) can be done in the spectral domain without loss of endpoints. However, filtering is commonly performed in the time domain using convolutions, resulting in lost points near the series endpoints. Multiple incarnations of a least squares minimization approach are developed that retain the endpoint intervals that are normally discarded due to filtering with convolutions in the time domain. The techniques minimize the error between the predetermined frequency response function (FRF) of interior points, a fundamental property of all filters, and the FRFs that are to be determined for each position in the endpoint zone. The least squares techniques are differentiated by their constraints: 1) unconstrained, 2) an equal-mean constraint, and 3) an equal-variance constraint. The equal-mean constraint forces the new weights to sum to the same value as the predetermined weights. The equal-variance constraint forces the new weights to be such that, after being convolved with the input values, the expected time series variance is preserved. The three least squares methods are each tested under three separate filtering scenarios [involving Arctic Oscillation (AO), Madden–Julian oscillation (MJO), and El Niño–Southern Oscillation (ENSO) time series] and compared to each other as well as to the spectral filtering method, the standard of comparison. The results indicate that all four methods (including the spectral method) possess skill at determining suitable endpoint estimates. However, both the unconstrained and equal-mean schemes exhibit bias toward zero near the terminal ends due to problems with appropriating variance. The equal-variance method does not show evidence of this attribute and was never the worst performer. The equal-variance method showed great promise in the ENSO scenario involving a 5-month running mean filter, and performed at least on par with the other realistic methods for almost all time series positions in all three filtering scenarios.
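
The sketch below illustrates the equal-mean variant in a simplified form. It assumes a 5-point running-mean interior filter and a discretized frequency grid, neither of which is taken from the paper; it is meant only to show the FRF-matching idea, not the authors' implementation.

```python
# Hedged sketch: equal-mean least squares endpoint weights that match the
# interior filter's frequency response function (FRF) as closely as possible.
import numpy as np

N = 2                                          # 5-point running mean as the interior filter
w_int = np.full(2 * N + 1, 1.0 / (2 * N + 1))
offsets_int = np.arange(-N, N + 1)

freqs = np.linspace(0.0, 0.5, 501)             # cycles per time step

def frf(weights, offsets):
    """Frequency response H(f) = sum_j w_j exp(-i 2*pi*f*j)."""
    return np.exp(-2j * np.pi * np.outer(freqs, offsets)) @ weights

H_target = frf(w_int, offsets_int)

def endpoint_weights(p):
    """Weights for a point p steps from the series end (p < N), equal-mean constrained."""
    offsets = np.arange(-N, p + 1)             # only these lags are available near the end
    E = np.exp(-2j * np.pi * np.outer(freqs, offsets))
    # Real least squares system: stack real and imaginary parts of the FRF mismatch.
    A = np.vstack([E.real, E.imag])
    b = np.concatenate([H_target.real, H_target.imag])
    # Equal-mean constraint: new weights sum to the same value as the interior weights.
    c = np.ones((1, offsets.size))
    K = np.block([[A.T @ A, c.T], [c, np.zeros((1, 1))]])
    rhs = np.concatenate([A.T @ b, [w_int.sum()]])
    return np.linalg.solve(K, rhs)[:offsets.size]

w_edge = endpoint_weights(p=0)                 # weights usable at the very last point
print(w_edge, w_edge.sum())
```

Dropping the sum constraint gives the unconstrained scheme; an analogous quadratic constraint on the output variance gives the equal-variance scheme.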

Full access
Anthony Arguez, Russell S. Vose, and Jenny Dissen
Full access
Anthony Arguez, Mark A. Bourassa, and James J. O’Brien

Abstract

Wind data from the SeaWinds instrument on NASA’s Quick Scatterometer (QuikSCAT) satellite are investigated to ascertain how well the surface manifestation of the Madden–Julian oscillation (MJO) can be resolved. The MJO signal is detected in nonfiltered gridded data using extended EOF analysis of the zonal wind field, although it is overshadowed by annual, semiannual, and monsoon-related modes. After bandpass filtering with Lanczos weights, MJO signals are clearly detected in several kinematic quantities, including the zonal wind speed, the zonal pseudostress, and the velocity potential. Extraction of the MJO using QuikSCAT winds compares favorably with extraction using NCEP Reanalysis 2, except that the QuikSCAT signal appears to be more robust.
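
A minimal sketch of time-domain Lanczos bandpass filtering of the sort mentioned above is given below; the 30–70-day band, the 201-point window, and daily sampling are illustrative assumptions, not the study's exact configuration.

```python
# Hedged sketch: Lanczos bandpass weights built as the difference of two low-pass filters.
import numpy as np

def lanczos_lowpass_weights(n, cutoff):
    """Symmetric low-pass Lanczos weights for lags -n..n; cutoff in cycles per step."""
    k = np.arange(-n, n + 1)
    sigma = np.sinc(k / n)                     # Lanczos smoothing (sigma) factor
    w = 2 * cutoff * np.sinc(2 * cutoff * k) * sigma
    return w / w.sum()                         # normalize to unit sum

n = 100                                        # 201-point window for daily data
w_band = lanczos_lowpass_weights(n, 1 / 30) - lanczos_lowpass_weights(n, 1 / 70)

def bandpass(series):
    # Time-domain convolution: the first and last n points are lost, which is what
    # the endpoint scheme described in the next paragraph is designed to recover.
    return np.convolve(series, w_band, mode="valid")
```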

In addition, an alternative bandpass-filtering technique using variable filter weights near time series endpoints is presented. The method uses least squares minimization to match newly created frequency response functions in edge zones as closely as possible to the predetermined frequency response function of interior points. This method stands in contrast to the common practice of simply discarding those endpoints where a convolution cannot be computed.

Full access
Rocky Bilotta, Jesse E. Bell, Ethan Shepherd, and Anthony Arguez

Abstract

The air-freezing index (AFI) is a common metric for determining the freezing severity of the winter season and estimating frost depth for midlatitude regions, which is useful for determining the depth of shallow foundation construction. AFI values represent the seasonal magnitude and duration of below-freezing air temperature. Departures of the daily mean temperature above or below 0°C (32°F) are accumulated over each August–July cold season; the seasonal AFI value is defined as the difference between the highest and lowest points of the accumulated series. Return periods are computed using generalized extreme value distribution analysis. This research replaces the methodology used by the National Oceanic and Atmospheric Administration to calculate AFI return periods for the 1951–80 time period, applying the new methodology to the 1981–2010 climate normals period. Seasonal AFI values and return period values were calculated for 5600 stations across the coterminous United States (CONUS), and the results were validated using U.S. Climate Reference Network temperature data. Return period values are typically 14%–18% lower across CONUS during 1981–2010 versus a recomputation of 1951–80 return periods with the new methodology. For the 100-yr (2 yr) return periods, about 59% (83%) of stations show a decrease of more than 10% in the more recent period, whereas 21% (2%) show an increase of more than 10%, indicating a net reduction in winter severity that is consistent with observed climate change.
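
The sketch below illustrates the AFI accumulation and a GEV-based return level on synthetic data; the season layout, the synthetic temperatures, and the direct use of scipy's generalized extreme value fit are assumptions rather than NOAA's operational procedure.

```python
# Hedged sketch: seasonal AFI and a 100-yr return level from a GEV fit.
import numpy as np
from scipy.stats import genextreme

def seasonal_afi(daily_mean_c):
    """AFI for one Aug-Jul season: range of the accumulated freezing-degree curve."""
    accumulated = np.cumsum(0.0 - daily_mean_c)     # accumulate departures from 0 C
    return accumulated.max() - accumulated.min()

# Thirty synthetic Aug-Jul seasons of daily mean temperature for a cold site.
rng = np.random.default_rng(2)
t = np.arange(365)
afi = np.array([
    seasonal_afi(-2 - 14 * np.cos(2 * np.pi * (t - 180) / 365) + rng.normal(0, 5, 365))
    for _ in range(30)
])

# Fit a GEV distribution to the seasonal AFI values and read off the 100-yr return level,
# i.e. the AFI exceeded with probability 1/100 in any given season.
shape, loc, scale = genextreme.fit(afi)
afi_100yr = genextreme.isf(1.0 / 100.0, shape, loc, scale)
```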

Full access
Imke Durre, Anthony Arguez, Carl J. Schreck III, Michael F. Squires, and Russell S. Vose

Abstract

In this paper, a new set of daily gridded fields and area averages of temperature and precipitation is introduced that covers the contiguous United States (CONUS) from 1951 to present. With daily updates and a grid resolution of approximately 0.0417° (nominally 5 km), the product, named nClimGrid-Daily, is designed to be used particularly in climate monitoring and other applications that rely on placing event-specific meteorological patterns into a long-term historical context. The gridded fields were generated by interpolating morning and midnight observations from the Global Historical Climatology Network–Daily dataset using thin-plate smoothing splines. Additional processing steps limit the adverse effects of spatial and temporal variations in station density, observation time, and other factors on the quality and homogeneity of the fields. The resulting gridded data provide smoothed representations of the point observations, although the accuracy of estimates for individual grid points and days can be sensitive to local spatial variability and the ability of the available observations and interpolation technique to capture that variability. The nClimGrid-Daily dataset is therefore recommended for applications that require the aggregation of estimates in space and/or time, such as climate monitoring analyses at regional to national scales.

Significance Statement

Many applications that use historical weather observations require data on a high-resolution grid that are updated daily. Here, a new dataset of daily temperature and precipitation for 1951–present is introduced that was created by interpolating irregularly spaced observations to a regular grid with a spacing of 0.0417° across the contiguous United States. Compared to other such datasets, this product is particularly suitable for monitoring climate and drought on a daily basis because it was processed so as to limit artificial variations in space and time that may result from changes in the types and distribution of observations used.
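
As a rough sketch of the thin-plate smoothing-spline gridding described in the abstract above, the example below uses scipy's RBFInterpolator with a thin-plate-spline kernel and nonzero smoothing as a stand-in; the station locations, values, smoothing level, and grid bounds are synthetic assumptions, and none of the production steps that address station density, observation time, or homogeneity are represented.

```python
# Hedged sketch: smoothed gridding of scattered station values with a thin-plate spline.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
stations = rng.uniform([-100.0, 35.0], [-95.0, 40.0], size=(200, 2))   # lon, lat pairs
tmax_obs = 30 - 0.5 * (stations[:, 1] - 35.0) + rng.normal(0, 1.0, 200)

# Thin-plate spline with nonzero smoothing, so the surface smooths the point
# observations rather than interpolating them exactly.
interp = RBFInterpolator(stations, tmax_obs, kernel="thin_plate_spline", smoothing=10.0)

# Evaluate on a regular grid with roughly 0.0417-degree spacing.
lons = np.arange(-100.0, -95.0, 0.0417)
lats = np.arange(35.0, 40.0, 0.0417)
grid_lon, grid_lat = np.meshgrid(lons, lats)
tmax_grid = interp(np.column_stack([grid_lon.ravel(), grid_lat.ravel()]))
tmax_grid = tmax_grid.reshape(grid_lat.shape)
```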

Restricted access
Daniel M. Gilford, Shawn R. Smith, Melissa L. Griffin, and Anthony Arguez

Abstract

The daily temperature range (DTR; daily maximum temperature minus daily minimum temperature) at 290 southeastern U.S. stations is examined with respect to the warm and cold phases of the El Niño–Southern Oscillation (ENSO) for the period of 1948–2009. A comparison of El Niño and La Niña DTR distributions during 3-month seasons is conducted using various metrics. Histograms show each station’s particular distribution. To compare directly the normalized distributions of El Niño and La Niña, a new metric (herein called conditional ratio) is produced and results are evaluated for significance at 95% confidence with a bootstrapping technique. Results show that during 3-month winter, spring, and autumn seasons DTRs above 29°F (16.1°C) are significantly more frequent during La Niña events and that DTRs below 15°F (8.3°C) are significantly more frequent during El Niño events. It is hypothesized that these results are associated spatially with cloud cover and storm tracks during each season and ENSO phase. Relationships between DTRs and ENSO-related relative humidity are examined. These results are pertinent to the cattle industry in the Southeast, allowing ranchers to plan for and mitigate threats posed by periods of low DTRs associated with the predicted phase of ENSO.
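
The conditional-ratio metric is specific to the paper; as a stand-in, the sketch below bootstraps a simpler quantity, the difference between the two ENSO phases in the frequency of DTRs above a threshold, on synthetic data, purely to illustrate the resampling-based significance test.

```python
# Hedged sketch: bootstrap confidence interval for a difference in DTR exceedance frequency.
import numpy as np

rng = np.random.default_rng(4)
dtr_nino = rng.gamma(shape=9.0, scale=1.3, size=900)   # synthetic El Nino-season DTRs (deg C)
dtr_nina = rng.gamma(shape=9.0, scale=1.5, size=900)   # synthetic La Nina-season DTRs (deg C)
threshold_c = 16.1                                     # 29 deg F expressed in deg C

def exceed_freq(x):
    return np.mean(x > threshold_c)

observed_diff = exceed_freq(dtr_nina) - exceed_freq(dtr_nino)

# Resample each phase with replacement and build a distribution of the difference;
# a 95% interval that excludes zero indicates a significant phase dependence.
boot = np.array([
    exceed_freq(rng.choice(dtr_nina, dtr_nina.size, replace=True))
    - exceed_freq(rng.choice(dtr_nino, dtr_nino.size, replace=True))
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
significant = not (lo <= 0.0 <= hi)
```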

Full access
Imke Durre, Michael F. Squires, Russell S. Vose, Xungang Yin, Anthony Arguez, and Scott Applequist

Abstract

The 1981–2010 “U.S. Climate Normals” released by the National Oceanic and Atmospheric Administration’s (NOAA) National Climatic Data Center include a suite of monthly, seasonal, and annual statistics that are based on precipitation, snowfall, and snow-depth measurements. This paper describes the procedures used to calculate the average totals, frequencies of occurrence, and percentiles that constitute these normals. All parameters were calculated from a single, state-of-the-art dataset of daily observations, taking care to produce normals that were as representative as possible of the full 1981–2010 period, even when the underlying data records were incomplete. In the resulting product, average precipitation totals are available at approximately 9300 stations across the United States and parts of the Caribbean Sea and Pacific Ocean islands. Snowfall and snow-depth statistics are provided for approximately 5300 of those stations, as compared with several hundred stations in the 1971–2000 normals. The 1981–2010 statistics exhibit the familiar climatological patterns across the contiguous United States. When compared with the same calculations for 1971–2000, the later period is characterized by a smaller number of days with snow on the ground and less total annual snowfall across much of the contiguous United States; wetter conditions over much of the Great Plains, Midwest, and northern California; and drier conditions over much of the Southeast and Pacific Northwest. These differences are a reflection of the removal of the 1970s and the addition of the 2000s to the 30-yr-normals period as part of this latest revision of the normals.
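
A minimal sketch of one piece of such a calculation appears below: a monthly precipitation normal computed from daily totals with a simple completeness screen. The "at most five missing days per month" rule and the synthetic data are assumptions for illustration, not the documented completeness criteria.

```python
# Hedged sketch: average monthly precipitation totals from daily data, discarding
# months that are too incomplete to be representative.
import numpy as np
import pandas as pd

def monthly_precip_normals(daily, max_missing_days=5):
    """daily: Series of daily precipitation totals indexed by date, NaN where missing."""
    daily = daily.loc["1981":"2010"]
    by = [daily.index.year, daily.index.month]
    totals = daily.groupby(by).sum(min_count=1)             # monthly totals per year
    missing = daily.isna().groupby(by).sum()                # missing-day counts per month
    totals = totals.where(missing <= max_missing_days)      # discard incomplete months
    # Average the surviving monthly totals over all years, month by month.
    return totals.groupby(level=1).mean()

# Synthetic example:
idx = pd.date_range("1981-01-01", "2010-12-31", freq="D")
rng = np.random.default_rng(5)
precip = pd.Series(rng.gamma(0.4, 6.0, idx.size), index=idx)
normals = monthly_precip_normals(precip)                    # 12 values, one per month
```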

Full access