Search Results

You are looking at 91 - 100 of 14,214 items for:

  • Data quality control
  • All content
Christopher A. Fiebrich and Kenneth C. Crawford

). Throughout COOP’s history, an estimated 25 000 stations have participated in the network ( Reek et al. 1992 ). Currently, the number of COOP observers who measure air temperature across the United States totals between 5000 and 6000 ( Guttman and Quayle 1990 ; Quayle et al. 1991 ; Reek et al. 1992 ; National Research Council 1998 ). Although COOP stations are established, supervised, maintained, and managed by the NWS, COOP data are processed, quality controlled, and archived by the National

Full access
Daniel P. Tyndall, John D. Horel, and Manuel S. F. V. de Pondeca

, was developed for this research to minimize the computational cost of running large numbers of sensitivity experiments using a full analysis system over the entire continental United States (i.e., the RTMA). Since some of the most complex aspects of any data assimilation system are associated with the preprocessing and quality control of the data, the LSA was designed to use the RTMA’s terrain, derived from U.S. Geological Survey (USGS) elevation datasets with the help of the preprocessing

Full access
Yuwei Zhang, Donghai Wang, Panmao Zhai, and Guojun Gu

meteorological variables from observations: Here P (hPa) denotes atmospheric pressure. 2) Correction of radiosonde data Since radiosonde observations are considered “true” in validating satellite measurements, it is essential to make them as accurate as possible. Several procedures are usually applied including defining a range of parameters, setting rules of internal relevance, determining the climate extreme values, testing for time consistency, and performing comprehensive static quality control (CHQC
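The snippet above lists typical static quality-control steps for radiosonde records: a plausible-range check and a time-consistency check, among others. Below is a minimal sketch of those two checks; the thresholds (`T_MIN`, `T_MAX`, `MAX_STEP`) are hypothetical illustrations, not the limits used in operational CHQC.

```python
import numpy as np

# Hypothetical thresholds for illustration only; operational QC limits differ.
T_MIN, T_MAX = -90.0, 60.0   # plausible surface air temperature range (degC)
MAX_STEP = 10.0              # max allowed change between consecutive obs (degC)

def range_check(temps):
    """Flag values outside a plausible climatological range."""
    temps = np.asarray(temps, dtype=float)
    return (temps < T_MIN) | (temps > T_MAX)

def time_consistency_check(temps):
    """Flag values that jump implausibly far from the previous observation."""
    temps = np.asarray(temps, dtype=float)
    flags = np.zeros(temps.shape, dtype=bool)
    flags[1:] = np.abs(np.diff(temps)) > MAX_STEP
    return flags

obs = [15.2, 15.8, 57.3, 16.1, -120.0]
bad = range_check(obs) | time_consistency_check(obs)
```

In practice these flags would be combined with further checks (internal relevance, climate extremes) before a value is rejected.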

Full access
Chunlüe Zhou, Junhong Wang, Aiguo Dai, and Peter W. Thorne

) and duplicates (consecutive red dots) were removed. Some data points (green dots) were also excluded in our analysis due to insufficient monthly sampling (see the text for details). Black dots represent subdaily raw temperatures retained in our subsequent analysis. These quality-controlled data were then merged with preference given to IGRA2 to create a comprehensive, global 0000 and 1200 UTC radiosonde temperature dataset at the surface and 16 standard levels, namely 1000, 925, 850, 700, 500, 400

Open access
William R. Moninger, Stanley G. Benjamin, Brian D. Jamison, Thomas W. Schlatter, Tracy Lorraine Smith, and Edward J. Szoke

-to-date 13-km-version code. In February 2006 and subsequently in April 2007, the analysis and model code in the dev–dev2 versions of the RUC used for the TAMDAR impact experiments were upgraded to improve the observation quality control and precipitation physics. These modifications were generally the same as those implemented into the operational NCEP 13-km RUC, with the exception that dev and dev2 do not ingest radar data (implemented in the NCEP RUC in November 2008). The studies herein focus on

Full access
Jeremy D. DeMoss and Kenneth P. Bowman

nearly continuous measurements. To provide the best possible comparisons, the buoy data are matched with TRMM overpasses. The gauge data are time averaged in a 6-h window centered on each TRMM overpass. This time averaging provides near optimal comparison of the two observing systems ( Bowman 2005 ), and the quality of the matches is not strongly dependent on the averaging interval. At some locations, there are substantial gaps in the gauge data; thus, there are usually many fewer matches at each
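The matching procedure described above, averaging gauge data in a 6-h window centered on each TRMM overpass, can be sketched as follows. The function and variable names are assumptions for illustration; the actual matching code of the study is not shown here.

```python
import numpy as np

def window_mean(obs_times, obs_values, overpass_time, half_width=3.0):
    """Average gauge observations within a 6-h window centered on an
    overpass time (all times in hours since an arbitrary epoch).
    Returns NaN when no observations fall in the window, e.g. a data gap."""
    t = np.asarray(obs_times, dtype=float)
    v = np.asarray(obs_values, dtype=float)
    mask = np.abs(t - overpass_time) <= half_width
    return v[mask].mean() if mask.any() else float("nan")

# Hourly gauge record; overpass at t = 12 h.
times = [9, 10, 11, 13, 14, 15]
rain = [0.0, 1.2, 0.8, 0.4, 0.0, 2.0]
avg = window_mean(times, rain, 12.0)  # mean of the obs within [9, 15]
```

Because the comparison is not strongly sensitive to the averaging interval, `half_width` is exposed as a parameter rather than hard-coded.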

Full access
Trent W. Ford, Steven M. Quiring, Chen Zhao, Zachary T. Leasor, and Christian Landry

methods (e.g., Dirmeyer et al. 2016 ; Ford and Quiring 2019 ), which found that short soil moisture data records exhibited high variability in temporal stability. Therefore, stations with records shorter than 365 days were not included in this study. All in situ data were acquired in units of volumetric water content θ (m³ m⁻³), and represent the original data from the networks with no additional quality control. A general overview of each in situ network is included in Table 1 and station

Restricted access
Punpim Puttaraksa Mapiam, Nutchanart Sriwongsitanon, Siriluk Chumchean, and Ashish Sharma

this study have tipping-bucket sizes of 1.0 and 0.5 mm. Because tipping-bucket rain gauges record the time of the tips, they are subject to significant high quantization error at low rainfall intensity ( Chumchean et al. 2003 , 2004 , 2006a , b ). Therefore, only the rainfall amounts that are greater than the volume of that gauge’s tipping bucket were used in this analysis. It should be noted that quality control of these data has been performed by considering rainfall data from adjacent gauges
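The screening described above, keeping only accumulations greater than one tipping-bucket volume so that quantization error does not dominate, amounts to a simple filter. This is an illustrative sketch; the study's exact screening rule may differ.

```python
def filter_quantized(amounts_mm, bucket_mm):
    """Keep only rainfall accumulations greater than one tipping-bucket
    volume, where quantization error at low intensity is significant.
    (Illustrative helper; names are assumptions, not from the study.)"""
    return [a for a in amounts_mm if a > bucket_mm]

hourly_mm = [0.0, 0.5, 1.0, 3.5, 0.5, 7.0]
kept = filter_quantized(hourly_mm, 0.5)  # drops the 0.0 and 0.5 mm records
```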

Full access
J. C. Hubbert

antenna pattern so that power transmitted and received through the antenna sidelobes does not bias the measurement. For the following datasets, S-Pol was in FHV mode with a pulse repetition time (PRT) of 1 ms. Thus, a cross-polar power pair, separated by 1 ms, comes from nearly the identical resolution volume of scatterers, since neither they nor the antenna moves appreciably in 1 ms. To ensure good data quality, several thresholds are used for the CP powers: 20 dB (signal-to-noise ratio) and

Full access
Soojin Roh, Marc G. Genton, Mikyoung Jun, Istvan Szunyogh, and Ibrahim Hoteit

1. Introduction In data assimilation, the process of detecting and accounting for observation errors that are statistical outliers is called quality control (QC; e.g., Daley 1991 ). An operational numerical weather prediction system may employ multiple layers of QC. For instance, observations with implausible values are usually rejected even before they enter the data assimilation process. We refer to the algorithms used for such rejection decisions as offline QC algorithms. The fact that
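The offline QC layer described above, rejecting observations with implausible values before they reach the assimilation step, can be illustrated with a minimal sketch. The plausibility limits and record layout here are hypothetical, chosen only to show the partitioning logic.

```python
# Hypothetical plausibility limits for two observation kinds.
PLAUSIBLE = {
    "pressure_hPa": (300.0, 1100.0),
    "temperature_K": (150.0, 350.0),
}

def offline_qc(observations):
    """Partition observations into accepted and rejected lists, rejecting
    any value outside its kind's plausible range (an offline QC decision
    made before data assimilation)."""
    accepted, rejected = [], []
    for ob in observations:
        lo, hi = PLAUSIBLE[ob["kind"]]
        (accepted if lo <= ob["value"] <= hi else rejected).append(ob)
    return accepted, rejected

obs = [
    {"kind": "pressure_hPa", "value": 1013.2},   # plausible -> accepted
    {"kind": "temperature_K", "value": 9999.0},  # gross error -> rejected
]
ok, bad = offline_qc(obs)
```

Operational systems layer further QC (e.g., background checks against a model forecast) on top of such gross-error screening.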

Full access