
Impacts of Mesonet Observations on Meteorological Surface Analyses

1 Department of Atmospheric Sciences, University of Utah, Salt Lake City, Utah

Abstract

Given the heterogeneous equipment, maintenance and reporting practices, and siting of surface observing stations, subjective, application-dependent decisions tend to be made about which observations to use and which to avoid. This research objectively identifies high-impact 2-m temperature, 2-m dewpoint, and 10-m wind observations using the adjoint of a two-dimensional variational surface analysis over the contiguous United States. The analyses reflect a weighted blend of 1-h numerical forecasts used as background grids and available observations. High-impact observations are defined as arising from poor observation quality, observation representativeness errors, or accurately observed weather conditions not evident in the background field. The impact of nearly 20 000 surface observations is computed over a sample of 100 analysis hours during 25 major weather events. Observation impacts are determined for each station as well as within broad network categories. For individual analysis hours, high-impact observations are located in regions of significant weather, typically where the background field fails to define the local weather conditions. Low-impact observations tend to be ones where many nearby observations report similar departures from the background. When averaged over the entire 100 cases, observations with the highest impact are found within all network categories and depend strongly on their location relative to other observing sites and the amount of variability in the weather; for example, temperature observations have reduced impact in urban areas such as Los Angeles, California, where observations are plentiful and temperature departures from the background grids are small.

Current affiliation: National Research Council Postdoctoral Fellow, Monterey, California.

Corresponding author address: Daniel P. Tyndall, Naval Research Laboratory, 7 Grace Hopper Ave., Monterey, CA 93943. E-mail: dan.tyndall.ctr@nrlmry.navy.mil


1. Introduction

Mesoscale surface observations are vital data sources for applications in many different meteorological subfields, including operational forecasting, wind power management, transportation safety, wildfire management, dispersion modeling, and defense applications (Dabberdt et al. 2005; Horel and Colman 2005). Two recent reports (National Academy of Sciences 2009, 2010) recommend that existing and future mesoscale observations be integrated into a network of networks. The heterogeneous nature of the available mesoscale surface observing networks within the United States (e.g., varying sensor quality, maintenance and reporting practices, and siting) can limit their potential benefits. A critical recommendation in both reports is to improve the metadata that define the sensor and station characteristics within the aggregated networks. Users of the national network would then be able to select the types of stations that meet their specific needs.

Obviously, stations with higher quality equipment that are properly sited and maintained are likely to be of greater value for all applications. However, does a low-cost station with lower quality standards in a largely data-void region have greater value than one of several expensive high quality stations located close to each other? Observation value is clearly a function of the observation’s total cost, the availability of comparable data nearby, and its benefit to diverse potential applications, including use by human forecasters and its integration into atmospheric analyses and numerical forecasts.

This study does not attempt to address the broader issues of the relative value of observations obtained from different networks. Rather, the scope is limited to assessing the utility of an objective metric to identify the characteristics of observations and networks that strongly influence mesoscale surface analyses over the contiguous United States (CONUS). We address the extent to which the impact of observations depends “dynamically” on the synoptic, mesoscale, and local weather situation relative to the “static” underlying siting and standards of the various observing networks. This work is motivated by the pressing need to develop automated quality control procedures for mesonet observations from heterogeneous sources.

Domestic and international research efforts have led to improved mesoscale analyses and data assimilation systems. Current operational products include MatchObsAll, which is used at many National Weather Service offices; Vienna Enhanced Resolution Analysis (VERA; Steinacker et al. 2006); Integrated Nowcasting through Comprehensive Analysis (INCA; Haiden et al. 2011); Mesoscale Surface Analysis System (MSAS; Glowacki et al. 2012); Space and Time Multiscale Analysis System (STMAS; Xie et al. 2011); Real-Time Mesoscale Analysis (RTMA; de Pondeca et al. 2011), and Rapid Refresh (Brown et al. 2012) and High Resolution Rapid Refresh Systems (Alexander et al. 2012) of the National Centers for Environmental Prediction. These systems rely on analysis techniques ranging from Cressman and spline interpolation methods to advanced variational techniques. Many of these high-resolution (1–12 km) analyses are not intended to initialize subsequent model forecasts, but instead are meant to be used as end products to help diagnose current conditions or verify prior forecasts. Examination of sequences of such surface analyses allows users to grasp the spatial and temporal variability of weather situations more readily than direct inspection of conditions at hundreds of specific observing sites. We will introduce the University of Utah Variational Surface Analysis (UU2DVar) in this paper as an efficient research tool comparable in its characteristics and performance to these operational systems.

Prior work has demonstrated that the near-surface boundary layer in the CONUS remains undersampled (Myrick and Horel 2008; Horel and Dong 2010). Figure 1 illustrates the need for additional observing capabilities, particularly in mountainous and coastal areas, in terms of an integrated data influence analysis (IDI; Uboldi et al. 2008). The IDI analysis is generated by assuming that the background grid values are 0 everywhere and setting the observation values at the station locations available in this study to 1. As discussed by Horel and Dong (2010), the IDI analysis is a measure of station density and depends on the assumed observation and background error covariances. The IDI analysis of Fig. 1 follows assumptions for those covariances (e.g., equal error variance for observations and background) discussed later and results in an IDI value of 0.5 near an isolated observation (i.e., the observation and background value receive equal weight there). Areas in Fig. 1 with many stations and enhanced data coverage have IDI values much larger than 0.5, while areas with few observations have values much smaller than 0.5.

Fig. 1. IDI analysis over the CONUS domain for surface observations that reported at least 50% of the 100 analysis hours used in this study.
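To make the IDI construction concrete, the sketch below computes it directly for a toy domain: the background is set to 0, every observation to 1, and the resulting "analysis" value is the weight the observations receive at each grid point. It uses a horizontal-only covariance for brevity (the study's covariance also includes the vertical term described in section 2); the coordinates, scales, and function names here are illustrative assumptions, not the study's code.

```python
import numpy as np

# Minimal IDI sketch: with the background set to 0 and all observations set
# to 1, the "analysis" reduces to the weight the observations receive at
# each grid point. Coordinates and scales are illustrative assumptions.
def idi(grid_xy, obs_xy, R=80.0, var_ratio=1.0):
    """grid_xy: (G,2) grid coords (km); obs_xy: (O,2) station coords (km)."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return np.exp(-d / R)                     # horizontal covariance only
    PbHt = cov(grid_xy, obs_xy)                   # grid-to-observation covariance
    HPbHt = cov(obs_xy, obs_xy)                   # observation-to-observation covariance
    Po = var_ratio * np.eye(len(obs_xy))          # equal obs/background error variance
    eta = np.linalg.solve(HPbHt + Po, np.ones(len(obs_xy)))  # innovations all equal 1
    return PbHt @ eta

# An isolated station yields an IDI near 0.5 at its location, as in the text.
xg, yg = np.meshgrid(np.arange(0.0, 200.0, 5.0), np.arange(0.0, 200.0, 5.0))
grid = np.column_stack([xg.ravel(), yg.ravel()])
print(idi(grid, np.array([[100.0, 100.0]])).max())   # ~0.5
```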

Figure 1 should be viewed as a static illustration of the data density. An underlying assumption of the National Academy of Sciences recommendations on improving station metadata is that, by simply knowing the locations of existing mesonet stations and their basic operating characteristics as part of an improved national database of station metadata, it will be possible to infer where additional stations are needed by identifying apparent data voids. However, IDI analyses or examinations of network metadata do not take into consideration the spatial and temporal variability arising from weather and may not identify where observations are most needed.

Observation impacts have often been evaluated through cross-validation experiments, in which a control analysis using all observations is compared to an analysis in which observations of interest are excluded from the assimilation (Seaman and Hutchinson 1985; Zapotocny et al. 2000; Myrick and Horel 2008; Benjamin et al. 2010; Tyndall et al. 2010; Horel and Dong 2010). Generally, groups of observations are withheld (e.g., observations from differing types of sensor systems or networks). Horel and Dong (2010) applied this technique extensively by sequentially withholding each of ~3000 observations evaluated in their study from ~9000 analyses, resulting in over 570 000 cross-validation experiments. While this methodology provides individual impacts for each observation, it is far too cumbersome and expensive to use operationally.

A more efficient approach to determining observation impacts utilizes the analysis adjoint that relates the analysis sensitivity to input location. Adjoints of forecast models are used routinely to assess where “targeted” observations might reduce forecast errors (Palmer et al. 1998; Buizza and Montani 1999; Langland et al. 1999; Zhu and Gelaro 2008). Baker and Daley (2000) utilized the adjoint of a simple data assimilation system to explore analysis sensitivity on an analytical function. In this study, a similar approach is applied to surface mesonet observations throughout the entire CONUS domain to estimate which types of networks tend to have high observation impacts.

The approach used in this study is described in section 2. Application of the methodology to a single case and aggregate statistics over a sample of 100 analyses are presented in section 3. Additional discussion and ongoing work related to automated quality control procedures for mesonet observations are described in section 4.

2. Method

a. Analysis

This study utilizes UU2DVar to determine the impacts of mesonet observations. The UU2DVar is a univariate two-dimensional variational data assimilation (2DVar) analysis tool that generates meteorological analyses of 2-m air temperature, 2-m dewpoint temperature, 10-m u and v wind components, and surface pressure (Tyndall 2011). The analysis tool is designed to minimize the variational cost function \(J(\mathbf{x}_a)\),
\[
J(\mathbf{x}_a) = (\mathbf{x}_a - \mathbf{x}_b)^{\mathrm{T}} \mathbf{P}_b^{-1} (\mathbf{x}_a - \mathbf{x}_b) + (\mathbf{H}\mathbf{x}_a - \mathbf{y}_o)^{\mathrm{T}} \mathbf{P}_o^{-1} (\mathbf{H}\mathbf{x}_a - \mathbf{y}_o), \quad (1)
\]
to compute the analysis (\(\mathbf{x}_a\)) from observations (\(\mathbf{y}_o\)), the background grid (\(\mathbf{x}_b\)), and the forward operator (\(\mathbf{H}\)), subject to assumptions regarding the background (\(\mathbf{P}_b\)) and observation (\(\mathbf{P}_o\)) error covariances. For efficiency, the variational cost function (1) is reformulated to observation space (Lorenc 1986; Cohn et al. 1998; Daley and Barker 2001):
\[
(\mathbf{H}\mathbf{P}_b\mathbf{H}^{\mathrm{T}} + \mathbf{P}_o)\,\boldsymbol{\eta} = \mathbf{y}_o - \mathbf{H}\mathbf{x}_b, \quad (2)
\]
\[
\mathbf{x}_a = \mathbf{x}_b + \mathbf{P}_b\mathbf{H}^{\mathrm{T}}\boldsymbol{\eta}. \quad (3)
\]
The term \(\boldsymbol{\eta}\) is computed by iteratively solving (2) and is used to yield the analysis \(\mathbf{x}_a\) in (3). Using observation space to minimize the cost function is very efficient for undersampled problems (Daley and Barker 2001) such as mesoscale surface observation assimilation, where the number of observations is much less than the number of analysis grid points. The resolution of the UU2DVar analyses depends on the resolution of the background fields, which in this study are Rapid Update Cycle (RUC) 1-h forecasts (Benjamin et al. 2004) downscaled from 13- to 5-km resolution. These RUC backgrounds also served as the background fields for the RTMA (de Pondeca et al. 2011) until the RUC model was discontinued and replaced by the Rapid Refresh model in May 2012.
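For illustration, a compact dense-matrix sketch of the observation-space solution (2)-(3) follows. It is not the UU2DVar code: the operational system solves (2) iteratively rather than by direct factorization, and the array names here are assumptions.

```python
import numpy as np

# Sketch of Eqs. (2)-(3) with dense matrices (illustrative only; the actual
# system solves Eq. (2) iteratively and never forms these matrices in full).
def analyze(xb, yo, H, Pb, Po):
    """xb: (G,) background; yo: (O,) observations; H: (O,G) forward operator;
    Pb: (G,G) background error covariance; Po: (O,O) observation error covariance."""
    innovation = yo - H @ xb                 # observation innovations
    A = H @ Pb @ H.T + Po                    # left-hand side of Eq. (2)
    eta = np.linalg.solve(A, innovation)     # eta from Eq. (2)
    return xb + Pb @ H.T @ eta               # analysis x_a from Eq. (3)
```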
The UU2DVar was parallelized using Matlab software for this study (Tyndall 2011). Parallelization allows efficient computation of the background error covariance \(\mathbf{P}_b\), which is horizontally and vertically spatially dependent through the product of two inverse exponential functions with horizontal (R) and vertical (Z) decorrelation length scales:
\[
\mathbf{P}_{b,ij} = \sigma_b^2 \exp\!\left(-\frac{r_{ij}}{R}\right)\exp\!\left(-\frac{z_{ij}}{Z}\right), \quad (4)
\]
where \(\sigma_b^2\) is the background error variance and \(r_{ij}\) and \(z_{ij}\) are the horizontal and vertical distances between grid points i and j. The vertical term in the equation adds terrain anisotropy, which prevents an observation in a valley from influencing areas of the analysis over nearby mountain ridges. The vertical term also controls the influence of coastal observations, as water analysis grid points have their elevation reduced by 500 m in the covariance computation to prevent land observations from influencing them. The horizontal and vertical decorrelation length scales used in this study were 80 km and 200 m, respectively. These length scales, as well as the background to observation error variance ratio, were estimated by Tyndall et al. (2010) for the same RUC background fields using the methodology of Lönnberg and Hollingsworth (1986). These assumed decorrelation length scales do influence some of the results of this study, as they define a priori the spatial scales over which an observation is most likely to influence the analysis. Since the functional form of the covariance in (4) only asymptotes to 0 at long distances, correlations between grid points separated by more than 300 km are set to 0 to improve computational efficiency.
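A sketch of how the covariance of (4) might be assembled for a set of grid points follows, using the stated scales (R = 80 km, Z = 200 m) and the 300-km cutoff; the variable names are illustrative.

```python
import numpy as np

# Sketch of the background error covariance of Eq. (4). Water grid points
# would have their elevations reduced by 500 m before this call, per the text.
def background_cov(xy_km, z_m, sigma_b2=1.0, R=80.0, Z=200.0, cutoff_km=300.0):
    """xy_km: (G,2) horizontal coords (km); z_m: (G,) elevations (m)."""
    r = np.linalg.norm(xy_km[:, None, :] - xy_km[None, :, :], axis=2)
    z = np.abs(z_m[:, None] - z_m[None, :])   # vertical term adds terrain anisotropy
    Pb = sigma_b2 * np.exp(-r / R) * np.exp(-z / Z)
    Pb[r > cutoff_km] = 0.0                   # zero correlations beyond 300 km
    return Pb
```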

b. Observations

Nearly all observations from the 125 mesonet networks used in this research are part of the MesoWest database of publicly accessible observations (Horel et al. 2002). A continually updated list of networks available in MesoWest is available online. Two proprietary networks [WeatherFlow’s largely coastal network and the Oklahoma Mesonet; McPherson et al. (2007)] were added with permission as they help to illustrate networks that would likely be included in a national network of networks. The networks used in this study are deployed by entities for specific purposes and can be broadly separated into the 10 categories listed in Table 1. The median IDI values computed from the locations of all stations within each category reflect differences in station density, for example, primarily urban stations (IDI values > 0.95 for the PUBLIC and AQ categories) versus rural or isolated stations (IDI values < 0.90 for the RAWS, EXT, and HYDRO categories).

Table 1. Mesonet categories based on purpose and type of network, total number of stations and the median IDI value for that category, and the number of observations and assumed observation to background error variance ratio for each variable.

Based on subjective experience working with data from these networks and prior research (Tyndall et al. 2010), we have attempted to loosely classify the relative magnitude of the mesonet observation errors compared to the background errors. These assumptions are made separately for temperature, moisture, and wind observation errors for each of the 10 network categories. Given the subjectivity of these assumptions, our approach can be viewed as a sensitivity study in which the available stations are assigned varying error variance ratios to evaluate the dependence of our results on these assumptions. Higher ratios reduce the influence an observation will have on the resulting analysis in the vicinity of that observation. As a baseline, observations from the National Weather Service/Federal Aviation Administration (NWS/FAA) reporting stations (NWS category) and other federal and state networks (FED+ category) that tend to adopt standardized installation, siting, and maintenance procedures are assigned observation to background error variance ratios of 1.0. This ratio implies that for an isolated observation, the resulting analysis in the vicinity of the observation will roughly approximate the average of the observation and background values.

Higher observation to background error variance ratios of 1.5 and 2.0 are assigned for some observation types and network categories to reflect the characteristics of many of these networks or as an attempt to account for representativeness errors. For example, the higher 2.0 observation to background error variance ratio is used for wind observations from the Remote Automated Weather Station (RAWS) category because 1) the RAWS standard for wind sensor height (6 m as opposed to the 10-m standard for NWS and FED+ categories) leads to lower observed wind speeds relative to the 10-m background wind and 2) many of the RAWS stations are sited for fire weather applications in rugged locations with highly variable terrain and vegetation for which the observations may not represent the conditions over nearby 5-km analysis grid boxes. Similarly, higher wind observation errors are also assumed for the AG and PUBLIC categories since many agricultural networks rely on 3-m towers, and PUBLIC sensors are often mounted on or near residences with nearby obstructions commonplace. As will be presented later in this work, the assigned error variance ratios play a tertiary role when assessing observation impacts.

To prevent clearly erroneous observations from entering the analyses in this study, we used a manual blacklist and observation innovation (observation minus background) checks. The blacklist was prepared by subjectively rejecting observations from stations exhibiting both large mean observation innovations combined with large mean impacts (as defined in the next subsection) when the 100 analyses were computed using all available observations. The innovation control check is summarized by
\[
\left| y_o - \mathbf{H}\mathbf{x}_b \right| \le \max\!\left[\varepsilon_m \,\mathrm{std\ dev}(\tilde{\mathbf{x}}_b),\ \tau_{qc}\right], \quad (5)
\]
where \(\varepsilon_m\) is a tunable coefficient, \(\tau_{qc}\) is a tunable quality control threshold (in units of the observations), and \(\tilde{\mathbf{x}}_b\) are background field values no farther than 40 km from the observation. The functions max and std dev are the maximum and standard deviation functions operating on their arguments. The tunable quality control threshold was added to prevent observations from being rejected in situations such as offshore areas where the variance in the background field is usually small. In this study, \(\varepsilon_m\) is 10 for all variables, and \(\tau_{qc}\) is set to 3°C, 4°C, and 7.5 m s−1 for air temperature, dewpoint temperature, and wind values (both components and speed), respectively. This check is designed to retain observations in areas where the background exhibits large localized variability due to terrain or sharply defined weather features (e.g., drylines). For this study, we did not implement adjustments to the forward operator to account for elevation mismatches between the sensor and grid (e.g., temperature observations at Mount Rainier, Washington, where the analysis grid elevation is much lower than the sensor elevation).

An additional quality control step was applied to wind observations to reject very light winds when the background values were much higher; that is, wind observations were not used when the observed speed was less than 1 m s−1 and the background speeds exceeded 5 m s−1. Such situations arise frequently due to a variety of siting and reporting reasons, and are exacerbated at night during periods of weak synoptic flows. For example, the lowest reported wind speed for NWS observations is 1.25 m s−1; hence, a “calm” report when the background suggests a relatively strong wind is ignored.
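The two checks can be summarized in a few lines. The sketch below is a plain reading of (5) and the calm-wind rule with the stated thresholds; the function and argument names are illustrative.

```python
import numpy as np

# Sketch of the innovation check of Eq. (5) and the calm-wind rejection.
def passes_innovation_check(obs, bkg_at_obs, bkg_within_40km,
                            eps_m=10.0, tau_qc=3.0):
    """tau_qc: 3 (temperature), 4 (dewpoint), or 7.5 (wind), per the text."""
    threshold = max(eps_m * np.std(bkg_within_40km), tau_qc)  # tau_qc floors the
    return abs(obs - bkg_at_obs) <= threshold                 # low-variance case

def passes_calm_wind_check(obs_speed, bkg_speed):
    # Ignore "calm" reports when the background suggests substantial wind.
    return not (obs_speed < 1.0 and bkg_speed > 5.0)
```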

c. Observation impact

As mentioned in the introduction, the adjoint of a data assimilation tool can be used to evaluate individual observation impacts in a more efficient manner than traditional cross-validation experiments using data withholding, although the two approaches will not yield identical results. The enhanced efficiency at the expense of exactness results from the assumed linearity of the system and its unaltered Kalman gain matrix. Our measure of observation impact follows that used in observation space by Langland and Baker (2004) and in analysis space by Zhu and Gelaro (2008). Observation impact is defined with respect to a sensitivity cost function, \(J_s\), which is often a forecast variable of interest over a particular subdomain. In this research, \(J_s\) is specified to be half the squared difference between the analysis and the background field, \(J_s = \tfrac{1}{2}(\mathbf{x}_a - \mathbf{x}_b)^{\mathrm{T}}(\mathbf{x}_a - \mathbf{x}_b)\). Following Baker and Daley (2000), observation sensitivity is found using the chain rule:
\[
\frac{\partial J_s}{\partial \mathbf{y}_o} = \mathbf{K}^{\mathrm{T}}\frac{\partial J_s}{\partial \mathbf{x}_a} = (\mathbf{H}\mathbf{P}_b\mathbf{H}^{\mathrm{T}} + \mathbf{P}_o)^{-1}\mathbf{H}\mathbf{P}_b\,\frac{\partial J_s}{\partial \mathbf{x}_a}, \quad (6)
\]
where \(\mathbf{K} = \mathbf{P}_b\mathbf{H}^{\mathrm{T}}(\mathbf{H}\mathbf{P}_b\mathbf{H}^{\mathrm{T}} + \mathbf{P}_o)^{-1}\) is the gain matrix of the analysis. The measure of observation impact, \(\delta J_s\), follows that previously defined by Zhu and Gelaro (2008) as the scalar product of the observation sensitivity and the observation innovations:
\[
\delta J_s = \left\langle \frac{\partial J_s}{\partial \mathbf{y}_o},\ \mathbf{y}_o - \mathbf{H}\mathbf{x}_b \right\rangle, \quad (7)
\]
where \(\langle\cdot,\cdot\rangle\) denotes the scalar product operator.
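Continuing the dense-matrix sketch from section 2a, the per-observation terms of (6)-(7) can be computed as below; the names are illustrative, and the sum of the returned terms is \(\delta J_s\).

```python
import numpy as np

# Sketch of Eqs. (6)-(7): adjoint-based observation impacts (dense matrices,
# illustrative only). Each returned element is one observation's contribution.
def observation_impacts(xb, yo, H, Pb, Po):
    innovation = yo - H @ xb
    A = H @ Pb @ H.T + Po
    xa = xb + Pb @ H.T @ np.linalg.solve(A, innovation)   # Eqs. (2)-(3)
    dJs_dxa = xa - xb                                     # gradient of Js = 0.5|xa-xb|^2
    sensitivity = np.linalg.solve(A, H @ (Pb @ dJs_dxa))  # K^T dJs/dxa, Eq. (6)
    return sensitivity * innovation                       # terms of Eq. (7)
```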

Observation impacts were computed over 100 analysis hours for all available surface observations not rejected by the manual blacklist. The 100-analysis-hour sample was composed of the 0000, 0600, 1200, and 1800 UTC analyses on 25 days with multimillion-dollar severe weather damage between October 2010 and April 2011. The dates of each event, the type of severe weather, and the areas affected are listed in Table 2.

Table 2. Sample of 25 high-impact weather days used to evaluate observation impacts.

The wide range of values of observation impact (as a function of network category, variable, location, and date) led us to rank their values from smallest to largest for each variable. Negative impacts occur where the deviation of an observation from the background differs in sign relative to its neighbors, which may reflect either an erroneous observation or a realistic weather phenomenon on a scale smaller than that assumed a priori for the background errors. The percentile ranks of the observation impacts are aggregated for each station or for all stations in a network category for each variable for either individual cases or over the entire 100-analysis sample. Hence, a station’s temperature observations are “high impact” if its observation percentile rank impact values are large relative to the percentile impact values for temperature at other stations.
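As a sketch of this ranking step, the following converts raw impacts into percentile ranks within each analysis hour and then takes each station's median across hours (toy data and names; not the study's code):

```python
import numpy as np

# Sketch of the percentile ranking of impacts: rank within each analysis
# hour, convert to percentiles, then take each station's median over hours.
def percentile_ranks(impacts):
    """impacts: (hours, stations) array of raw observation impacts."""
    ranks = impacts.argsort(axis=1).argsort(axis=1)        # 0..S-1 within each hour
    return 100.0 * (ranks + 0.5) / impacts.shape[1]        # percentile ranks

impacts = np.random.default_rng(0).normal(size=(100, 500)) # toy sample
median_rank = np.median(percentile_ranks(impacts), axis=0)
consistently_high = median_rank > 75.0                      # upper-quartile stations
```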

3. Results

a. 0000 UTC 4 April 2011

Our approach is illustrated using an analysis at the start of a 2-day (4 and 5 April 2011) severe weather period. The severe weather on these 2 days produced numerous tornado, hail, and high-wind reports over much of the Midwest and southeastern United States, particularly later on 4 April, and caused three fatalities and over 20 injuries. The severe weather was associated with prefrontal conditions ahead of the cold front stretching from Iowa to Arizona evident in the 0000 UTC 4 April 2011 weather summary (Fig. 2).

Fig. 2. Hydrometeorological Prediction Center 0000 UTC 4 Apr 2011 surface analysis showing positions of frontal structures and dryline over CONUS.

At 0000 UTC 4 April, clear contributions of mesonet observations to the temperature, dewpoint, and wind speed analyses were evident near the front and dryline locations as well as elsewhere within the CONUS (Fig. 3). As shown by the analysis increments in the right panels of Fig. 3, the addition of the observations led to higher temperatures and lower dewpoint temperatures than the background south of the warm front in the Midwest and lower wind speeds north of the warm front. Additionally, the observations led to substantive adjustments to the background in mountainous areas of the West, for example, higher temperatures and dewpoints over the southern Sierra Nevada Mountains in California and lower wind speeds over portions of Arizona and New Mexico.

Fig. 3. (a),(c),(e) Meteorological surface analyses and (b),(d),(f) analysis increments for 0000 UTC 4 Apr 2011 over CONUS.

The analysis increments in Fig. 3 show the collective influence of all observations on the analysis. Figure 4 illustrates the influence of each observation for selected network categories for 2-m temperature based on the analysis adjoint methodology. Red (blue) circles at station locations in Fig. 4 denote percentile ranks of observation impact in the top 25% (lowest 25%) for this temperature analysis. The four network categories shown in Fig. 4 reflect the diversity of network density, observation siting, and sensor quality. The NWS category consists of professional grade equipment sited primarily at airports around the country while the FED+ category represents an aggregate of national and regional federal networks (e.g., Climate Reference Network and Modernized Cooperative Observer Program), local federal networks [e.g., Field Research Division of the National Oceanic and Atmospheric Administration (NOAA)/Air Resources Laboratory in eastern Idaho], and state networks (e.g., Oklahoma, west Texas, New Jersey). The RAWS category discussed previously is composed of stations that tend to be located in remote locations and often exhibit representativeness errors, while the PUBLIC category constitutes the largest number of stations, which are often densely distributed in urban areas and rely frequently on lower-grade sensors sited on residences.

Fig. 4. Observation impact percentile for 2-m temperature observations used in the 0000 UTC 4 Apr 2011 temperature analysis from four network categories.

For this particular temperature analysis, many of the NWS temperature observations around the CONUS fall in the “typical” impact range found for most observations, i.e., percentile ranks between 25% and 75% (Fig. 4, top left). The NWS stations at this time with the largest impacts (red colors) tend to be clustered near the warm front in the upper Midwest where the background underestimated the strength of the warm front. The NWS temperature observations with the least impact are found in Virginia and Maryland (where the IDI is high) and many coastal areas (where the background is generally good). The FED+ stations in Maine tend to have high impact (due to a cooler background field than observed) while those in eastern Idaho exhibit individually modest impacts at this time (Fig. 4, top right).

Even though we a priori “trust” RAWS temperature observations less than NWS or FED+ observations, many RAWS temperature observations fall in the upper quartile of impact, particularly in the Sierra Nevada and the Cascade Mountains of the Pacific states (Fig. 4, bottom left). These observations lead to the upward (downward) adjustment of the background temperature in the Sierras (western Colorado) evident in Fig. 3. It is obvious that in most major urban areas, the impacts of observations in the PUBLIC category on the analyses are small (e.g., Los Angeles, California; San Francisco, California; Seattle, Washington; Dallas, Texas; coastal Florida; and Washington, D.C.; see Fig. 4, bottom right). This results from the large number of observations in those areas that reduces the observation sensitivity [Eq. (6)], as well as our assumption of higher observation to background error variance ratios for these stations. However, the limited impact of these observations is as much determined by the weather at this time in those locations, as NWS observations in those locations also tend to have little impact. Since the PUBLIC observations differ from the background substantively in Illinois, Indiana, and Ohio, those observations influence the analysis as much as other stations that we trust more (Fig. 4, bottom right).

Hence, observation impact as defined in this study depends on a complex blend of the assumptions regarding observation and background errors, local observation density, and the local-, regional-, and synoptic-scale weather taking place at any one time. To reduce the dependency on specific weather features at specific times, we now examine the observation impact over a representative sample of 100 analyses.

b. Observation impacts over 100 analysis hours

Although the 100 analyses used in this section are associated with severe, atypical weather in specific locales (Table 2), they reflect a reasonable mix of typical weather conditions when viewed on the CONUS scale. Following the previous case study, the percentile ranks of the observation impacts are computed for each variable and analysis. Then, the median percentile rank over the sample of 100 cases is determined for the observations at a specific station. Those stations that consistently influence the analyses more will have median values in the upper quartile while most stations will have less effect or influence analyses on only a few occasions within this sample.

Figure 5 summarizes the impact of temperature observations from the four network categories previously highlighted in Fig. 4. Not all NWS stations exhibit high impact on temperature analyses. For example, temperature observations from NWS stations near Los Angeles and San Francisco exhibit some of the lowest impacts: there, the cumulative number of observations combined with the generally limited variability in temperature requires less adjustment of the background field and reduces the impact of any one station. On the other hand, NWS observations have large impact in North and South Dakota and many locations in Colorado, as well as offshore areas in the Gulf of Mexico. In addition, many of the stations in NOAA’s Special Operations and Research Division (SORD) network in southern Nevada included in the FED+ category have a large impact on the temperature analyses.

Fig. 5. Median impact percentiles for temperature observations computed over 100 analysis hours for four selected network categories.

When evaluated over the entire 100 analysis sample, RAWS temperature observations frequently have a large impact in rugged areas even though we have assumed those observations to have larger errors. The impacts of temperature observations from the PUBLIC network category are clearly tied to the local density of observations with lower impacts in many metropolitan areas but substantive impacts in many other locales around the country.

NWS dewpoint and wind observations have larger impacts relative to temperature observations across the central swath of the country in Figs. 6 and 7, respectively. This suggests greater sensitivity where dewpoint temperatures and wind speeds vary greatly from day to day and large adjustments to the background fields are common. The impacts of dewpoint temperature and wind speed observations in the FED+ category from the Oklahoma Mesonet and West Texas Mesonet stand out for similar reasons. The impacts of RAWS dewpoint and wind observations tend to reflect representativeness issues, with higher impacts in regions of complex terrain.

Fig. 6. As in Fig. 5, but for dewpoint temperature.

Fig. 7. As in Fig. 5, but for wind speed.

To examine the influence of observations from individual stations from all 10 network categories on the analyses, Figs. 8 and 9 show the impacts of temperature observations in northern Utah and wind observations in southern California, respectively. As with the results shown in Figs. 5–7, these percentile impacts in Figs. 8 and 9 are relative to all CONUS stations. The dependence on local station density is quite apparent, with no one station in any category having a large impact on the temperature analyses in the Salt Lake Valley [near Salt Lake City, Utah (SLC), labeled in the EXT panel]. Not surprisingly, the stations over the Great Salt Lake from the air quality (AQ) and local network categories have a large impact, as they help to take into consideration the unique weather conditions over that water body. Temperature observations from transportation networks exhibit a wide range of sensitivities in this region, with low impact near SLC and high impact from some rail network stations in southeastern Idaho, as well as from road weather stations across the southern portions of the domain.

Fig. 8. As in Fig. 5, but for northern UT for all 10 network categories. The 5-km analysis terrain (m) is shaded, with blue areas denoting water grid points. Location of the aviation routine weather report (METAR) observation at Salt Lake International Airport is marked in the EXT panel.

Fig. 9. As in Fig. 8, but for wind speed observations over southern CA. Locations of the METAR observations at Los Angeles and San Diego International Airports are marked in the FED+ panel.

The impacts of surface wind observations in southern California suggest that the specification of higher observation error variances assumed here for some network categories compared to others is not a controlling factor. For example, most coastal stations have low impact (except for the few land observations located erroneously over water grid points in the analysis), implying that the background fields are not substantively adjusted from either NWS or PUBLIC stations. However, further into the interior of the San Diego and Los Angeles basins, most stations have moderate impact, independent of whether they are NWS, RAWS, AQ, or PUBLIC stations. The analyses in this subdomain are most strongly affected by the relatively few stations in nearly all categories in the high desert regions north of the San Gabriel and San Bernardino Mountains across the center of this subdomain.

The salient results of our study are summarized in Fig. 10, which shows the fraction of stations in each of the 10 network categories with observation impacts in the upper quartile accumulated over all 100 cases. (The percentage of the total station count in each category is used because the total number of stations in each network category varies widely; see Table 1.) If every station had a comparable impact, then all the bars would fall along the thick black line; that is, 25% of the reports from any network would be in the upper quartile. Network categories with fractions greater (less) than 25% have a disproportionately large (small) share of high-impact observations. For reference, the assumed observation to background error variance ratio is indicated in Fig. 10 for each network category and variable as well. If these assumptions completely controlled our results, then it would be expected that networks that are “trusted” less (i.e., error variance ratios of 1.5 or 2.0) would have fewer stations with large impacts than those that serve as the baseline (i.e., error variance ratios of 1.0). However, that is not the case.

Fig. 10. Fraction of reports from each network category with observation impacts in the upper quartile over 100 analysis hours for temperature, dewpoint, and wind speed. The assumed observation to background error variance ratio (see Table 1) is labeled at the top of each bar.
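The statistic behind Fig. 10 is straightforward to express; the sketch below computes it from pooled percentile ranks and per-report category labels (toy data; the names are illustrative assumptions):

```python
import numpy as np

# Sketch of the Fig. 10 statistic: per network category, the fraction of
# reports whose impact percentile falls in the upper quartile (toy data).
rng = np.random.default_rng(1)
ranks = rng.uniform(0.0, 100.0, size=100_000)   # pooled impact percentile ranks
category = rng.integers(0, 10, size=100_000)    # network category of each report
fractions = np.array([(ranks[category == c] > 75.0).mean() for c in range(10)])
# A category above 0.25 has more high-impact reports than an average network.
```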

Consider first the impacts of the network categories shown in Figs. 4–9: NWS, FED+, RAWS, and PUBLIC. As expected, a larger fraction of observations from the NWS and FED+ network categories have substantive impacts on all three variables relative to many other network categories. Figure 10 also reinforces the spatial displays in Figs. 4–9 that a larger (smaller) fraction of RAWS (PUBLIC) stations have large impacts. These results are consistent with the interplay between the ability of the background to resolve the observed weather, the local station density, and representativeness errors as opposed to simply the specification of the observation error variance.

Now consider the cumulative statistics for the other six network categories, beginning with the networks intended primarily for agricultural applications (AG). The AG impacts hover around what might be expected for an “average” station, close to 25%. As mentioned before, AG category wind observations are assigned larger observation errors primarily because common practice for agricultural applications is to use 3-m towers. Air quality networks (AQ) tend to have lower than expected impacts, presumably due to their locations in urban areas where many other stations are available (i.e., similar behavior to PUBLIC stations). They have been assigned intermediate observation errors primarily due to the Environmental Protection Agency standard for reporting observations in terms of time averages as long as 60 min in contrast to shorter averaging periods used for stations in other network categories.

Canadian, Mexican, and offshore observations in the EXT category exhibit the highest fraction of large impact observations for all three variables. The EXT observations do exhibit a tendency to substantively influence the analyses for all three variables outside the CONUS, which results from a combination of their locations in data-sparse areas as well as their tendency to depart substantially from the background fields in those areas. The stations aggregated into the hydrological category (HYDRO) are often located in remote high-elevation sites and report primarily precipitation and temperature (see Table 1). Hence, of most interest is the large impact of the HYDRO temperature observations even though they have been assigned a higher observation to background error variance ratio to compensate for representativeness errors.

The LOCAL network category is the most complex and consists of a diverse mix of networks available to MesoWest (e.g., NWS Weather Forecast Office local networks, as well as commercial and other local networks). We have assigned these to have intermediate error characteristics reflecting differences in siting and reporting standards. The LOCAL and PUBLIC network categories are loosely comparable in their overall characteristics and, hence, it is not too surprising that both the LOCAL and PUBLIC network categories have more low- than high-impact observations. Finally, the transportation network category (TRANS) consists of commercial rail and state Road Weather Information System (RWIS) stations. Overall, the impact of TRANS stations is high, but that is regionally dependent. RWIS and rail stations in the eastern half of the country tend to exhibit behavior common to other stations in urban areas while those in the west tend to be located in a mix of urban and rural locations and have a larger impact on the analyses (not shown).

4. Discussion

Two-dimensional analyses of surface temperature, moisture, and wind on a 5-km grid over the continental United States were computed for a sample of 100 cases using the variational cost function solved in observation space. This study used the 1-h forecast grids from the RUC data assimilation system, downscaled by the National Centers for Environmental Prediction (NCEP) for the RTMA 5-km analyses, as background fields combined with observations from nearly 20 000 locations.

The fundamental results of our study can be summarized by the oft-repeated phrase: location, location, location. Our metric of impact draws attention to the location of the observations in terms of the interplay between the weather conditions observed there, the ability of the background field to diagnose the variability of those conditions, and the proximity of other nearby observations. Of lesser importance are the types of networks from which the observations are obtained and the assumptions made regarding their observational errors.

Solely in terms of their impact on high-resolution analyses, observations in major metropolitan areas tend to have reduced impact simply because there are so many other stations tending to suggest similar adjustments to the background fields. On the other hand, observations in more remote locations tend to have a higher impact. However, the observation impact metric by itself cannot distinguish between observations that have high impact due to gross errors, representativeness errors, or failure of the background fields to diagnose local weather conditions.

Attribution for the sources of discrepancies between observations and background grids is not clear cut and is all too often assumed to depend strongly on the type of station rather than the dominant effect of location and the weather experienced there. For example, the NWS observations of wind speeds at Cape Hatteras, North Carolina (KHSE), are typically several meters per second weaker than the background wind speeds (not shown explicitly but evident in Fig. 7 as one of the high-impact stations immediately offshore of North Carolina). This situation would be considered a representativeness error, since the reduced wind speeds only reflect conditions observed in a narrow 1–3-km strip of land. It is unlikely that any data assimilation system would reject the KHSE observations due to their bias because NWS observations are generally considered to be well maintained and accurate. However, the only other observing site on the cape is located 8 km to the northeast and is a PUBLIC station (Buxton, North Carolina, D6557). Although that station would often be assumed a priori to provide inferior observations, the bias of the observations relative to the background grid and their impacts are comparable to those at KHSE. The consistency in bias and impact between the two observing sites suggests that D6557 can provide valuable information for a variety of applications.

This study has been directed in part toward developing improved automated quality control algorithms for mesonet observations. The manual blacklist used to identify egregiously poor observations within the 100-h sample used in this study is not practical for routine use. The RTMA analyses generated by the National Centers for Environmental Prediction do rely in part on manual blacklists maintained and updated by National Weather Service forecasters around the country (de Pondeca et al. 2011). A problem with manual blacklists is the difficulty they have in determining when observations from a rejected station may no longer be in error. As summarized by Fiebrich et al. (2010), many mesonets and mesonet aggregators (e.g., MesoWest and the Meteorological Assimilation Data Ingest System, MADIS) use automated quality control checks to identify erroneous observations. One common approach is to perform “buddy” checks that compare observations from one location with others nearby.

We are implementing the UU2DVar hourly analyses as part of expanded real-time quality control and impact assessment for observations of temperature, relative humidity, and wind. These UU2DVar analyses are being computed using the Python open source programming language as opposed to the Matlab software package used in this study. We monitor differences between the background fields and observations for each hour of the day and accumulated over several weeks as a bias metric to help assess observation quality. To reduce representativeness errors in the UU2DVar analyses, we have now included the simplified adaptive bias correction scheme described by Dee (2005). Bias correction grids for all parameters are continuously updated separately for each hour of the day, and the bias adaptivity parameter, γ, is set to 0.15 to allow the bias corrections to evolve slowly with time. Impact values computed from the resulting analyses are tied strongly to weather events and nearby station density. We are beginning to routinely accumulate statistics on bias and impact for individual stations and networks as well as the network categories listed in Table 1.
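The text does not spell out the update equation, so the sketch below shows one plausible form of such a slowly evolving, per-hour bias grid: an exponential-smoothing nudge toward the latest background departure with γ = 0.15. Treat the exact form as an assumption rather than the operational scheme.

```python
import numpy as np

# Assumed exponential-smoothing form of an adaptive bias correction in the
# spirit of Dee (2005); the operational update may differ. One bias grid is
# kept per hour of the day and nudged slowly (gamma = 0.15) each cycle.
def update_bias(bias_grid, background_departure, gamma=0.15):
    """bias_grid, background_departure: (G,) grids for one hour of the day."""
    return (1.0 - gamma) * bias_grid + gamma * background_departure
```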

The development of a national network of networks as recommended by the National Academy of Sciences (2009, 2010) should be a high priority. There is considerable potential to develop a cost-effective national network that takes advantage of the surface observations collected by hundreds of agencies, commercial firms, educational institutions, and the public. Understanding the strengths and weaknesses of the existing networks requires improved metadata combined with studies on the relative impacts of those networks.

Acknowledgments

We wish to thank Xia Dong for her participation in the application of the UU2DVar, Zachary Hansen for his assistance with the manual blacklisting of the observations, and Matthew Lammers for his investigations of observation bias and impact. We also appreciate the three anonymous reviewers for their very helpful comments that improved this paper. This research was supported by the National Oceanic and Atmospheric Administration under Grant NA10NWS468005 as part of the CSTAR program.

REFERENCES

• Alexander, C., and Coauthors, 2012: Evaluation of High Resolution Rapid Refresh (HRRR) model changes and forecasts during 2011. Preprints, Third Aviation, Range, and Aerospace Meteorology Special Symp. on Weather-Air Traffic Management Integration, New Orleans, LA, Amer. Meteor. Soc., P545. [Available online at https://ams.confex.com/ams/92Annual/webprogram/Paper200890.html.]

• Baker, N. L., and R. Daley, 2000: Observation and background adjoint sensitivity in the adaptive observation-targeting problem. Quart. J. Roy. Meteor. Soc., 126, 1431–1454.

• Benjamin, S., and Coauthors, 2004: An hourly assimilation–forecast cycle: The RUC. Mon. Wea. Rev., 132, 495–518.

• Benjamin, S., B. D. Jamison, W. R. Moninger, S. R. Sahm, B. E. Schwartz, and T. W. Schlatter, 2010: Relative short-range forecast impact from aircraft, profiler, radiosonde, VAD, GPS-PW, METAR, and mesonet observations via the RUC hourly assimilation cycle. Mon. Wea. Rev., 138, 1319–1343.

• Brown, J., and Coauthors, 2012: Rapid Refresh replaces the Rapid Update Cycle at NCEP. Preprints, 2012 Canadian Meteorological and Oceanographic Society Congress/21st Conf. on Numerical Weather Prediction/25th Conf. on Weather and Forecasting, Montreal, QC, Canada, CMOS and Amer. Meteor. Soc., 3B1.2. [Available online at https://www1.cmos.ca/abstracts/abstract_print_view.asp?absId=5721.]

• Buizza, R., and A. Montani, 1999: Targeting observations using singular vectors. J. Atmos. Sci., 56, 2965–2985.

• Cohn, S., A. da Silva, J. Guo, M. Sienkiewicz, and D. Lamich, 1998: Assessing the effects of data selection with the DAO Physical-space Statistical Analysis System. Mon. Wea. Rev., 126, 2913–2926.

• Dabberdt, W. F., and Coauthors, 2005: Multifunctional mesoscale observing networks. Bull. Amer. Meteor. Soc., 86, 961–982.

• Daley, R., and E. Barker, 2001: NAVDAS: Formulation and diagnostics. Mon. Wea. Rev., 129, 869–883.

• Dee, D. P., 2005: Bias and data assimilation. Quart. J. Roy. Meteor. Soc., 131, 3323–3343.

• de Pondeca, M. S. F. V., and Coauthors, 2011: The real-time mesoscale analysis at NOAA’s National Centers for Environmental Prediction: Current status and development. Wea. Forecasting, 26, 593–612.

• Fiebrich, C. A., C. R. Morgan, A. G. McCombs, P. K. Hall, and R. A. McPherson, 2010: Quality assurance procedures for mesoscale meteorological data. J. Atmos. Oceanic Technol., 27, 1565–1582.

• Glowacki, T. J., Y. Xiao, and P. Steinle, 2012: Mesoscale surface analysis system for the Australian domain: Design issues, development status, and system validation. Wea. Forecasting, 27, 141–157.

• Haiden, T., A. Kann, C. Wittmann, G. Pistotnik, B. Bica, and C. Gruber, 2011: The Integrated Nowcasting through Comprehensive Analysis (INCA) system and its validation over the eastern Alpine region. Wea. Forecasting, 26, 166–183.

• Horel, J., and B. Colman, 2005: Real-time and retrospective mesoscale objective analyses. Bull. Amer. Meteor. Soc., 86, 1477–1480.

• Horel, J., and X. Dong, 2010: An evaluation of the distribution of Remote Automated Weather Stations (RAWS). J. Appl. Meteor. Climatol., 49, 1563–1578.

• Horel, J., and Coauthors, 2002: Mesowest: Cooperative mesonets in the western United States. Bull. Amer. Meteor. Soc., 83, 211–225.

• Langland, R. H., and N. L. Baker, 2004: Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus, 56A, 189–201.

• Langland, R. H., R. Gelaro, G. D. Rohaly, and M. A. Shapiro, 1999: Targeted observations in FASTEX: Adjoint-based targeting procedures and data impact experiments in IOP17 and IOP18. Quart. J. Roy. Meteor. Soc., 125, 3241–3270.

• Lönnberg, P., and A. Hollingsworth, 1986: The statistical structure of short-range forecast errors as determined from radiosonde data. Part II: The covariance of height and wind errors. Tellus, 38A, 137–161.

• Lorenc, A. C., 1986: Analysis methods for numerical weather prediction. Quart. J. Roy. Meteor. Soc., 112, 1177–1194.

• McPherson, R. A., and Coauthors, 2007: Statewide monitoring of the mesoscale environment: A technical update on the Oklahoma Mesonet. J. Atmos. Oceanic Technol., 24, 301–321.

• Myrick, D. T., and J. D. Horel, 2008: Sensitivity of surface analyses over the western United States to RAWS observations. Wea. Forecasting, 23, 145–158.

• National Academy of Sciences, 2009: Observing Weather and Climate from the Ground Up: A Nationwide Network of Networks. National Academy Press, 234 pp.

• National Academy of Sciences, 2010: When Weather Matters: Science and Service to Meet Critical Societal Needs. National Academy Press, 199 pp.

• Palmer, T. N., R. Gelaro, J. Barkmeijer, and R. Buizza, 1998: Singular vectors, metrics, and adaptive observations. J. Atmos. Sci., 55, 633–653.

• Seaman, R. S., and M. F. Hutchinson, 1985: Comparative real data test of some objective analysis methods by withholding observations. Aust. Meteor. Mag., 33, 37–46.

• Steinacker, R., and Coauthors, 2006: A mesoscale data analysis and downscaling method over complex terrain. Mon. Wea. Rev., 134, 2758–2771.

• Tyndall, D. P., 2011: Sensitivity of surface meteorological analyses to observation networks. Ph.D. dissertation, University of Utah, 155 pp. [Available online at http://content.lib.utah.edu/cdm/ref/collection/etd3/id/314.]

• Tyndall, D. P., J. D. Horel, and M. S. F. V. de Pondeca, 2010: Sensitivity of surface air temperature analyses to background and observation errors. Wea. Forecasting, 25, 852–865.

• Uboldi, F., C. Lussana, and M. Salvati, 2008: Three-dimensional spatial interpolation of surface meteorological observations from high-resolution local networks. Meteor. Appl., 15, 331–345.

• Xie, Y., S. Koch, J. McGinley, S. Albers, P. E. Bieringer, M. Wolfson, and M. Chan, 2011: A space–time multiscale analysis system: A sequential variational analysis approach. Mon. Wea. Rev., 139, 1224–1240.

• Zapotocny, T. H., and Coauthors, 2000: A case study of the sensitivity of the Eta Data Assimilation System. Wea. Forecasting, 15, 603–621.

• Zhu, Y., and R. Gelaro, 2008: Observation sensitivity calculations using the adjoint of the Gridpoint Statistical Interpolation (GSI) analysis system. Mon. Wea. Rev., 136, 335–351.