Sensitivity of Surface Analyses over the Western United States to RAWS Observations

David T. Myrick and John D. Horel

Department of Meteorology, University of Utah, Salt Lake City, Utah

Abstract

Federal, state, and other wildland resource management agencies contribute to the collection of weather observations from over 1000 Remote Automated Weather Stations (RAWS) in the western United States. The impact of RAWS observations on surface objective analyses during the 2003/04 winter season was assessed using the Advanced Regional Prediction System (ARPS) Data Assimilation System (ADAS). A set of control analyses was created each day at 0000 and 1200 UTC using the Rapid Update Cycle (RUC) analyses as the background fields and assimilating approximately 3000 surface observations from MesoWest. Another set of analyses was generated by withholding all of the RAWS observations available at each time while 10 additional sets of analyses were created by randomly withholding comparable numbers of observations obtained from all sources.

Random withholding of observations from the analyses provides a baseline estimate of the analysis quality. Relative to this baseline, removing the RAWS observations degrades temperature (wind speed) analyses by an additional 0.5°C (0.9 m s−1) when evaluated in terms of rmse over the entire season. RAWS temperature observations adjust the RUC background the most during the early morning hours and during winter season cold pool events in the western United States while wind speed observations have a greater impact during active weather periods. The average analysis area influenced by at least 1.0°C (2.5°C) by withholding each RAWS observation is on the order of 600 km2 (100 km2). The spatial influence of randomly withheld observations is much less.

Corresponding author address: John D. Horel, Dept. of Meteorology, Rm. 819, University of Utah, 135 South 1460 East, Salt Lake City, UT 84112-0110. Email: jhorel@met.utah.edu

1. Introduction

Surface weather observations are critical for a variety of applications, including the creation of high-resolution objective analyses. Demand for mesoscale surface objective analyses has grown following the introduction of the National Digital Forecast Database (NDFD; Glahn and Ruth 2003). Gridded forecasts at 5-km horizontal resolution from National Weather Service (NWS) forecast offices across the country are combined to create the NDFD. Preparation and verification of such forecast grids would benefit from objective analyses. A prototype analysis [the Real-Time Mesoscale Analysis (RTMA)] is now being generated by the National Centers for Environmental Prediction (M. Pondeca 2006, personal communication). Additional research and development is required to develop “analyses of record” (AORs), that is, the best possible analyses of the atmosphere at high spatial and temporal resolution to specify weather and climate conditions near the surface (Horel and Colman 2005).

One of the largest government-supported observing networks in the western United States is the Remote Automated Weather Stations (RAWS) network. Federal, state, and other wildland resource management agencies have been contributing to the collection of weather observations since the 1970s with over 1000 stations available currently in the West (Zachariassen et al. 2003). To support the operations and decision making required by these resource agencies, RAWS stations tend to be located in fire-prone remote locations, that is, in the foothills and lower slopes of mountain ranges (Fig. 1a). With the exception of the stations located primarily at high elevation that comprise the Natural Resources Conservation Service’s Snowpack Telemetry (SNOTEL) network [information online at http://www.wcc.nrcs.usda.gov/snow/; Serreze et al. (1999)], the majority of mesonet surface-observing sites in the West tend to be clustered near population centers or along transportation corridors (Fig. 1b). Although there are obvious concerns about the instrumentation, siting, and maintenance standards at many stations, mesonet observations help to supplement the network of Automated Surface Observing System (ASOS) stations maintained by the National Weather Service (NWS), the Federal Aviation Administration, and the Department of Defense (Horel et al. 2002).

Observing system simulation experiments (OSSEs) are typically used to examine the influence of observations on forecasts (Arnold and Dey 1986; Atlas 1997). Data-denial or data-withholding experiments are widely used to assess analysis error and the influence of specific sets of observations on data assimilation systems (e.g., Seaman and Hutchinson 1985; Cardinali et al. 2003). An estimate of the analysis error is available for the RTMA by withholding 10% of the observations until the last stages of the iterative variational analysis and comparing the withheld observations to the nearby values of the nearly complete analysis (M. Pondeca 2006, personal communication). Alternatively, estimates of analysis sensitivity determined at every grid point (rather than only near the excluded observations) have been used by Zapotocny et al. (2000, 2002, 2005) to assess the impact of operational datasets on 0-h analyses produced by the Eta Data Assimilation System.

To illustrate the goals of this study, we first present a data-denial experiment for a case during winter characterized by upper-level ridging and cold pools in valleys and basins in the interior regions of the western United States. Surface temperature is often difficult to analyze objectively in such cases. In a study verifying NDFD gridded forecasts, Myrick and Horel (2006, henceforth referred to as MH06) investigated a cold pool event over the Great Basin (14–16 January 2004). MH06 compared NDFD gridded forecasts to surface objective analyses created at the University of Utah using the Advanced Regional Prediction System (ARPS) Data Assimilation System (ADAS). For these cases, MH06 found that NDFD 48-h forecasts valid at these times underforecast the magnitude of the cold pools over northern Utah, southern Idaho, and southwest Wyoming (MH06, their Figs. 6a and 7a).

In this example we focus on portions of Utah, Idaho, and Wyoming where the surface cold pools are evident in the ADAS analysis valid at 0000 UTC 16 January 2004 (Fig. 2a). Hundreds of mesonet observations located in this region (Fig. 2b) were incorporated into this control ADAS analysis (Fig. 2a).1 The difference (Fig. 2c) between the control analysis and the one created without any RAWS observations indicates that higher temperatures (by 1°–2°C) are evident in the control over the mountainous regions surrounding the Snake River plain and Uinta Basin (denoted by S and U, respectively, in Fig. 2b). For comparison, the difference between the control analysis and an analysis obtained after randomly omitting the same number of observations from all sources (including RAWS and SNOTEL in the mountains) is shown in Fig. 2d. The impact of randomly removing some of the observations was small because other stations were available nearby to modify the background field (compare the locations in Figs. 2b and 2d). However, some observations withheld on the slopes or at higher elevations (in the valleys) did contribute to warmer (colder) temperatures in the control analysis.

The goal of this study is to quantify the impact of RAWS observations on surface objective analyses in the western United States during winter. Agencies and firms that invest in the deployment and maintenance of environmental monitoring stations are interested in obtaining feedback on the impact of their observations for their own and other applications. Even though fire danger is low during winter in many areas of the West, assessing the added benefits of the availability of RAWS observations during winter relative to the cost of year-round collection is a relevant issue. In addition, several efforts have been proposed to expand the automated collection of surface observations in the West. For example, the National Oceanic and Atmospheric Administration (NOAA) has considered developing the NOAA Environmental Real-Time Observation Network (NERON) as part of the modernization of the Cooperative Observer Program (COOP) with the goal of siting at least one station every 1000 km2 (Crawford and Essenberg 2006). Further, the National Integrated Drought Information System (NIDIS) has been proposed to enhance atmospheric and hydrologic monitoring in the West (Western Governors’ Association 2004). A prudent step before incurring the costs of installation and maintenance of new or expanded networks is to evaluate the impact of existing ones.

Data-withholding experiments are employed to assess the impact of RAWS observations on ADAS temperature and wind speed objective analyses during winter. The specific questions to be addressed in this study are as follows:

  • What is the impact on the quality of the ADAS surface analyses if RAWS observations are withheld versus withholding observations that are randomly chosen from all available sources?

  • What is the sensitivity of the ADAS analyses at all analysis grid points if RAWS observations are withheld as a function of location, synoptic regime, and time of day?

  • What is the typical spatial influence upon the ADAS analyses of the existing observations available in the western United States?

Details about the ADAS objective analyses, mesonet observations, and experimental design follow in section 2. In section 3, results from the data-denial experiments are shown. A discussion of the results and the conclusions follow in section 4.

2. Experimental data and design

Surface temperature and wind speed observations from 18 November 2003 to 7 March 2004 are used in this study. The observations are provided by MesoWest (Horel et al. 2002), a database maintained at the University of Utah that collects surface weather data from over 150 government agencies and commercial firms across the United States. Approximately 3000 (2300) temperature (wind speed) observations were available on average at each analysis time during the study period, of which approximately 900 were RAWS observations. The 2003/04 winter season was chosen for this project in part because considerable effort had already been spent quality controlling the MesoWest observations for the NDFD verification study described by MH06.

The surface analyses examined in this study are created at the University of Utah using the Advanced Regional Prediction System (ARPS; Xue et al. 2000, 2001, 2003) Data Assimilation System (ADAS). For the objective analysis, ADAS employs the Bratseth method of successive corrections, an inexpensive analysis procedure that has been shown to converge to the same solution as optimal interpolation (Bratseth 1986; Kalnay 2003). The ADAS analyses are created using the same terrain and 5-km grid as defined by the NDFD (Glahn and Ruth 2003). For the background field, operational Rapid Update Cycle (RUC) analyses at 20-km horizontal resolution provided by the National Centers for Environmental Prediction (Benjamin et al. 2004) are downscaled by horizontal bilinear interpolation to the 5-km NDFD grid. The RUC analyses are then adjusted by MesoWest surface observations to obtain the ADAS analyses. More information about the configuration of the University of Utah version of ADAS is provided by Lazarus et al. (2002) and Myrick et al. (2005). For this study, the magnitude of the background and the observation error variances for the ADAS analyses were set to be equal (see MH06 for further details).
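As a concrete illustration of the downscaling step, the sketch below bilinearly interpolates a coarse background field onto a finer grid. It is a minimal Python example, not the operational ADAS code; the grid dimensions, coordinates, and placeholder field values are assumptions made only for illustration.

```python
# Minimal sketch (not the operational ADAS code) of downscaling a coarse
# background field to a finer analysis grid by horizontal bilinear
# interpolation, analogous to mapping a 20-km RUC background onto the
# 5-km NDFD grid. Grid sizes and values below are illustrative only.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical coarse (20-km spacing) background temperature field
x_coarse = np.arange(0.0, 201.0, 20.0)            # 11 columns (km)
y_coarse = np.arange(0.0, 201.0, 20.0)            # 11 rows (km)
t_background = np.random.randn(y_coarse.size, x_coarse.size)  # placeholder values

# Bilinear interpolator defined on the coarse grid
interp = RegularGridInterpolator((y_coarse, x_coarse), t_background, method="linear")

# Target fine (5-km spacing) grid points
x_fine = np.arange(0.0, 201.0, 5.0)
y_fine = np.arange(0.0, 201.0, 5.0)
yy, xx = np.meshgrid(y_fine, x_fine, indexing="ij")
t_downscaled = interp(np.column_stack([yy.ravel(), xx.ravel()])).reshape(yy.shape)
```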

Twelve different ADAS objective analyses were created for each analysis time during the study period: a control analysis that used all available observations, an analysis obtained after withholding all available RAWS observations, and 10 analyses that randomly withhold observations from all sources (including RAWS). Randomly withholding observations from the analysis multiple times provides a baseline to examine the impact of excluding the RAWS observations. The number of randomly withheld observations was determined by the number of available RAWS observations at each analysis time, typically amounting to 900 or about 30% of the total observations available. The decision to withhold the same number of random observations as RAWS was made so that the impact of the RAWS observations could be directly compared to a dataset of the same size. Hence, since we withhold roughly 30% of the observations 10 times, each observation is likely to be withheld roughly 3 times in various combinations with other stations. Analysis times with fewer than 800 RAWS observations were not included, leaving 93 (94) total analyses at 0000 (1200) UTC during the 2003/04 winter season (18 November 2003–7 March 2004).
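The sketch below illustrates one way the withholding design described above could be set up: the RAWS subset defines the size of each of the 10 random withholding sets drawn from all sources. It is a schematic in Python; the station labels, random seed, and variable names are hypothetical and not taken from the actual MesoWest processing.

```python
# Schematic of the data-withholding design: one RAWS-denial set plus ten
# random sets of the same size drawn from ALL sources (including RAWS).
import numpy as np

rng = np.random.default_rng(42)
n_obs = 3000                                  # approximate observations per analysis time
# Placeholder network labels; roughly 30% of the observations are RAWS
network = rng.choice(["RAWS", "OTHER"], size=n_obs, p=[0.3, 0.7])

raws_idx = np.flatnonzero(network == "RAWS")  # withheld in the RAWS-denial analysis
n_withheld = raws_idx.size                    # ~900, sets the size of each random set

# Ten random withholding sets, each the same size as the RAWS set so the
# impacts of the two kinds of denial experiments are directly comparable.
n_random_sets = 10
random_sets = [rng.choice(n_obs, size=n_withheld, replace=False)
               for _ in range(n_random_sets)]
```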

To answer the three questions posed in the introduction, three metrics are calculated: accuracy, sensitivity, and areal influence. The accuracy of the ADAS objective analyses is estimated by calculating the root-mean-square error:
$$\mathrm{rmse} = \left[\frac{1}{N}\sum_{i=1}^{N}\left(a_{i} - o_{i}\right)^{2}\right]^{1/2}, \qquad (1)$$
where oi are the withheld observations, ai are the analysis values at the nearest grid point to the denied observations, and N is the total number of observations withheld. The rmse of the control analyses accumulates the squared departures of the gridpoint values adjacent to the dependent sets of observations. The rmse’s of the analyses when the RAWS or random sets of observations are removed provide an indication of the analysis error based on independent information not used in the analysis. However, since there are roughly 900 RAWS (3000 total) observations available at each time, only 900 (3000) of the 162 502 grid points over land contribute to this evaluation of the analysis error.2

This rmse estimate of analysis accuracy ignores observational error. Equipment and maintenance standards, siting, misreporting of station metadata, and the representativeness of the observations to characterize the conditions on the scale of an analysis grid box (25 km2) are particularly critical for mesonet observations. MH06 attempted to estimate the magnitude of the total observational error in the temperature dataset used in this study and found those errors to be large (order 2°C). Hence, our rmse estimates of analysis error in this study should be viewed as relative, not absolute, measures of analysis accuracy.
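A minimal sketch of the rmse calculation in Eq. (1) is given below: each withheld observation is paired with the analysis value at its nearest grid point. The KD-tree nearest-neighbor lookup is an implementation choice made for this illustration, not necessarily how the matching was done in the study.

```python
# Sketch of Eq. (1): rmse between withheld observations and the analysis
# values at the nearest grid points.
import numpy as np
from scipy.spatial import cKDTree

def rmse_withheld(obs_values, obs_xy, analysis_field, grid_xy):
    """obs_xy and grid_xy are (n, 2) arrays of horizontal coordinates;
    analysis_field is flattened in the same order as grid_xy."""
    tree = cKDTree(grid_xy)
    _, nearest = tree.query(obs_xy)            # index of nearest grid point
    a_i = analysis_field.ravel()[nearest]      # analysis values at those points
    return np.sqrt(np.mean((a_i - obs_values) ** 2))
```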

Following Zapotocny et al. (2000, 2002, 2005), the sensitivity of the ADAS analyses to the set of observations used is assessed by calculating the root-mean-square sensitivity:
$$\mathrm{sensitivity} = \left[\frac{1}{M}\sum_{i=1}^{M}\left(C_{i} - D_{i}\right)^{2}\right]^{1/2}, \qquad (2)$$
where Ci is the control analysis containing all available observations, Di is the analysis withholding a set of observations (i.e., RAWS or random), and M is the total number of analysis grid points. Using Fig. 2c (Fig. 2d) for reference, the sensitivity to excluding RAWS (random) observations is derived from the sum of the squares of the differences at each grid point and then further accumulated over all land points in the analysis domain. As noted by Zapotocny et al. (2002, 2005), the sensitivity parameter [Eq. (2)] is not a measure of analysis accuracy; rather, it is simply a measure of the magnitude of analysis change resulting from withholding data.
We introduce an estimate of the areal extent over which each RAWS observation influences the analysis. We define the “average areal influence” of an observation from a withheld dataset as
$$\mathrm{average\;areal\;influence} = \frac{25\,B}{N}, \qquad (3)$$
where 25 is the area (km2) of each analysis grid box, B is the number of grid points at which the absolute difference between the control analysis and the analysis obtained by excluding observations exceeds a threshold, and N is the number of observations excluded from that analysis. Arbitrary thresholds were used to define modest (1.0°C and 1.0 m s−1) and more substantive (2.5°C and 2.5 m s−1) impacts. This measure helps to account for the nonrandom spatial distribution of the mesonet observations. Even though we randomly select observations each time, the clustering of observing sites evident in Fig. 1 indicates that we are not necessarily removing them equitably in space. This measure weights large sensitivities to isolated observing sites more heavily than similar sensitivities arising from clusters of observations. Returning to Fig. 2c (Fig. 2d), the average areal influence of an excluded RAWS (random) observation using the 2.5°C threshold is the shaded area above that threshold divided by the number of RAWS (random) observations excluded.
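The sketches below show how the sensitivity of Eq. (2) and the average areal influence of Eq. (3) could be computed from a control analysis and a data-denied analysis. Variable names and the use of a land-mask array are illustrative; only the 25 km2 grid-box area and the threshold logic come from the text.

```python
# Sketches of Eqs. (2) and (3) for a single analysis time.
import numpy as np

def rms_sensitivity(control, denied, land_mask):
    """Eq. (2): rms difference between the control and data-denied analyses,
    accumulated over land grid points only."""
    diff = (control - denied)[land_mask]
    return np.sqrt(np.mean(diff ** 2))

def average_areal_influence(control, denied, n_withheld, threshold, box_area_km2=25.0):
    """Eq. (3): area (km2) over which the analysis changes by more than
    `threshold`, divided by the number of withheld observations."""
    b = np.count_nonzero(np.abs(control - denied) > threshold)
    return box_area_km2 * b / n_withheld
```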

The metrics used in this study to examine analysis error, sensitivity, and areal influence are undoubtedly influenced by some of the characteristics of ADAS. As described by Lazarus et al. (2002) and Myrick et al. (2005), the Bratseth method of successive corrections employs two isotropic exponential weighting terms (horizontal and vertical) to define the error covariance of the background field. In relatively flat regions, the error of the background field deduced from the discrepancies between a single observation and the background is spread isotropically away from the observation location over a distance tied to the horizontal background error decorrelation length scale specified in ADAS (note several relatively circular differences between the control and observation-excluded analyses in Figs. 2c and 2d). In the University of Utah version of ADAS, the vertical weighting term sharply reduces the ability of an observation to adjust the background value at grid points located at higher or lower elevations. In addition, Myrick et al. (2005) introduced a third exponential weighting function to the Utah ADAS that limits the influence of an observation innovation to correct the background field across topographic barriers. Thus, the area in mountainous terrain likely to be affected by an isolated observation that deviated substantially from the background is smaller than the area expected if the same innovation was present over an adjacent plain. This approach is intentional since observations in complex terrain may not be representative of the conditions within the immediate grid box (25 km2 area), let alone several grid boxes away.
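To make the weighting discussion concrete, the schematic below combines a horizontal, a vertical, and a terrain-blocking weight multiplicatively for a single observation and grid point. The Gaussian-like functional forms and the length scales are assumptions chosen for illustration only; they are not the settings or exact formulations used in the Utah ADAS.

```python
# Schematic of how an observation's influence on a grid point might be
# weighted by horizontal distance, elevation difference, and an intervening
# terrain barrier, in the spirit of the three exponential weights described
# above. Functional forms and scales are illustrative assumptions.
import numpy as np

def observation_weight(dist_km, dz_m, terrain_barrier_m,
                       r_scale_km=100.0, z_scale_m=500.0, barrier_scale_m=500.0):
    w_horizontal = np.exp(-(dist_km / r_scale_km) ** 2)              # isotropic horizontal decay
    w_vertical = np.exp(-(dz_m / z_scale_m) ** 2)                    # elevation-difference decay
    w_terrain = np.exp(-(terrain_barrier_m / barrier_scale_m) ** 2)  # barrier penalty
    return w_horizontal * w_vertical * w_terrain

# Example: an observation 30 km away, 400 m lower, with a 300-m ridge between
print(observation_weight(30.0, 400.0, 300.0))
```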

3. Results

a. Temperature

The rmse of the ADAS temperature analyses at 0000 UTC during the 2003/04 winter season provides a baseline estimate of the analysis error (Fig. 3). Consider first the sets of 10 rmse values denoted by open circles. Each circle indicates the rmse between a set of observations selected randomly and the adjacent analysis values when those observations are excluded from the analysis. The spread in rmse within each set on a particular day is on the order of 0.25°C while the rmse values range between 2.25° and 3°C, depending on the synoptic situation. As summarized in Table 1, the average rmse over the entire season at 0000 UTC (1200 UTC) is 2.6°C (2.9°C).

Each cross in Fig. 3 denotes the rmse between the same sets of randomly selected observations on each day and the adjacent analysis values from the control run. Since the observations are included in the analysis, the rmse values are lower, generally around 2°–2.25°C. The seasonal averages listed in Table 1 for 0000 and 1200 UTC are also correspondingly lower.

If all of the RAWS observations are omitted from the analysis, then the analysis error ranges from 2.5° to 4.2°C (open squares in Fig. 3). Hence, the ADAS analysis quality is affected strongly by the absence of the RAWS observations. Even when the RAWS observations are included in the analysis (filled squares in Fig. 3), the discrepancy between the analysis values and observations is higher relative to that evident at the randomly selected locations.

The seasonal averages in Table 1 illustrate the overall impact of RAWS observations on the ADAS analyses. Using the random-withholding experiments as a baseline, withholding the RAWS observations degraded the accuracy of the analyses by an additional 0.57°C (0.44°C) or 22% (15%) at 0000 UTC (1200 UTC).
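For reference, these percentages follow from the seasonal baseline rmse values quoted above (2.6°C at 0000 UTC and 2.9°C at 1200 UTC):

$$0.57/2.6 \approx 0.22 \qquad \mathrm{and} \qquad 0.44/2.9 \approx 0.15.$$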

The differences (withheld − control) in rmse between each pair of analyses in Fig. 3 are shown in Fig. 4. At 0000 (1200) UTC, analysis accuracy is improved by 0.3°–0.8°C (0.5°–1°C) if the randomly selected observations are added to the control analysis. However, the analysis accuracy is improved by 0.5°–1.7°C if the RAWS observations are used in the analysis. The largest improvements to analysis accuracy from the assimilation of RAWS observations occurred during three upper-level ridging events over the western United States centered on 19 December 2003, 13 January 2004, and 13 February 2004 that were characterized by persistent cold pools in the valleys of the Great Basin. Hence, the ADAS analyses are degraded if the RAWS observations are not available, especially during cold pool events. Greater sensitivity to RAWS observations is usually evident at 1200 UTC than at 0000 UTC, which is likely tied to the larger differences found between the RUC background and the observations at 1200 UTC (see MH06, their Table 1).

As mentioned in section 2, the rmse measure of analysis quality shown in Figs. 3 and 4 is determined from only a small sample of the analysis grid points. The sensitivity [Eq. (2)] of the ADAS temperature analyses to withholding RAWS or randomly selected observations can be computed at every grid point. We begin by examining the temperature sensitivity for each day averaged over all land grid points when the RAWS observations are excluded (squares in Fig. 5). The largest temperature sensitivities are found during the three upper-level ridging episodes mentioned above. The set of 10 analyses in which observations are randomly withheld are cumulatively compared to the control analysis to obtain single sensitivity values at 0000 UTC (filled circles) and 1200 UTC (open circles) in Fig. 5. Hence, M equals 1 625 020 for each analysis time. The ADAS analyses exhibit greater sensitivity to the RAWS observations than when the observations are excluded randomly.

The spatial distribution of temperature sensitivity averaged over the entire winter season for the case when RAWS observations are excluded (Fig. 6a) is closely tied to the geographic distribution of stations in the network (Fig. 1a). Higher temperature sensitivities from RAWS observations are found in the mountains surrounding the Snake River plain in southern Idaho and along the western slopes of the Sierra Nevada in California (Fig. 6a). The spatial distribution in Fig. 6a is also influenced by the location of the RAWS stations relative to other mesonet stations. For example, the high sensitivity analyzed in north-central Montana in Fig. 6a is caused by a cluster of RAWS stations that are distant from other observations (cf. Figs. 1a and 1b). The temperature sensitivity obtained from the set of 10 analyses in which the observations are randomly withheld (Fig. 6b) is small and reflects to a large extent the locations of all of the available stations (Fig. 1). For example, no observations are available in northeastern Arizona and hence the sensitivity is zero in that region.

As discussed in section 2, we seek to estimate the areal extent over which each RAWS observation influences the analysis. Table 2 summarizes the average areal influence of the RAWS observations using 1.0° and 2.5°C thresholds when accumulated over all grid points and all analyses. RAWS observations are likely to influence the analysis by at least 1°C over a larger area at 1200 UTC (712.5 km2) than at 0000 UTC (570 km2) and over smaller areas for a 2.5°C threshold. Randomly withholding observations has less influence on the analysis. Figure 7 examines the synoptic dependence of the average areal influence of the RAWS observations. RAWS observations have greater influence during the three cold pool events discussed earlier than during other periods. Observations that are randomly withheld rarely influence an area greater than a couple of grid points by more than 2.5°C.

b. Wind speed

Following the same procedures used for temperature, a baseline estimate of the wind speed analysis error was derived from the sets of 10 analyses in which wind speed observations were randomly excluded. The spread in rmse within each set was small (order 0.25 m s−1) while the rmse values ranged between 2 and 4 m s−1, depending on the synoptic situation (not shown). As summarized in Table 3, the average rmse over the entire season at 0000 UTC (1200 UTC) is 2.6 m s−1 (2.8 m s−1). When these observations are assimilated into the control analyses, the rmse at the adjacent grid points decreases by approximately 0.7 m s−1.

Wind speed analysis accuracy is improved at the grid points nearest to the randomly withheld observations by 0.4–1.4 m s−1 during the season (Fig. 8). The tendency for greater improvement at 1200 UTC relative to 0000 UTC for temperature (Figs. 4a and 4b) is not apparent for wind speed (Figs. 8a and 8b). Analysis accuracy is improved by 0.5–2.0 m s−1 if the RAWS observations are added to the analysis. The largest improvements in analysis accuracy are found during synoptically active periods. For example, a large upper-level trough crossed the western third of the United States on 2–3 January 2004. The assimilation of the RAWS data improved the analysis by 2 m s−1 on 2 January 2004 at 0000 UTC (Fig. 8a), when strong prefrontal winds influenced much of the interior western United States. In contrast, the smallest improvements were found during the prolonged cold pool in mid-January 2004 when winds tended to be light and variable. Using the random-withholding experiments as a baseline, withholding the RAWS observations degraded the accuracy of the analyses during the 2003/04 winter season by an additional 0.77 m s−1 (1.03 m s−1) or 30% (37%) at 0000 UTC (1200 UTC) (Table 3).

The day-to-day variations in wind speed accuracy evident in Fig. 8 are also apparent in the spatially averaged sensitivity (Fig. 9). The largest sensitivities occur during synoptically active periods (i.e., 2 January 2004) and the smallest ones are found during the mid-January 2004 cold pool event. The analyses are more sensitive to the RAWS observations than to the randomly selected observations and there is little difference in magnitude between the 0000 and 1200 UTC sensitivities.

The spatial distribution of ADAS wind speed sensitivity (Fig. 10a) closely resembles the geographic distribution of the RAWS stations. For example, a ring of wind speed sensitivities greater than 1.5 m s−1 is located along the slopes of the Bighorn Mountains in north-central Wyoming (Fig. 10a) while sensitivities are analyzed to be less than 0.5 m s−1 at the highest elevations in the Bighorn Mountains and over the flatter terrain away from the range. The spatial distribution of wind speed sensitivity to stations that are randomly withheld (Fig. 10b) has features in common with the sensitivity to excluding RAWS since the number of non-RAWS observations that report wind speed is less than the number that report temperature. Thus, a greater proportion of RAWS observations are included in the random dataset for wind speed (Fig. 10b) than for temperature (Fig. 6b).

Following the procedure outlined in section 2, the average areal influence of RAWS observations has been estimated for wind speed sensitivities of 1.0 and 2.5 m s−1 (Table 4; Fig. 11). On average, a RAWS observation has a 2.5 m s−1 influence on the analysis over a 147.5 km2 (197.5 km2) area at 0000 (1200) UTC while the influence of a randomly withheld wind speed observation is much smaller (Table 4). Strong synoptic dependencies from day to day are clearly evident in Fig. 11.

4. Summary

The stations that compose the RAWS network are of benefit for many operational, educational, and research applications. Because of the immediate goals of the agencies that deploy and maintain RAWS stations, they are usually located in remote locations distant from other surface stations. However, even with over 1000 stations in the western United States, the RAWS network by itself or combined with other automated networks does not adequately sample the near-surface atmospheric conditions in the West (see MH06, their Fig. 2). To the best of our knowledge, no other published study has attempted to objectively quantify the impact of RAWS (or any other observing network) in a systematic fashion for surface analyses or other applications. This study is a first attempt at assessing the impact of an existing network on surface objective analyses and it is intended to serve as a foundation for future studies that may help to assess appropriate network designs for expanded or new networks.

Data-withholding experiments have been used to estimate the impact of RAWS observations on surface objective analyses over the western United States. We addressed three specific questions related to the impact of the RAWS observations on the basis of three metrics: accuracy, sensitivity, and areal influence. As discussed in section 2, the magnitude of the estimates of analysis accuracy depends to some extent on our methodology as well as the characteristics of the ADAS system. Hence, we focus on the relative impact of removing RAWS observations versus removing observations randomly.

The accuracy of ADAS temperature analyses at grid points near RAWS observations was improved by 0.8°–1.0°C during the 2003/04 winter season with even greater improvements documented during persistent cold pool events (Fig. 4; Table 1). ADAS wind speed analyses improved by 1.2–1.4 m s−1 near RAWS observations (Fig. 8; Table 3), with the largest improvements observed during high-impact events such as the passage of a large upper-level trough across the West on 2–3 January 2004 (Fig. 8).

When evaluated at all grid points rather than adjacent to observation locations only, the ADAS analyses showed the greatest sensitivities to RAWS temperature observations during upper-level ridging periods when cold pools persisted in the valleys of the interior portions of the western United States (Fig. 5). In contrast, the largest sensitivities to RAWS wind speed observations occurred during transient passages of strong upper-level troughs (Fig. 9). The greatest impacts from the RAWS observations were observed in areas that contained few observations from other mesonets (cf. Figs. 1, 6 and 10).

A measure of the average areal influence of excluded observations was used to examine the spatial impact of RAWS observations. A RAWS observation, on average, influenced a 570 km2 (712.5 km2) area at 0000 (1200) UTC during the 2003/04 winter season based on a modest sensitivity threshold of 1.0°C with substantially smaller areas affected when the sensitivity threshold is increased (Table 2). Similar areal influences were found for wind speed sensitivity thresholds of 1.0 and 2.5 m s−1.

As mentioned in the introduction, attention needs to be placed on developing strategies to modernize or expand current monitoring capabilities. Our results suggest that the “influence” of a single observation in a remote location is likely less than 1000 km2. Hence, if stations were deployed on a quasi-uniform grid, a finer station spacing may be required in regions of complex terrain. Alternative deployment strategies that sample a smaller number of microclimate regimes in greater detail (e.g., valley–crest–valley transects) rather than sampling nearby locations with similar microclimate regimes may be appropriate. Additional observing system experiments are required to help address such questions. One approach would be to use analysis control runs as “truth” and sample the control analyses either at regular intervals (e.g., every 30 km) or at finer resolution in more limited areas as the “observations” for perturbation experiments.

MH06 and this study should be of interest to forecasters using gridded analyses, such as the RTMA and future AORs, for the development and verification of gridded forecast products. The limitations of existing and future analysis approaches should be recognized. A good analysis requires not only as many good observations as possible but also a good background field. Since the quality of the background field may vary regionally as well as synoptically, observation corrections may be insufficient in some cases to adequately adjust the background. Alternatively, observations may simply not be received in time to be included in the analysis. In addition, all analysis schemes depend on the specification of the observational errors and the specification or prediction of the background errors. The characteristics of those errors influence the resulting analysis and no analysis is likely to match nearby observations exactly on a consistent basis. An additional complicating factor is that the observations may have biases and errors due to instrument design and siting. For example, operational forecasters of the NWS have reported to us that RAWS observations of temperature appear to be higher than expected. Preliminary comparisons of RAWS temperature observations to COOP reports suggest a substantive warm bias in maximum temperature in snow-covered regions (E. Petrescu 2007, personal communication).

Although our estimates of analysis error are relatively high (because of errors in the background grids, the ADAS methodology, and our experimental design), the AORs of the future will have error characteristics that will lead to mismatches between observations and the analysis values. Rather than focusing on the specific magnitude of observations or analysis values (e.g., today’s high or low temperature), forecasters will likely need to examine the observed and analyzed trends as they relate to gridded forecast products.

Acknowledgments

This research was supported by NWS Grant 468004 to the NOAA Cooperative Institute for Regional Prediction at the University of Utah and by NOAA Climate Prediction Program for the Americas Grant GC04-281. This study was made possible in part due to the data made available by the governmental agencies, commercial firms, and educational institutions participating in MesoWest. We wish to acknowledge the Bureau of Land Management for supporting the development and maintenance of software related to MesoWest; however, no funds were directed by that agency to complete this study.

REFERENCES

  • Arnold, C. P., Jr., and C. H. Dey, 1986: Observing-system simulation experiments: Past, present, and future. Bull. Amer. Meteor. Soc., 67, 687–695.

  • Atlas, R., 1997: Atmospheric observations and experiments to assess their usefulness in data assimilation. J. Meteor. Soc. Japan, 75, 111–130.

  • Benjamin, S. G., and Coauthors, 2004: An hourly assimilation–forecast cycle: The RUC. Mon. Wea. Rev., 132, 495–518.

  • Bratseth, A. M., 1986: Statistical interpolation by means of successive corrections. Tellus, 38A, 439–447.

  • Cardinali, C., L. Isaksen, and E. Andersson, 2003: Use and impact of automated aircraft data in a global 4DVAR data assimilation system. Mon. Wea. Rev., 131, 1865–1877.

  • Crawford, K. C., and G. R. Essenberg, 2006: COOP modernization: NOAA’s environmental real-time observation network in New England, the Southeast, and addressing NIDIS in the West. Preprints, 22d Int. Conf. on Interactive Information Processing Systems for Meteorology, Oceanography, and Hydrology, and 10th Symp. on Integrated Observing and Assimilation Systems for the Atmosphere, Oceans, and Land Surface, Atlanta, GA, Amer. Meteor. Soc., J5.9. [Available online at http://ams.confex.com/ams/pdfpapers/102368.pdf.]

  • Glahn, H. R., and D. P. Ruth, 2003: The new digital forecast database of the National Weather Service. Bull. Amer. Meteor. Soc., 84, 195–201.

  • Horel, J. D., and B. R. Colman, 2005: Real-time and retrospective mesoscale objective analyses. Bull. Amer. Meteor. Soc., 86, 1477–1480.

  • Horel, J. D., and Coauthors, 2002: Mesowest: Cooperative mesonets in the western United States. Bull. Amer. Meteor. Soc., 83, 211–226.

  • Kalnay, E., 2003: Atmospheric Modeling, Data Assimilation and Predictability. Cambridge University Press, 341 pp.

  • Lazarus, S. M., C. M. Ciliberti, J. D. Horel, and K. A. Brewster, 2002: Near-real-time applications of a mesoscale analysis system to complex terrain. Wea. Forecasting, 17, 971–1000.

  • Myrick, D. T., and J. D. Horel, 2006: Verification of surface temperature forecasts from the National Digital Forecast Database over the western United States. Wea. Forecasting, 21, 869–892.

  • Myrick, D. T., J. D. Horel, and S. M. Lazarus, 2005: Local adjustment of the background error correlation for surface analyses over complex terrain. Wea. Forecasting, 20, 149–160.

  • Seaman, R. S., and M. F. Hutchinson, 1985: Comparative real data tests of some objective analysis methods by withholding observations. Aust. Meteor. Mag., 33, 37–46.

  • Serreze, M., M. P. Clark, R. L. Armstrong, D. A. McGinnis, and R. S. Pulwarty, 1999: Characteristics of the western United States snowpack from snowpack telemetry (SNOTEL) data. Water Resour. Res., 35, 2145–2160.

  • Western Governors’ Association, cited 2004: Creating a drought early warning system for the 21st century–The National Integrated Drought Information System. [Available online at http://www.westgov.org/wga/publicat/nidis.pdf.]

  • Xue, M., K. K. Droegemeier, and V. Wong, 2000: The Advanced Regional Prediction System (ARPS)—A multiscale nonhydrostatic atmospheric simulation and prediction model. Part I: Model dynamics and verification. Meteor. Atmos. Phys., 75, 161–193.

  • Xue, M., and Coauthors, 2001: The Advanced Regional Prediction System (ARPS)—A multiscale nonhydrostatic atmospheric simulation and prediction tool. Part II: Model physics and applications. Meteor. Atmos. Phys., 76, 143–165.

  • Xue, M., D. Wang, J. Gao, K. Brewster, and K. K. Droegemeier, 2003: The Advanced Regional Prediction System (ARPS), storm-scale numerical weather prediction and data assimilation. Meteor. Atmos. Phys., 82, 139–170.

  • Zachariassen, J., K. F. Zeller, N. Nikolov, and T. McClelland, 2003: A review of the Forest Service Remote Automated Weather Station (RAWS) network. Gen. Tech. Rep. RMRS-GTR-119, Rocky Mountain Research Station, U.S. Forest Service, Fort Collins, CO, 153 pp. [Available online at http://www.fs.fed.us/rm/pubs/rmrs_gtr119.pdf.]

  • Zapotocny, T. H., and Coauthors, 2000: A case study of the sensitivity of the Eta Data Assimilation System. Wea. Forecasting, 15, 603–621.

  • Zapotocny, T. H., W. P. Menzel, J. P. Nelson III, and J. A. Jung, 2002: An impact study of five remotely sensed and five in situ data types in the Eta Data Assimilation System. Wea. Forecasting, 17, 263–285.

  • Zapotocny, T. H., W. P. Menzel, J. A. Jung, and J. P. Nelson III, 2005: A four-season impact study of rawinsonde, GOES, and POES data in the Eta Data Assimilation System. Part II: Contribution of the components. Wea. Forecasting, 20, 178–198.

Fig. 1. Spatial distribution of (a) RAWS and (b) non-RAWS surface observations at 0000 UTC 16 Jan 2004.

Fig. 2. Case study valid at 0000 UTC 16 Jan 2004: (a) ADAS temperature (°C) analysis, (b) terrain elevation (m) and surface observation locations (black diamonds) used in (a), (c) difference (°C) between ADAS analysis in (a) and an ADAS analysis that withholds RAWS observations, and (d) difference (°C) between ADAS analysis in (a) and an ADAS analysis that randomly withholds observations. The Snake River plain and Uinta Basin are denoted by S and U, respectively, in (b). The locations of withheld observations in (c) and (d) are denoted by black diamonds.

Fig. 3. Rmse of 2003/04 winter season 0000 UTC ADAS surface temperature analyses that withhold and assimilate RAWS observations and randomly selected sets of observations.

Fig. 4. Difference in rmse of ADAS temperature analyses that withhold and include RAWS and randomly selected observations at (a) 0000 and (b) 1200 UTC during the 2003/04 winter season.

Fig. 5. Spatially averaged sensitivity (°C) for ADAS surface temperature during the 2003/04 winter season at 0000 and 1200 UTC from RAWS observations and randomly withheld observations.

Fig. 6. Spatial distributions of time-averaged sensitivity (°C) for ADAS surface temperature valid at 0000 UTC during the 2003/04 winter season from (a) RAWS observations and (b) randomly withheld observations.

Fig. 7. Average areal influence (km2) for temperature sensitivity thresholds greater than 1.0° and 2.5°C at 0000 UTC on each day during the 2003/04 winter season. See text for further details.

Fig. 8. Same as in Fig. 4 but for wind speed (m s−1).

Fig. 9. Same as in Fig. 5 but for wind speed (m s−1).

Fig. 10. Same as in Fig. 6 but for wind speed (m s−1).

Fig. 11. Same as in Fig. 7 but for wind speed sensitivity thresholds greater than 1.0 and 2.5 m s−1.

Table 1. Average rmse (°C) of 2003/04 winter season ADAS surface temperature analyses that withhold and assimilate RAWS observations and randomly selected sets of observations.

Table 2. Average areal influence (km2) for temperature sensitivity thresholds greater than 1.0° and 2.5°C during the 2003/04 winter season.

Table 3. Same as in Table 1 but for wind speed (m s−1).
Table 4. Same as in Table 2 but for wind speed sensitivity thresholds greater than 1.0 and 2.5 m s−1.

1. The background supplied by the Rapid Update Cycle analysis was so much colder than that observed in the mountains along the Utah–Idaho border that positive temperature corrections to the background field at a few RAWS and other sites in that region were rejected during the quality control step, leaving a degraded (much too cold) ADAS analysis there in the control and sensitivity experiments. This situation was rare during the season as a whole.

2. Many data-denial studies typically withhold 10% of the observations. Hence, we overestimate analysis error because fewer observations are available to define each analysis. An alternative (but very computationally expensive) data-denial approach is to repeat each hour’s analysis thousands of times, omitting one observation each time. The rmse would then lie somewhere between the conservative approach used here (where 30% of the stations are removed) and the dependent estimate obtained from the control analysis.
