
Quality of Mobile Air Temperature and Atmospheric Pressure Observations from the 2010 Development Test Environment Experiment

  • 1 National Center for Atmospheric Research,* Boulder, Colorado
  • 2 Federal Highway Administration, Washington, D.C.

Abstract

The 2010 Development Test Environment Experiment (DTE10) took place from 28 January to 29 March 2010 in the Detroit, Michigan, metropolitan area for the purposes of collecting and evaluating mobile data from vehicles. To examine the quality of these data, over 239 000 air temperature and atmospheric pressure observations were obtained from nine vehicles and were compared with a weather station set up at the testing site. The observations from the vehicles were first run through the NCAR Vehicle Data Translator (VDT). As part of the VDT, quality-checking (QCh) tests were applied; pass rates from these tests were examined and were stratified by meteorological and nonmeteorological factors. Statistics were then calculated for air temperature and atmospheric pressure in comparison with the weather station, and the effects of different meteorological and nonmeteorological factors on the statistics were examined. Overall, temperature measurements showed consistent agreement with the weather station, and there was little impact from the QCh process or stratifications—a result that demonstrated the feasibility of collecting mobile temperature observations from vehicles. Atmospheric pressure observations were less well matched with the surface validation, with the degree of mismatch varying by vehicle make and model. Therefore, more work must be done to improve the quality of these observations if atmospheric pressure from vehicles is to be useful.

The National Center for Atmospheric Research is sponsored by the National Science Foundation.

Corresponding author address: Amanda R. S. Anderson, National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307. E-mail: aander@ucar.edu


1. Introduction

On average, there are approximately 7100 weather-related vehicle fatalities in the United States each year (Noblis 2010). By contrast, an average of 574 deaths per year results from heat, floods, tornadoes, wind, lightning, winter weather, cold, and hurricanes combined (National Weather Service 2010). The large number of vehicle fatalities involving adverse weather necessitates work on analyzing weather conditions along roadways to provide travelers with the information necessary to reduce the likelihood of a crash. The bulk of this work has, to this point, involved collecting fixed observations along the roadways and developing models for prediction of roadway weather. Using sensor data from vehicles themselves, including passenger vehicles, could revolutionize the delivery of road weather information to transportation decision makers and travelers by moving beyond the current fixed sensors and numerical models to provide real-time, mobile data from thousands of vehicles along the roadway.

There has been considerable investment and research involving observations of roadway weather conditions. In particular, Road Weather Information System Environmental Sensor Stations (RWIS ESS; Manfredi et al. 2005) have been set up in most states to monitor conditions along the roadway, including pavement temperature, and many include cameras that allow users to view weather conditions in near–real time. The Clarus initiative (Pisano et al. 2007) was established to reduce the impacts of adverse weather on surface transportation by assimilating, quality controlling, and disseminating road weather observations from RWIS ESS.

Much of the published research regarding road weather conditions has focused on numerical modeling. Sass (1992) presented a numerical model for predicting road temperature and ice and found that the best results were obtained with a detailed low-level profile of temperature and humidity and careful use of road and atmospheric initial conditions. The Model of the Environment and Temperature of Roads (METRo; Crevier and Delage 2001) included observations along the roadway from RWIS ESS as input. This model produced automated forecasts that “verified” almost as well as manual forecasts. A model was also designed in France to predict the snow layer over road surfaces throughout the country (Bouilloud et al. 2009); it produced satisfactory results using only numerical weather forecasts as input, in an effort to decrease the costs of operating the model on a scale that included not just major highways but all roadways. The Maintenance Decision Support System (MDSS; Pisano et al. 2005) makes use of several models as well as surface observations (including RWIS ESS) to produce forecasts along roadways and maintenance suggestions and scenarios to aid decision makers in snow and ice removal.

Fixed roadway sensors and modeling efforts, such as those listed above, have been extremely valuable in efforts to improve safety, decrease costs, and increase the efficiency of road maintenance as related to adverse weather conditions. These efforts can be advanced even further through a new way of collecting roadway observations: from vehicles. Data management systems such as Clarus could increase their data pool, and modeling applications such as the ones listed above, as well as non-road-specific modeling efforts, could be aided by using observations from vehicles along the roadways. Such data could also aid in the production of real-time diagnoses of weather conditions and hazards along the roadways (Mahoney et al. 2010). The concept of using vehicles to measure weather observations is being explored through a U.S. Department of Transportation (USDOT) initiative aimed at improving transportation safety and mobility and reducing environmental impact through the use of intelligent systems (USDOT 2011). Mahoney et al. (2010) described the role of passenger vehicles in observing and disseminating weather data across wireless networks and the benefits of such data, including the potential of mobile data to provide observations over a much larger scale than is achievable with fixed sensors and to aid in analyzing and predicting weather on both the roadway scale and larger scales. Unlike traditional weather sensors, the observations collected from vehicles are not limited to weather variables (e.g., air temperature and atmospheric pressure) but can also include vehicle-specific observations such as wiper status and vehicle speed, which can be useful in deriving weather and pavement conditions (e.g., Drobot et al. 2010).

One of the requirements for the success of the use of vehicle-based mobile data is that they must be of a good quality. To assess the quality of available vehicle sensor data, the 2010 Development Test Environment Experiment (DTE10) was run to provide a dataset for examination. This paper presents an analysis of two of the vehicle observation types from this experiment for which a surface validation comparison was available: air temperature and atmospheric pressure.

2. The 2010 Development Test Environment Experiment

From 28 January 2010 to 29 March 2010, data were collected over the Development Test Environment (DTE) in Novi, Michigan, a suburb of Detroit, Michigan [for an overview of the DTE, see Andrews and Cops (2009)]. A map of the DTE10 area, along with the locations of all vehicle observations that were collected, is given in Fig. 1. Data were collected from nine vehicles (three Ford Edges and six Jeep Grand Cherokees) through their onboard equipment (OBE; Andrews and Cops 2009). The OBE consists of a computer used within a vehicle to allow communication with the vehicle controller area network (CAN-bus) and logging of the data. The CAN-bus is a network that connects the different modules in a vehicle, similar to a computer network, and various applications within the vehicle use this network, such as the updated On-Board Diagnostics (OBD-II; SAE International 2009; AutoTap 2011). Data logged by the OBE may be obtained either by downloading the data directly from the OBE computer or by transmitting the data via radio or other wireless network. In the DTE, data that are wirelessly transmitted from the vehicles can be received from several roadside equipment (RSE) receivers installed in the test bed. In addition to the air temperature and pressure analyzed in this study, vehicle data obtained from the CAN-bus such as wiper status and traction-control status can be used to determine important road weather conditions such as the presence of precipitation along the roadway or slick pavement conditions. For more information on implementing vehicle data for meteorological purposes, see Mahoney et al. (2010) and Drobot et al. (2010).

Fig. 1.

Map of the DTE10 area. Black lines outline counties, light-gray lines are Interstate Highways, and thick dark-gray circles (so closely spaced as to form lines) represent locations of DTE10 vehicle observations.


For DTE10, the logged observations were collected directly from the OBE rather than wirelessly from RSE receivers in the test area. Data collected from the RSEs were not as complete as those collected directly from the OBE because of outages and scheduled maintenance of the network during testing. Table 1 lists the observations that were available from the vehicles.

Table 1.

List of vehicle observations and corresponding bounds for the anticipated-range test.


In addition to the vehicle data, data from two weather instruments were collected during testing. The first instrument was the Vaisala, Inc., “Surface Patrol HD” sensor (Vaisala 2010a). This unit is a portable sensor that measures temperature, dewpoint, and surface temperature. The Surface Patrol HD was mounted to the left-front quarter panel of each vehicle and provided observations collocated in time and space and logged by each vehicle’s OBE. The second instrument was the Vaisala “WXT520,” a lightweight weather station capable of measuring wind speed and direction, liquid precipitation, atmospheric pressure, air temperature, and relative humidity (Vaisala 2010b). The WXT520 was set up near the test facility in Livonia, Michigan, to provide more representative observations than were possible with the Automated Surface Observing System (ASOS) located at the Detroit Metropolitan Wayne County Airport (KDTW), which is approximately 45 km from the DTE.

Vehicles were driven on predetermined routes in the DTE for each testing day, with observations from the vehicles, Surface Patrol HDs, and WXT520 being collected. Because of occasional maintenance and equipment problems, not all nine vehicles were available on all testing days, and the WXT520 was not collecting data for the final two test hours of 1 February or for all of 22 February. Effort was made to test in a variety of conditions, including cold temperatures, clear conditions, heavy-snow events, rain events, congestion, and rural routes. In the mornings before testing, at the noon lunch break, and at the end of the testing day, data were collected from the vehicles through the OBE. The morning collection was done to test whether the vehicles were recording data properly, and the noon and end-of-day logs contained testing data. The raw logs were parsed into comma-delimited files to put the data into an easy-to-read format. The WXT520 observations were recorded in American Standard Code for Information Interchange (ASCII) format on a computer connected to the sensor. A summary of DTE10 testing is found in Table 2. Note that the WXT520 was not available on 22 February, and testing on 25 March was conducted off road with only four vehicles; therefore, these dates were excluded from the analysis in this paper.

Table 2.

Overview of testing days of the 2010 Development Test Environment Experiment.


3. The Vehicle Data Translator and quality-checking tests

When collected in an operational, real-time environment, vehicle observations are envisioned to run through the Vehicle Data Translator (VDT) and be assigned quality-checking (QCh) flags. The DTE10 dataset was processed with the VDT and QCh tests to allow the analysis of data quality to be as representative of a real-time system as possible. An overview of the VDT and the QCh tests used in the data-quality analysis is given in this section.

a. The Vehicle Data Translator

The VDT (Drobot et al. 2009b) is a modular framework designed to ingest observations from vehicles, combine them with ancillary data such as radar reflectivity, flag the quality of the vehicle observations, and output the resulting data. Additional modules in the system compute statistics across road segments (e.g., mean temperature) and assessments of weather conditions on the corresponding road segment or grid point (e.g., slick pavement). In an operational setting, millions of vehicles will act as probes by rapidly sending observations to be processed by the system.

The VDT first ingests and parses the observations received from the vehicles through the mobile data parser and sorter module, where they are assigned to user-defined segments of roadway [default of 1-mi (~1.6 km) segment length over a 5-min period], and performs QCh on these observations within the QCh module (see section 3b for a description of the QCh tests). Ancillary data, such as observations from weather stations that are fixed in location, are ingested by the “ancillary input data converters and ingesters” module and aid in performing the QCh. The statistics module provides statistical output for the user-defined road segments, such as maximum, minimum, and mean air temperature on the segment. The vehicle and ancillary data are then used to perform an assessment of weather conditions along either the road segment or at a grid point in the inference module. These include precipitation, pavement condition (e.g., wet or snow covered), and visibility. The output from these modules is provided to the user through the output data handler. Both vehicle point data with QCh flags and the road-segment statistics and weather assessments are output. Gridded data such as radar reflectivity or gridded weather assessments are also available.
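
To make the road-segment statistics concrete, the following is a minimal Python sketch (not the VDT code itself) of how observations might be binned by segment and summarized; the dictionary-based record format and field names are assumptions for illustration.

```python
# Minimal sketch (not the VDT implementation) of per-segment statistics.
# Each observation is assumed to be a dict with "segment_id" and "air_temp_c".
from collections import defaultdict
from statistics import mean

def segment_statistics(observations):
    """Group observations by road segment and return min/mean/max air temperature."""
    by_segment = defaultdict(list)
    for obs in observations:
        by_segment[obs["segment_id"]].append(obs["air_temp_c"])
    return {seg: {"min": min(vals), "mean": mean(vals), "max": max(vals)}
            for seg, vals in by_segment.items()}

# Example: two observations on segment 12 and one on segment 13.
print(segment_statistics([
    {"segment_id": 12, "air_temp_c": -1.0},
    {"segment_id": 12, "air_temp_c": 0.0},
    {"segment_id": 13, "air_temp_c": 2.0},
]))
```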

The QCh tests included for the analysis in this paper are presented in section 3b. Vehicle point data with associated QCh flags were used for the analysis of quality in sections 4 and 5. An option was added to the VDT to ingest the WXT520 data and objectively assign the WXT520 observation closest in time (with a maximum difference of 5 min allowed) to the corresponding vehicle observation.
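
The time matching can be illustrated with a short sketch, assuming each vehicle and WXT520 record carries a Python datetime in a "time" field (an assumption of the example, not the actual file format).

```python
# Minimal sketch of the nearest-in-time matching described above: the WXT520
# record closest in time is paired with the vehicle observation unless the
# separation exceeds 5 min, in which case no match is assigned.
from datetime import timedelta

MAX_GAP = timedelta(minutes=5)

def match_wxt520(vehicle_obs, wxt_records):
    """Return the nearest-in-time WXT520 record, or None if none is within 5 min."""
    if not wxt_records:
        return None
    nearest = min(wxt_records, key=lambda w: abs(w["time"] - vehicle_obs["time"]))
    return nearest if abs(nearest["time"] - vehicle_obs["time"]) <= MAX_GAP else None
```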

b. Quality-checking tests

Within the VDT, three QCh tests were performed for this study: the anticipated-range test (ART), the neighboring-vehicle test (NVT), and the neighboring-station test (NST). The ART looks for observations that fall outside the known sensor range according to hardware specifications. This test is useful in identifying observations that are likely not possible on the given sensor, particularly if the sensor uses an unusual value for identifying missing observations (e.g., 167.3 instead of −999). The current bounds for the ART are given in Table 1.
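
In code, the ART reduces to a simple bounds check; the sketch below assumes the Table 1 bounds are available as a lookup table, and the example bounds shown are placeholders rather than the published values.

```python
# Minimal sketch of the anticipated-range test. The bounds below are
# placeholders; the operational values are those listed in Table 1.
ART_BOUNDS = {
    "air_temp_c": (-40.0, 60.0),      # placeholder sensor range
    "pressure_hpa": (580.0, 1090.0),  # placeholder sensor range
}

def anticipated_range_test(variable, value):
    """Return True if the value falls within the sensor's anticipated range."""
    lower, upper = ART_BOUNDS[variable]
    return lower <= value <= upper
```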

The NVT compares the given vehicle observation with those of neighboring vehicles on the road segment. To be specific, the standard deviation and the mean of the observations along a 1.6-km (1-mi) road segment during a 5-min snapshot are taken, and then each observation is checked to ensure that it falls within a standard deviation, multiplied by a constant, of the mean of the road segment. The VDT currently uses a constant of 2.5, meaning the observations are checked to see whether they fall within 2.5 standard deviations of the mean of the road segment. This value was chosen on the basis of sensitivity tests performed with DTE proof-of-concept (PoC) data (Kandarpa et al. 2009). The NVT was run using the PoC data and various standard deviation values, and from the results it was determined that 2.0 was too strict a threshold while larger values were too lenient. Other possible thresholds were further examined using the DTE10 data, and, although the exact details of this sensitivity test are outside the scope of this paper, the particular bounds were not found to have a large effect on pass rates. With the DTE10 data, this test is much less discriminating than in larger datasets because only nine vehicles, or sometimes fewer, were present in the entire testing area. There is currently no minimum number of observations per road segment required to determine whether the standard deviation is meaningful enough for a QCh test. Such a threshold will be an added requirement in the future.
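
A minimal sketch of the NVT logic, under the assumption that all observations for a given segment and 5-min snapshot are already collected into a list, is given below; handling of sparsely populated segments is left open, as discussed above.

```python
# Minimal sketch of the neighboring-vehicle test: an observation passes if it
# lies within 2.5 standard deviations of the segment/snapshot mean.
from statistics import mean, pstdev

NVT_MULTIPLIER = 2.5

def neighboring_vehicle_test(value, segment_values):
    """segment_values: all observations on the segment during the snapshot,
    including the value being tested."""
    if len(segment_values) < 2:
        return True  # too few neighbors for the test to discriminate
    mu, sigma = mean(segment_values), pstdev(segment_values)
    if sigma == 0.0:
        return value == mu
    return abs(value - mu) <= NVT_MULTIPLIER * sigma
```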

The NST compares the data with those of the closest surface ASOS station in space and time. The nearest stations are defined as being within a 50-km radius and 5-min observation time from the data point. If more than one ASOS station meets these criteria, a mean of those observations is taken. A temperature observation passes if it is within 2°C of the ASOS station observation (although other thresholds were assessed as described in the previous paragraph). It is important to note that in many cases the temperature measured by a vehicle may be (correctly) more than 2°C different from the ASOS observations, particularly if the observations are far apart in space or time. The original 2°C test, used in this study, was deemed acceptable for the small areas and time periods of DTE10 tests. In a nationwide implementation, however, this value may not be the final QCh threshold, and the impact in terms of final confidence may be less fixed. In fact, the next major upgrade of the VDT will use an interquartile range of several stations as well as varying degrees of confidence to allow for acceptable variations in the temperature measured by vehicles (Drobot et al. 2011).
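
For temperature, the NST logic can be sketched as follows; the record format (precomputed distance and time separations) is an assumption of the example, and a haversine or similar routine would supply the distances in practice.

```python
# Minimal sketch of the neighboring-station test for temperature: average all
# ASOS reports within 50 km and 5 min of the vehicle observation and require
# agreement within 2 deg C of that mean.
from statistics import mean

MAX_DIST_KM = 50.0
MAX_DT_MIN = 5.0
TEMP_THRESHOLD_C = 2.0

def neighboring_station_test(vehicle_temp_c, asos_reports):
    """asos_reports: dicts with 'temp_c', 'dist_km', and 'dt_min' giving each
    report's separation from the vehicle observation (illustrative field names).
    Returns True/False, or None if no station qualifies."""
    nearby = [r["temp_c"] for r in asos_reports
              if r["dist_km"] <= MAX_DIST_KM and abs(r["dt_min"]) <= MAX_DT_MIN]
    if not nearby:
        return None
    return abs(vehicle_temp_c - mean(nearby)) <= TEMP_THRESHOLD_C
```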

For this analysis, the NST was not performed on pressure data with the VDT. Vehicle-collected pressure is a station pressure, whereas the ASOS reports ingested by the VDT have pressure reduced to mean sea level (in the Detroit area, this tends to be about a 30-hPa difference in average conditions). This issue will be addressed in the future, but for this study the NST was performed manually using a 10-hPa threshold and the WXT520 sensor set up at the test site, which recorded station pressure. The vehicle pressures were not reduced to mean sea level because of incomplete knowledge of the elevation of the vehicles. For the DTE10 data, the nearest ASOS station most often was KDTW. Although the WXT520 was available as a closer comparison during the DTE10 testing, it was decided to retain the original ASOS-based QCh for the temperature test because a future nationwide, real-time implementation of the VDT (a major goal of VDT development) would not have the benefit of such a sensor. Using the ASOS better allows the QCh analysis in this report to represent a future implementation of the VDT.
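
For reference, the roughly 30-hPa offset mentioned above is consistent with the standard hypsometric reduction from station pressure to mean sea level pressure; the station elevation used in the numerical example below (about 250 m, broadly representative of the Detroit suburbs) is an assumption for illustration, not a value measured in the experiment.

```latex
% Hypsometric reduction of station pressure p_stn at elevation z to sea level,
% with R_d = 287 J kg^{-1} K^{-1} and layer-mean virtual temperature \bar{T}_v:
p_{\mathrm{MSL}} \;\approx\; p_{\mathrm{stn}}\exp\!\left(\frac{g\,z}{R_d\,\bar{T}_v}\right)
\quad\Longrightarrow\quad
\Delta p \;\approx\; \frac{p_{\mathrm{stn}}\,g\,z}{R_d\,\bar{T}_v}
\;\approx\; \frac{(1000\ \mathrm{hPa})(9.81\ \mathrm{m\,s^{-2}})(250\ \mathrm{m})}
{(287\ \mathrm{J\,kg^{-1}\,K^{-1}})(275\ \mathrm{K})}
\;\approx\; 31\ \mathrm{hPa}.
```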

After each observation is run through these three tests, it is given one final QCh flag, which is termed the combined-algorithm test (CAT). One of three confidence levels was assigned as follows: no confidence if the observation failed all three QCh tests; low confidence if the observation passed the ART but failed the NVT, NST, or both; or high confidence if the observation passed all three QCh tests. Note that the CAT does not dispose of any observations but merely flags them. This allows the quality of all observations run through the VDT to be examined, keeping in mind their assigned confidence.
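
The CAT flag assignment can be summarized in a few lines; combinations not spelled out in the text (e.g., an observation that fails the ART but passes the NVT) are treated as "no confidence" here, which is an assumption of the sketch.

```python
# Minimal sketch of the combined-algorithm test (CAT) confidence flag.
def combined_algorithm_test(passed_art, passed_nvt, passed_nst):
    if passed_art and passed_nvt and passed_nst:
        return "high"   # passed all three QCh tests
    if passed_art:
        return "low"    # passed the ART but failed the NVT, NST, or both
    # Failing all three tests yields no confidence; combinations in which the
    # ART fails but another test passes are not described in the text and are
    # treated as "none" here by assumption.
    return "none"
```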

4. Analysis of quality-checking tests

For the QCh currently implemented in the VDT, only temperature and pressure are evaluated beyond their sensor range, and only these two variables are analyzed in this section. ART pass rates for other vehicle observations are found in Table 1. All observations that did not pass the ART were missing values as reported in the OBE logs. Brake boost and wiper status, specifically, were not collected from the three Ford Edge vehicles, which accounted for roughly one-third of total observations.

Figure 2 gives the percentage of observations passing each QCh test for air temperature and pressure. All observations (100%) passed the ART. With the current setup of the CAT, this resulted in 100% of observations having at least a low confidence. Nearly all temperature and pressure observations also passed the NVT (99.8% each). The NST was more discriminating and had the largest effect on which observations were given a high confidence. Most temperature observations passed the NST (91.7%), while only about one-third of pressure observations passed (31.5%). Pressure was reported by the vehicles at a coarse 10-hPa resolution, which contributed to the low pass rates. As further elaborated in section 5, however, many pressure observations were farther from the WXT520 observations than a 10-hPa resolution would fully account for. There is not enough controlled information in this dataset to separate out whether the poor pass rates are related to collecting pressure on a mobile platform, are a result of poor sensor quality, are related to the procedure used to derive atmospheric pressure from the vehicles’ manifold absolute pressure and mass airflow systems, or stem from some combination of these factors. It is also important to point out the meteorological implications of the coarse resolution, namely that the data are difficult if not impossible to use in a meteorological context. With a 10-hPa resolution, it would be unlikely that the vehicles could be used to sense any pressure change within an entire day, limiting the utility of pressure for diagnosing weather conditions along the roadway (Drobot et al. 2009a). For this reason, a detailed analysis of the quality of pressure observations will not be presented, although, as mentioned, a few key quality issues were discovered beyond the resolution and are included in this paper along with overall statistics.

Fig. 2.

Percentage of air temperature (dark gray) and atmospheric pressure (light gray) observations passing each of the three QCh tests as well as the percentage of observations assigned a minimum of low and high confidence.


The percentage of temperature and pressure observations given a high confidence was 91.5% and 31.5%, respectively. Because 100% of observations passed the ART and were given at least low confidence, the only stratifications examined were for the NVT, NST, and CAT high confidence. The NVT pass rates remained high (>99%) for all stratifications, which caused the NST and CAT high confidence to mirror each other, and therefore only the results for the NST will be discussed here.

To determine whether pass rates were affected by meteorological conditions, the results of the QCh tests were stratified by temperature, wind direction and speed, and precipitation condition. Temperature and wind observations were supplied by the WXT520 sensor. Precipitation condition was inferred from vehicle wiper status. Wiper status was used because the WXT520 does not measure frozen precipitation, and the ASOS station was deemed too far away to be representative of the DTE area precipitation.
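
The stratified pass rates discussed below amount to simple grouped averages; a minimal pandas sketch is given here, with column names that are illustrative rather than taken from the DTE10 files.

```python
# Minimal sketch of stratifying NST pass rates, assuming a DataFrame with one
# row per observation and columns "passed_nst" (bool), "wind_speed_ms" (WXT520
# wind speed), and "wiper_category" ("off", "intermittent", or "steady").
import pandas as pd

def nst_pass_rate_by(df: pd.DataFrame, column: str, bins=None) -> pd.Series:
    """Percentage of observations passing the NST within each category or bin."""
    key = pd.cut(df[column], bins=bins) if bins is not None else df[column]
    return df.groupby(key)["passed_nst"].mean() * 100.0

# Example usage (bin edges are illustrative):
# nst_pass_rate_by(df, "wind_speed_ms", bins=[0, 2, 4, 6, 8, 12])
# nst_pass_rate_by(df, "wiper_category")
```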

Pass rates did not seem to be affected by ambient temperature or wind direction for any of the tests. In Fig. 3, pass rates increased slightly with higher wind speeds; the increase was about 10% for temperature. It is possible that this reflects better mixing around the WXT520 sensor at higher wind speeds rather than an effect on the vehicles themselves, because vehicle speed did not affect the pass rates. For precipitation, overall, there did not appear to be much of a relationship with pass rates (Fig. 4), but the pass rate for pressure did drop dramatically (from 43.5% to 6.0%) for the “steady” category. This could be due to sample-size issues (there were only 1326 observations in the steady category, as compared with 118 067 and 39 618 for the “off” and “intermittent” categories, respectively), or it could be an effect of precipitation on vehicle atmospheric pressure observations. Overall, it appears that meteorological conditions do not have much effect on pass rates, although faster wind speeds improved pass rates slightly, and it is possible that pressure pass rates are lower with heavier precipitation. The latter effect could impact the usefulness of vehicle pressure observations in precipitating environments (given an improved reporting resolution) if it is consistent among future datasets. As work continues, possible causes for these findings will be examined, particularly regarding the quality of atmospheric pressure data in heavy precipitation.

Fig. 3.

Percentage of air temperature (dark gray) and atmospheric pressure (light gray) observations passing the NST stratified by ambient wind speed as measured by the WXT520.


Fig. 4.

As in Fig. 3, but stratified by precipitation condition, which is inferred from windshield-wiper status.


Nonmeteorological factors were also considered and included the following: day, time of day, vehicle speed, and vehicle. Pass rates differed by day, but there was no exact pattern. For time of day, there was a slight downward trend in pass rates from morning to evening, but this relationship was fairly weak and inconsistent. There was also very little difference in pass rates for differing vehicle speeds. Overall, these additional nonmeteorological factors appeared to have little impact on QCh pass rates.

The largest differences in pass rates occurred when the data were stratified by vehicle (Fig. 5). Temperature pass rates were mostly comparable among the vehicles, although the Fords had slightly lower mean NST pass rates than the Jeeps (86.0% as compared with 94.8%). Differences in temperature readings between vehicles can arise for a variety of reasons, including the accuracy of the sensors themselves or their placement on the vehicles (Stern et al. 2007). In the DTE10 case, the sensors for both the Fords and the Jeeps were placed in the front grill. The make and model of the sensors could not be determined, however; manufacturers keep an approved component list and buy the lowest-priced sensor available at the time, which can result in the same make and model of vehicle having slightly different components depending on when it was assembled during the manufacturing year. Other factors include the airflow mixing rate or software issues on the vehicle. Pressure pass rates varied widely even within the same make/model.

Fig. 5.

As in Fig. 3, but stratified by vehicle. Ford Edges are “e” vehicles, and Jeep Grand Cherokees are “p” vehicles.


5. Analysis of data quality

Both low- and high-confidence air temperature and atmospheric pressure observations were analyzed to examine the accuracy and bias of the quality-checked vehicle observations from DTE10, using the WXT520 as validation. The statistics used were bias, mean absolute error (MAE), and correlation. The bias indicates how far above or below an observation lies relative to a comparison observation; the MAE shows how close the measurement of a variable is to its comparison observation; and the correlation quantifies the linear relationship between the two variables.
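
Computed over paired vehicle and WXT520 values, these three statistics can be written compactly; the following NumPy sketch uses illustrative array names and is not the code used to produce the tables and figures.

```python
# Minimal sketch of the verification statistics used here, computed between
# paired (time-matched) vehicle and WXT520 values.
import numpy as np

def verification_stats(vehicle, reference):
    vehicle = np.asarray(vehicle, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = vehicle - reference
    return {
        "bias": diff.mean(),                            # mean (vehicle - reference)
        "mae": np.abs(diff).mean(),                     # mean absolute error
        "corr": np.corrcoef(vehicle, reference)[0, 1],  # Pearson correlation
    }
```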

For a first step, the Student’s t test for paired observations was performed. The p value was less than 0.01 for both high-confidence temperature and pressure, meaning the vehicle and WXT520 datasets are statistically significantly different. The actual difference in the means was small (−0.21°C and −4.33 hPa), however, and the DTE10 variance was within 2°C² and 2 hPa² of the WXT520 variance. In addition, the difference in medians was relatively small (−1.2°C and −7.5 hPa), particularly in the context of the vehicles’ reporting resolutions of 1°C and 10 hPa. This leads to the conclusion that this difference, although statistically significant, is not particularly physically meaningful and is likely due to the large sample size.
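
The paired test itself is available in SciPy; a minimal sketch, again with illustrative array names, is shown below.

```python
# Minimal sketch of the paired Student's t test on time-matched samples.
from scipy import stats

def paired_t_test(vehicle, reference):
    t_stat, p_value = stats.ttest_rel(vehicle, reference)
    return t_stat, p_value  # p < 0.01 would indicate a significant mean difference
```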

The high-confidence quality-checked vehicle observations compared favorably with the WXT520 (Fig. 6; Table 3). This is to be expected for pressure, because the WXT520 was used for the NST, but for temperature the NST was run using the KDTW ASOS, and the overwhelming majority of observations were given high confidence. In addition, the low-confidence temperature statistics (which, because all observations passed the ART, include all DTE10 observations) also showed a relatively close relationship between vehicle and ground-sensor observations. As with the QCh analysis, post-QCh observations of temperature and pressure were stratified to determine any possible effects of meteorological and nonmeteorological factors on the results. These factors were the same as those used for the QCh results in section 4. This section primarily discusses the high-confidence air temperature results, although important information related to the low-confidence statistics and pressure is presented as well.

Fig. 6.

Overall statistics for low-confidence (dark gray) and high-confidence (light gray) (left) air temperature and (right) atmospheric pressure observations in comparison with the WXT520.


Table 3.

Overall statistics for vehicle-measured temperature and pressure compared with the WXT520 observations.


The air temperature showed very little dependence on meteorological factors, either having consistent statistics across all categories of a stratification or showing no trend in the variation between categories. There were a few instances of slight trends in some of the statistics, however. When stratified by ambient air temperature, vehicle air temperature observations showed a bias that tended to be slightly positive below 0°C and slightly negative above 0°C (Fig. 7). MAE decreased slightly with increasing temperature, and correlation increased with temperature. For wind direction, temperature bias tended to be more positive with a southerly wind and more negative with a northerly wind (Fig. 8). This bias pattern could be due to the less-than-ideal location of the WXT520 in an urban environment, with surrounding trees and buildings affecting temperature readings depending on the wind direction. When stratified by wind speed, temperature bias was slightly negative for lower wind speeds and slightly positive for higher speeds, but these differences were less than 0.5°C. Using wiper status to infer precipitation condition, temperature MAE improved with increasing wiper rate, and pressure correlation showed a slight downward trend. In all cases, the variations and relationships in the statistics do not appear to be the result of a significant meteorological impact on the quality of the mobile data.

Fig. 7.

Vehicle temperature bias for low confidence (dark gray) and high confidence (light gray) in comparison with WXT520 temperature, stratified by ambient air temperature.


Fig. 8.

As in Fig. 7, but stratified by ambient wind direction.


When considering nonmeteorological factors, there again appeared to be very little dependence apart from the stratification by vehicle. Only two instances of any relationship were seen when considering date and time of day. For date, temperature bias moved from positive to negative values as the days progressed (Fig. 9). A similar relationship was seen when moving from cooler to warmer temperatures (Fig. 7). This bias pattern with date may be due to the progression from winter to spring. By time of day, temperature bias started much more negative and moved toward positive values through the day; the actual difference between the biases at various hours was less than 1°C, however, which is the resolution of the sensor. Vehicle speed appeared to have little if any impact on the statistics. When the statistics were stratified by vehicle, though (Fig. 10 for bias), large differences among the vehicles were seen, particularly in bias. The temperature bias varied not only between makes/models of vehicle (Ford Edge vs Jeep Grand Cherokee) but also between vehicles of the same make/model. The MAE showed less variation, with the Fords having a slightly higher MAE than the Jeeps. Correlation was high for all vehicles. For pressure, the Fords had very large negative biases (Fig. 10) and MAE values in comparison with the Jeeps. In particular, e2 and e3 had biases of −39 and −62 hPa, respectively, for low confidence, clearly showing why no pressure observations from these vehicles passed with a high confidence. Statistics for e4 were more in line with the Jeeps, although that vehicle had a more negative bias, higher MAE, and lower correlation than the Jeeps. The statistics varied among the Jeeps as well, but not significantly. The differences in pressure statistics were mostly seen with low-confidence observations, whereas high-confidence pressure observation statistics varied little.

Fig. 9.

As in Fig. 7, but stratified by date.


Fig. 10.

Vehicle (top) temperature and (bottom) pressure low-confidence (dark gray) and high-confidence (light gray) bias in comparison with WXT520 temperature, stratified by vehicle. Ford Edges are “e” vehicles, and Jeep Grand Cherokees are “p” vehicles.


6. Summary and conclusions

Overall, the vehicle temperature measurements showed reasonable agreement with the WXT520. The low- and high-confidence results for temperature did not differ significantly from each other, indicating that the QCh process did not have a large effect on this conclusion. The temperature statistics were also not significantly affected by different categories of meteorological and nonmeteorological factors. There were some differences and trends, but the magnitudes of these were small (within about 1°C) and would likely have little impact in an operational environment. These results agree with Chapman et al. (2010), who found that temperature observations from vehicles in the 2009 DTE Experiment (DTE09) were in agreement with the closest ASOS station (KDTW).

The pressure comparisons were not as favorable. Low- and high-confidence results differed greatly, demonstrating that the QCh process would have a large impact on this dataset. As with temperature, there were some impacts of meteorological and nonmeteorological factors on the statistics, but none had a particularly obvious effect apart from vehicle. This again was in agreement with Chapman et al. (2010), although by also examining the data that were not quality checked, the present study showed a much larger discrepancy between the vehicle-observed atmospheric pressure and that of the nearest weather station. The largest differences in pressure lay between makes/models of vehicle, emphasizing the need to test with as many makes and models as possible. Part of the issue with pressure measurements could be due to the coarse 10-hPa resolution. Until a finer reporting resolution is achieved, the extent to which this affects the statistics remains unclear; regardless of their statistical quality, however, pressure observations at this resolution are of little use from a meteorological perspective.

These results support the feasibility of collecting air temperature observations from a mobile platform—in particular, from ordinary passenger vehicles. It is important to keep in mind that part of the usefulness of such data lies in their ability to sample variability at high resolution in both time and space. At first glance, the close match with the WXT520 measurements may seem to indicate that existing platforms are adequate. The WXT520, however, was purposefully sited nearer the routes driven by the vehicles than any existing ASOS station (no more than 20 km away, and usually much closer), and the statistics were calculated over all routes and times, which tends to reduce the impact of the small-scale fluctuations that would be sampled by the vehicles. Given the quality of the air temperature data collected during this experiment, these data should prove useful for a variety of applications, both related specifically to road weather and outside this scope, such as model data assimilation.

Atmospheric pressure measurements from these DTE10 vehicles are not useful in their current form, and more work must be done to improve their quality. A first step in this work should be a finer reporting resolution. Once this is achieved, a similar analysis to the one presented here can determine whether mobile pressure measurements are feasible with current collection methods.

Analysis of datasets from other sources will be important to the continued study of the quality of mobile observations. Datasets from different vehicle types and locations may include a variety of instruments, placements, and meteorological conditions in which the data were collected, all of which could affect the quality and usability. Specific data collection efforts currently undertaken that could prove useful in this regard include the National Weather Service’s Mobile Platform Environmental Data (MoPED) System (Bell et al. 2011), the second phase of the Data Use Analysis and Processing (DUAP 2) project (Dion and Robinson 2010), and the second Strategic Highway Research Program (SHRP2; Transportation Research Board 2009).

Acknowledgments

This research was funded by the Federal Highway Administration through contract DTFH61-08-D-00012. Development/support of the VDT included contributions from Elena Schuler of NCAR.

REFERENCES

  • Andrews, S., and M. Cops, 2009: Final report: Vehicle infrastructure integration proof of concept technical description–Vehicle. FHWA Tech. Rep. FHWA-JPO-09-017, 96 pp.

  • AutoTap, cited 2011: The OBD II home page. [Available online at http://www.obdii.com.]

  • Bell, B., P. O. G. Heppner, A. Orrego, and D. Helms, 2011: The Mobile Platform Environmental Data (MoPED) System: Providing mobile environmental data to the National Mesonet. Preprints, 27th Conf. on Interactive Systems (IIPS) for Meteorology, Oceanography, and Hydrology, Seattle, WA, Amer. Meteor. Soc., 2A.4. [Available online at http://ams.confex.com/ams/91Annual/webprogram/Paper184255.html.]

  • Bouilloud, L., and Coauthors, 2009: Road surface condition forecasting in France. J. Appl. Meteor. Climatol., 48, 2513–2527.

  • Chapman, M., S. Drobot, T. Jensen, C. Johansen, W. Mahoney III, P. Pisano, and B. McKeever, 2010: Using vehicle probe data to diagnose road weather conditions—Results from the Detroit IntelliDrive(SM) Field Study. Transp. Res. Rec., 2169, 116–127.

  • Crevier, L.-P., and Y. Delage, 2001: METRo: A new model for road-condition forecasting in Canada. J. Appl. Meteor., 40, 2026–2037.

  • Dion, F., and R. Robinson, 2010: VII Data Use Analysis and Processing (DUAP) final project report (phase II). UMTRI Tech. Rep. UMTRI-2010-28, 267 pp. [Available online at http://deepblue.lib.umich.edu/bitstream/2027.42/78569/1/102726.pdf.]

  • Drobot, S. D., A. Anderson, M. C. Chapman, and C. Johansen, 2009a: Vehicle standards. NCAR Tech. Rep., 15 pp.

  • Drobot, S. D., and Coauthors, 2009b: IntelliDrive (SM) road weather research & development—The Vehicle Data Translator. Proc. Intelligent Transportation Society of America Annual Conf., National Harbor, MD, ITSA, 13 pp. [Available online at http://www.ral.ucar.edu/projects/intellidrive/publications/ITSA2009_Drobot_paper.pdf.]

  • Drobot, S. D., M. Chapman, P. A. Pisano, and B. B. McKeever, 2010: Using vehicles as mobile weather platforms. Data and Mobility: Transforming Information into Intelligent Traffic and Transportation Services, Springer, 203–214.

  • Drobot, S. D., M. Chapman, B. Lambi, G. Wiener, and A. Anderson, 2011: The Vehicle Data Translator V3.0 System Description. FHWA Tech. Rep. FHWA-JPO-11-127, 45 pp.

  • Kandarpa, R., and Coauthors, 2009: Final report: Vehicle infrastructure integration proof-of-concept results and findings—Infrastructure. FHWA Tech. Rep. FHWA-JPO-09-057, 194 pp.

  • Mahoney, B., S. Drobot, P. Pisano, B. McKeever, and J. O’Sullivan, 2010: Vehicles as mobile weather observation systems. Bull. Amer. Meteor. Soc., 91, 1179–1182.

  • Manfredi, J., T. Walters, G. Wilke, L. Osborne, R. Hart, T. Incrocci, and T. Schmitt, 2005: Road Weather Information System Environmental Sensor Station siting guidelines. FHWA Tech. Rep. FHWA-HOP-05-026, 46 pp.

  • National Weather Service, cited 2010: Weather fatalities. [Available online at http://www.weather.gov/os/hazstats.shtml.]

  • Noblis, cited 2010: Eleven-year averages from 1995 to 2005 analyzed by Noblis, based on NHTSA data. [Available online at http://ops.fhwa.dot.gov/weather/q1_roadimpact.htm.]

  • Pisano, P. A., A. D. Stern, and W. P. Mahoney III, 2005: The U.S. Federal Highway Administration Winter Road Maintenance Decision Support System (MDSS) Project: Overview and results. Preprints, 21st Int. Conf. on Interactive Information Processing Systems (IIPS) for Meteorology, Oceanography, and Hydrology, San Diego, CA, Amer. Meteor. Soc., 6.5. [Available online at http://ams.confex.com/ams/pdfpapers/83959.pdf.]

  • Pisano, P. A., J. S. Pol, A. D. Stern, B. C. Boyce, and J. K. Garrett, 2007: Evolution of the U.S. Department of Transportation Clarus initiative: Project status and future plans. Preprints, 23rd Conf. on Interactive Systems (IIPS) for Meteorology, Oceanography, and Hydrology, San Antonio, TX, Amer. Meteor. Soc., 4A.5. [Available online at http://ams.confex.com/ams/pdfpapers/119018.pdf.]

  • SAE International, 2009: On-Board Diagnostics for Light and Medium Duty Vehicles Standards Manual–2010 Edition. SAE International, 969 pp.

  • Sass, B. H., 1992: A numerical model for prediction of road temperature and ice. J. Appl. Meteor., 31, 1499–1506.

  • Stern, A. D., V. P. Shah, K. J. Biesecker, C. Yeung, P. A. Pisano, and J. S. Pol, 2007: Vehicles as mobile sensing platforms for meteorological observations: A first look. Preprints, 23rd Conf. on Interactive Systems (IIPS) for Meteorology, Oceanography, and Hydrology, San Antonio, TX, Amer. Meteor. Soc., 4A.6. [Available online at http://ams.confex.com/ams/pdfpapers/118986.pdf.]

  • Transportation Research Board, 2009: Implementing the results of the Second Strategic Highway Research Program: Saving lives, reducing congestion, improving quality of life. Transportation Research Board Special Rep. 296, 169 pp.

  • USDOT, cited 2011: Intelligent transportation systems. Research and Innovative Technology Administration, Department of Transportation. [Available online at http://www.its.dot.gov.]

  • Vaisala, cited 2010a: Vaisala Surface Patrol HD Pavement Temperature and Humidity Sensor Series DSP200. [Available online at http://www.vaisala.com/en/products/surfacesensors/Pages/DSP211.aspx.]

  • Vaisala, cited 2010b: Vaisala Weather Transmitter WXT520. [Available online at http://www.vaisala.com/en/products/multiweathersensors/Pages/WXT520.aspx.]
