Atmospheric Data Visualization in Mixed Reality

Nihanth W. Cherukuru, Arizona State University, Tempe, Arizona
Ronald Calhoun, Arizona State University, Tempe, Arizona
Tim Scheitlin, National Center for Atmospheric Research, Boulder, Colorado
Matt Rehme, National Center for Atmospheric Research, Boulder, Colorado
Raghu Raj Prasanna Kumar, National Center for Atmospheric Research, Boulder, Colorado

Abstract

Mixed reality taps into intuitive human perception by merging computer-generated views of digital objects (or flow fields) with natural views. Digital objects can be positioned in 3D space and can mimic real objects in the sense that walking around the object produces smoothly changing views toward the other side. Only recently has gaming graphics technology advanced to the point where views of moving 3D digital objects can be calculated in real time and displayed together with digital video streams. Auxiliary information can be positioned and timed to give the viewer a deeper understanding of a scene; for example, a pilot landing an aircraft might “see” zones of shear or decaying vortices from previous heavy aircraft. A rotating digital globe might be displayed on a tabletop to demonstrate the evolution of El Niño. In this article, the authors explore a novel mixed reality data visualization application for atmospheric science data, present the methodology using game development platforms, and demonstrate a few applications to help users quickly and intuitively understand evolving atmospheric phenomena.

© 2017 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

CORRESPONDING AUTHOR: Nihanth W. Cherukuru, nihanth.cherukuru@asu.edu

A supplement to this article is available online (10.1175/BAMS-D-15-00259.2)


Atmospheric science has seen tremendous advancement in the tools and methods used to study the atmosphere. For instance, remote sensors such as lidars can make unseen structure in objects or flow fields visible at ever higher spatial and temporal resolution. However, it is often difficult to connect static, two-dimensional graphics to a natural visual view of either an object or a three-dimensional flow field, which calls for an interactive 3D visualization environment that is simple and intuitive for the user. Visualization in mixed reality provides one such environment by tapping into intuitive human perception and merging computer-generated views of digital objects (or flow fields) with natural views.

According to the widely cited paper by Milgram and Kishino (1994), mixed reality (MR) is an environment in which real-world objects and virtual objects are presented together within a single display, such as a handheld display (e.g., tablet computers, smartphones) or a head-mounted display (HMD). HMDs are either smart-glass-based goggles capable of rendering computer-generated content on transparent glass (e.g., Microsoft’s HoloLens) or video based, in which live video of the outside world, along with the computer-generated data, is presented on a small display held in front of the user’s eyes (e.g., an Oculus Rift fitted with video cameras).

MOTIVATION.

Visualization techniques have evolved over time, and MR data visualization is emerging as a new paradigm for interacting with digital content. Although the concept of MR/AR (AR, or augmented reality, is considered a subset of MR and is often used interchangeably with it) has been around for many years, the ubiquitous availability of smartphones and tablet computers has led to extraordinary growth in mainstream applications over recent years. In addition, HMDs, which in the past were used primarily in the aviation industry, the military, and specialized research organizations, have become affordable and accessible to the general public, driven by their increasing popularity in the gaming industry. On the software side, modern game development platforms (software used to develop computer games) have simplified the application development process through their efficient game engines and compatibility with multiple computing devices.

The primary purpose of this article is to demonstrate a novel application of MR in conjunction with atmospheric science data and present a brief methodology using modern game development platforms.

RELEVANT WORK.

A good starting point to learn about MR would be the classic review article by Azuma (1997) about augmented reality. Given the disparate applications of MR technologies, a detailed review of all the contributions in this field is beyond the scope of this article. Therefore, only studies related to atmospheric data visualization will be presented here. Aragon and Long (2005) first conducted a flight-simulator-based usability study of an airflow hazard detection system for rotorcraft pilots using AR. The study demonstrated a “dramatic improvement” in the performance of the helicopter pilots who used AR visualization under stressful operational conditions. Nurminen et al. (2011) presented another application of MR as one of the features of the HYDROSYS system—an environmental data analysis and monitoring platform in which simulation results from integrated sensor data were visualized in MR using a specialized portable handheld computer. In a similar study, Heuveline et al. (2013) demonstrated a mobile AR system to visualize computational fluid dynamics simulation of wind flow around buildings in an urban environment.

The data visualization application presented in this article differs from these previous works in three respects: i) the type of data being visualized, ii) the target devices, and iii) the tools used in the application development. In contrast to the simulation data used in previous studies, the current application uses multidimensional data from atmospheric sensors with near-real-time capability. Moreover, as opposed to the specialized equipment and development tools used for visualization in previous studies, the current work leverages the ubiquitous availability of smartphones and game development tools, making the technology accessible to a larger user base, including independent researchers, small research groups, and the general public.

GENERAL WORKING SCENARIO.

Consider a typical field deployment with multiple sensors (e.g., instruments on a meteorological mast, radiosondes, Doppler lidars, and weather stations) measuring different atmospheric variables (see Fig. 1). The network between the sensors and the visualization devices is based on a client-server model; that is, the data being visualized are streamed from the sensors and stored on a server, and all the devices used for visualization (e.g., smartphones and HMDs) obtain the data from this server as clients. In addition to storing the raw data from the sensors, the server also processes the raw data into files suitable for visualization. On the visualization device, the user can either choose to automatically stream the data as they become available (real time) or manually step through the data in a retrospective mode, as sketched below.
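To make the client side of this model concrete, the following sketch (in Python, for illustration only) polls a data server for newly processed frames and hands each new frame to the rendering or archiving step. The server address, endpoint names, and polling interval are hypothetical placeholders; the article does not specify the server's interface.

```python
# Minimal sketch of a visualization client polling the data server for newly
# processed frames. The server address and endpoint names are hypothetical.
import time
import urllib.request
from typing import Optional, Tuple

SERVER = "http://data-server.example"  # hypothetical server address


def fetch_latest_frame(last_seen: Optional[str]) -> Optional[Tuple[str, bytes]]:
    """Ask the server for its newest processed frame; return it only if it is new."""
    with urllib.request.urlopen(f"{SERVER}/latest_frame_id") as resp:
        frame_id = resp.read().decode().strip()
    if frame_id == last_seen:
        return None  # nothing new since the last poll
    with urllib.request.urlopen(f"{SERVER}/frames/{frame_id}") as resp:
        return frame_id, resp.read()  # raw bytes of the processed file


if __name__ == "__main__":
    last_id = None
    while True:
        result = fetch_latest_frame(last_id)
        if result is not None:
            last_id, payload = result
            # hand `payload` to the rendering layer (real-time mode), or append
            # it to a local archive for retrospective stepping
        time.sleep(5)  # poll interval (s)
```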

Fig. 1. An illustration of a typical working scenario with the data acquisition process shown with yellow dotted lines and the visualization process shown with red dotted lines.

The application on the viewing device (client) is designed to have two modes of operation: a) the “onsite mode,” where the data from the sensors are presented as an overlay at the sensor’s physical location when viewed through the mobile device or HMD, and b) the “tabletop mode,” where the data being visualized (along with a scaled-down version of the 3D terrain) are overlaid on a surface when viewed through the mobile device or HMD.

IMPLEMENTATION ON THE VIEWING DEVICE.

Computer games can be described as applications that enable human users to move and interact with virtual objects, driven by specific rules and mechanics, through a user interface. For instance, in the video game Tetris, the user rotates and moves the tiles/blocks (virtual objects) using the computer keyboard (interaction) to match the patterns as they drop to the floor (mechanics). A game development platform is a software system that provides a working environment to design these games, attach scripts (computer codes that specify the mechanics), and build the project (i.e., create an executable package through which the user can launch and play the game). An added advantage of using game development platforms is their multiplatform support [i.e., applications developed in the game development platform can be built for different devices (e.g., smartphones, tablet computers, and many of the popular HMDs)].

A data visualization application was developed using the Unity SDK (Unity Technologies 2016). Following Unity’s terminology, all virtual objects will henceforth be referred to as “GameObjects” (GOs; essentially, objects inside a game). There are three main GOs in this application (similar to the blocks in Tetris, but in 3D): the Sensor Data-GO, the Camera-GO, and the Background-GO (see Fig. 2a). The Sensor Data-GO(s) correspond to the data being visualized; in the example shown in Fig. 2, this is a 2D vector field fixed at a known location in the virtual world. The locations of the data are determined from prior knowledge of the instrument’s location. The Camera-GO functions as the user’s eyes into the virtual world; that is, while running the application, the device’s screen displays what the Camera-GO currently sees (see Fig. 2b). Last, the Background-GO is a 2D plane that serves as a backdrop to the Camera-GO’s output, onto which live video from the device’s physical camera is displayed (imagine a large screen that follows the Camera-GO as its backdrop wherever it points). Thus, the user sees the video streamed from the device’s physical camera and, whenever the Sensor Data-GO comes within the field of view of the Camera-GO, the sensor data as well (Fig. 2). The user’s interaction with this virtual world (and hence the MR experience) is controlled through the Camera-GO. The Camera-GO has six degrees of freedom: it can move along three directions in space (X, Y, Z) and rotate around each of these axes (roll, pitch, yaw). By controlling these six parameters of the Camera-GO, the user can move and look around in the virtual world. The two modes of operation (i.e., the onsite mode and the tabletop mode) differ primarily in how these six variables are obtained.
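To make the role of these six parameters explicit, the sketch below shows, as platform-agnostic pseudologic in Python rather than the authors' Unity code, how the Camera-GO pose could be filled from either source. The sensors and marker_tracker objects are hypothetical stand-ins for the device and tracking interfaces discussed in the next two subsections.

```python
# Illustrative sketch (not the authors' Unity implementation): the MR view is
# fully determined by six numbers, which the two modes obtain differently.
from dataclasses import dataclass


@dataclass
class CameraPose:
    x: float = 0.0      # position in the virtual world (m)
    y: float = 0.0
    z: float = 0.0
    roll: float = 0.0   # orientation (degrees)
    pitch: float = 0.0
    yaw: float = 0.0


def update_camera(mode: str, sensors, marker_tracker) -> CameraPose:
    """Fill the six Camera-GO parameters from the source appropriate to the mode.

    `sensors` and `marker_tracker` are hypothetical interfaces standing in for
    the device's GPS/IMU readings and a marker-tracking framework, respectively.
    """
    if mode == "onsite":
        # GPS gives position; gyroscope plus magnetometer give orientation
        x, y, z = sensors.gps_position()
        roll, pitch, yaw = sensors.imu_orientation()
    else:  # "tabletop"
        # a marker tracker returns the camera pose relative to the marker
        x, y, z, roll, pitch, yaw = marker_tracker.camera_pose()
    return CameraPose(x, y, z, roll, pitch, yaw)
```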

Fig. 2. (a) An overview of the application setup in Unity, showing different GameObjects (GOs) used in the current project. The lines connecting the Camera-GO with the Background-GO denote the edges of the view frustum (edges of the camera’s field of view) of the Camera-GO. (b) The output of the Camera-GO (the user view while running the application).

Onsite mode.

Smartphones, tablet computers, and most wearable devices come equipped with GPS, magnetometer, and gyroscope units. The onsite mode uses data from these sensors to determine the six variables of the Camera-GO: the GPS provides the (X, Y, Z) position, and the gyroscope (together with the magnetometer) provides the (roll, pitch, yaw) orientation. The Camera-GO thus aligns its view of the virtual world with the real world by mimicking the location and orientation of the viewing device. One of the major challenges of the onsite mode is exact alignment of the atmospheric sensor data with real-world objects when relying solely on the mobile sensors [GPS and inertial measurement unit (IMU)]. The alignment is limited by the accuracy of the location/orientation sensors (current GPS accuracy is about ±10 m), requiring the user to make fine corrections to the location when starting the application (after which the alignment holds without any user intervention). One way to address this issue in the future is to combine the current approach with a computer-vision-based approach (Klein and Murray 2007).
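As an illustration of the positional half of this alignment, the sketch below converts a GPS fix into east-north-up offsets from a reference point using a simple equirectangular approximation. The reference coordinates in the example are made up, and the paper does not state which projection the application actually uses.

```python
# Sketch of mapping GPS readings into a local coordinate frame so that the
# Camera-GO position lines up with data placed at a known instrument location.
# The equirectangular approximation and the example coordinates are
# illustrative assumptions, not the paper's exact method.
import math

EARTH_RADIUS = 6371000.0  # mean Earth radius (m)


def gps_to_local(lat_deg, lon_deg, alt_m, ref_lat_deg, ref_lon_deg, ref_alt_m):
    """Convert GPS (lat, lon, alt) into east/north/up offsets (m) from a reference."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    ref_lat, ref_lon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    east = EARTH_RADIUS * (lon - ref_lon) * math.cos(ref_lat)
    north = EARTH_RADIUS * (lat - ref_lat)
    up = alt_m - ref_alt_m
    return east, north, up


# Example: a viewer roughly 100 m east of a made-up reference point.
print(gps_to_local(35.0271, -111.0214, 1740.0, 35.0271, -111.0225, 1740.0))
```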

Tabletop mode.

The tabletop mode uses a marker-based system to determine the position and orientation of the Camera-GO. Consider a square sheet of paper with some image (the marker) placed on a table (Fig. 3). When this marker is viewed from different locations, the image appears distorted because of the perspective effect (Figs. 3a–d). By comparing this distorted image with the original image of the marker, the location and orientation of the camera relative to the marker can be uniquely determined. Marker detection and tracking is a well-studied problem in computer vision, and a number of frameworks are currently available. In the present study, it was implemented using the Vuforia SDK (PTC Inc. 2016), a mobile SDK for creating computer-vision-based AR systems. By centering the virtual objects at the location of the marker, the user can visualize these objects as an overlay on the device’s live camera feed. The virtual objects could be static meshes (e.g., a scaled version of the terrain), interactive sensor data (e.g., wind field, temperature), or a combination of both. The overall implementation on the device is summarized in Fig. 4.
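The article uses the Vuforia SDK for this step; the sketch below illustrates the same pose-from-marker idea with OpenCV's solvePnP, assuming the marker's physical size, the detected corner pixels, and the camera intrinsics are known. All numeric values in the example are made up.

```python
# Illustration of recovering camera position/orientation from a planar marker
# with OpenCV, analogous to what a marker-tracking framework does internally.
# Marker size, detected corners, and camera intrinsics below are made up.
import numpy as np
import cv2

MARKER_SIZE = 0.20  # marker edge length (m), assumed

# 3D corners of the marker in its own coordinate frame (the z = 0 plane)
object_points = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float64)

# Pixel coordinates of those corners in the camera image (from a detector)
image_points = np.array([
    [310.0, 220.0], [410.0, 225.0], [405.0, 330.0], [305.0, 325.0]
], dtype=np.float64)

# Camera intrinsics from a prior calibration (focal lengths, principal point)
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)                # rotation from marker to camera frame
camera_position = (-R.T @ tvec).ravel()   # camera position in marker coordinates
print("camera position relative to marker (m):", camera_position)
```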

Fig. 3. (left) Working methodology of the tabletop mode demonstrating how the image appears distorted based on the viewing angle of the device camera. (a)–(d) The camera view at the corresponding positions at left. A marker detection framework (the Vuforia SDK in this study) is used to calculate the position and orientation of the camera relative to the marker.

Fig. 4. Illustration of the visualization process on the client side (i.e., on the mobile/tablet/HMD) running a game engine.

TEST CASE.

For the purpose of demonstration, three different uses of the MR visualization were explored. The supplemental material accompanying this article shows screen recordings of a tablet computer running the MR applications. First, the visualization method was tested with lidar data from the METCRAX II field experiment. Figures 5a and 5b show screenshots of the application running on a tablet computer in onsite mode and tabletop mode, respectively. The data being visualized are a 2D cross section of the wind flow inside the Meteor Crater in Arizona during a downslope windstorm-type flow (DWF) event (see the sidebar on “Lidars in METCRAX II” for a brief description of the dataset). The DWF event is an intermittent, short-lived flow in which the ambient flow over the crater descends inside the crater and rebounds along the crater sidewall, forming a rotor and a lee wave. The colors represent wind speed, and the vectors show wind direction. The dataset shown in the demo video (see the online supplemental material) spans a 2-h period at a temporal resolution of 1 min and is visualized in retrospective mode (i.e., recorded data), although the system has near-real-time capability limited by the bandwidth of the Internet connection and the data acquisition time of the sensors. The example shown here with a tablet computer is meant as a proof of concept; such a visualization could prove valuable in situations where decision-makers need to quickly identify and locate an atmospheric event. For instance, a pilot landing an aircraft could see zones of shear or decaying turbulence from previous aircraft in plain sight through a head-mounted display.
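For readers curious how such a slice can be turned into the colored arrows described above, the sketch below maps a gridded (u, w) wind field to a color per grid point (from speed) and a unit vector (for arrow direction). The colormap, speed scaling, and toy array values are illustrative choices, not details taken from the application.

```python
# Sketch of converting a retrieved 2D wind slice into per-grid-point glyph
# data: a color from the wind speed and a unit vector for the arrow direction.
# Colormap choice, speed scaling, and the toy values are illustrative only.
import numpy as np
from matplotlib import cm


def wind_to_glyphs(u, w, vmax=15.0):
    """Return (RGBA colors, unit direction vectors) for arrow glyphs."""
    speed = np.hypot(u, w)
    colors = cm.viridis(np.clip(speed / vmax, 0.0, 1.0))  # speed -> color
    with np.errstate(invalid="ignore", divide="ignore"):
        direction = np.stack([u, w], axis=-1) / speed[..., None]
    return colors, np.nan_to_num(direction)  # calm points become zero vectors


# Tiny 2 x 2 example grid (m/s, made up for illustration)
u = np.array([[3.0, -2.0], [0.0, 5.0]])
w = np.array([[1.0, -1.0], [0.0, -4.0]])
colors, direction = wind_to_glyphs(u, w)
print(colors.shape, direction.shape)  # (2, 2, 4) and (2, 2, 2)
```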

LIDARS IN METCRAX II

The field data used in this project were acquired from Doppler wind lidars during the Second Meteor Crater Experiment (METCRAX II), a monthlong field deployment to study downslope windstorm-type flows (DWFs) occurring at the Meteor Crater in Arizona (Lehner et al. 2016). Among other instruments, multiple Doppler lidars were used to observe the nocturnal katabatic flow upstream and its response inside the crater. Two Doppler wind lidars, one at the rim and the other at the base of the crater, were positioned along the south-southwest cross section to perform repeated coplanar range–height indicator scans (i.e., keeping the azimuth constant and scanning in the vertical). The synchronous measurements were combined to resolve the winds along a 2D slice through the crater (Cherukuru et al. 2015). Figure SB1 illustrates the instruments’ locations in the crater along with a 2D cross section of one such short-lived DWF event, showing the descending flow into the crater and its rebound along the sidewall, which creates a rotor and a lee wave inside the crater. Findings from METCRAX II will help further our understanding of downslope windstorms and improve the computer models used to forecast them.
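The core of the coplanar retrieval can be illustrated with a small worked example: each lidar observes only the wind component along its beam, so two radial velocities measured from different directions at the same grid point form a 2 x 2 linear system for the in-plane components (u, w). The sketch below solves that system for one point; the sign and angle conventions are illustrative, and the full algorithm is described in Cherukuru et al. (2015).

```python
# Illustrative solution of the coplanar dual-Doppler system for one grid point.
# Sign/angle conventions here are simplified; see Cherukuru et al. (2015) for
# the actual retrieval used in METCRAX II.
import numpy as np


def retrieve_uw(vr1, elev1_deg, vr2, elev2_deg):
    """Solve for the in-plane wind (u, w) from two radial velocities and the
    corresponding beam elevation angles (degrees, within the scan plane)."""
    e1, e2 = np.radians(elev1_deg), np.radians(elev2_deg)
    # unit vectors along each beam within the vertical scan plane
    A = np.array([[np.cos(e1), np.sin(e1)],
                  [np.cos(e2), np.sin(e2)]])
    b = np.array([vr1, vr2])
    # ill-conditioned where the two beams are nearly parallel
    return np.linalg.solve(A, b)


# Made-up example: one beam looking down into the crater, one looking up
u, w = retrieve_uw(vr1=-4.2, elev1_deg=-30.0, vr2=1.5, elev2_deg=40.0)
print(f"u = {u:.2f} m/s, w = {w:.2f} m/s")
```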

Fig. SB1. The locations of the lidars along with the retrieved vector wind field during a downslope windstorm-type flow event at the Barringer Meteor Crater in Arizona during the METCRAX II field experiment. The 2D Cartesian grid in the scan overlap region represents the retrieval domain for the coplanar dual-Doppler lidar algorithm. The colors represent wind magnitude, and the arrows show the wind direction.

Fig. 5. Screenshots from the tablet computer running the data visualization application. (a) Onsite mode (the scan was rotated about the vertical axis to get a better viewing angle). (b) Tabletop mode. (c) Smartphone running the Meteo-AR application with the El Niño science page in the field of view. (d) Marker sheet/science page for the El Niño dataset. The white dots along the border of the image help the application identify the dataset corresponding to the page.

In an educational setting, AR/MR technologies make atmospheric data tangible by allowing students to interact with the data as if they were physical objects, providing an intuitive and richer learning experience. Figure 5c demonstrates one such application. Similar to the tabletop mode, this application uses a marker-based system to determine the location of the virtual objects. The user is provided with science pages that contain information pertaining to the dataset along with an image that serves as the marker. Figure 5d shows the science page corresponding to the dataset in Fig. 5c. This particular dataset depicts NOAA’s 1/4° daily optimum interpolation sea surface temperature anomaly during 2015–16, showing the evolution of the recent El Niño event. When viewed through a mobile device running the application (Meteo-AR), the 3D dataset appears on the page, allowing the user to interact with and explore the dataset in MR. To view a different dataset, the user simply swaps the science page. The placement of the white dots along the border of the image is unique to each dataset (see Fig. 5d) and helps the application determine which dataset to render.

FINAL REMARKS.

Given the millennial generation’s interest in and familiarity with mobile technology, the proposed MR technology could be well suited to weather education and public outreach activities. This engaging technology could also help evoke student interest in science, technology, engineering, and mathematics (STEM) fields. Although this article is not intended to be a tutorial on a specific game development platform, it is our hope that this study will serve as a springboard for fellow researchers in atmospheric science interested in incorporating MR technology into their own work.

ACKNOWLEDGMENTS

This research was supported by NSF Grant AGS-1160737 and the NCAR 2016 SIParCS internship program. The authors would like to gratefully acknowledge the support of ASU LightWorks and the Navy Neptune program. The authors would also like to thank Richard Loft for his valuable inputs and suggestions during the Meteo-AR application development.

FOR FURTHER READING

  • Aragon, C. R., and K. R. Long, 2005: Airflow hazard visualization for helicopter pilots: Flight simulation study results. Annual Forum Proc.—American Helicopter Society, 61, 117.
  • Azuma, R. T., 1997: A survey of augmented reality. Presence, 6, 355–385.
  • Cherukuru, N., R. Calhoun, M. Lehner, S. W. Hoch, and C. Whiteman, 2015: Instrument configuration for dual-Doppler lidar coplanar scans: METCRAX II. J. Appl. Remote Sens., 9, doi:10.1117/1.JRS.9.096090.
  • Heuveline, V., S. Ritterbusch, and S. Ronnas, 2013: Augmented reality for urban simulation visualization. Int. J. Adv. Syst. Meas., 6, 26–39.
  • Klein, G., and D. Murray, 2007: Parallel tracking and mapping for small AR workspaces. Proc. Sixth IEEE and ACM Int. Symp. on Mixed and Augmented Reality (ISMAR 2007), 225–234.
  • Lehner, M., and Coauthors, 2016: The METCRAX II field experiment: A study of downslope windstorm-type flows in Arizona’s Meteor Crater. Bull. Amer. Meteor. Soc., 97, 217–235, doi:10.1175/BAMS-D-14-00238.1.
  • Milgram, P., and F. Kishino, 1994: A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst., 77, 1321–1329.
  • Nurminen, A., E. Kruijff, and E. Veas, 2011: HYDROSYS—A mixed reality platform for on-site visualization of environmental data. Web Wireless Geogr. Inf. Syst., 6574, 159–175, doi:10.1007/978-3-642-19173-2_13.
  • PTC Inc., 2016: Vuforia developer portal. Accessed 2 June 2016. [Available online at https://developer.vuforia.com.]
  • Unity Technologies, 2016: Unity download page. Accessed 2 June 2016. [Available online at https://unity3d.com/get-unity.]
