The Airborne Remote Optical Spotlight System (AROSS) family of sensors consists of airborne imaging systems that provide passive, high-dynamic range, time series image data and has been used successfully to characterize currents and bathymetry of nearshore ocean, tidal flat, and riverine environments. AROSS–multispectral polarimeter (AROSS-MSP) is a 12-camera system that extends this time series capability to simultaneous color and polarization measurements of the full linear polarization of the imaged scene in red, green, blue, and near-infrared (RGB–NIR) wavelength bands. Color and polarimetry provide unique information for retrieving dynamic environmental parameters over a larger area (square kilometers) than is possible with typical in situ measurements. This field of optical remote sensing is developing rapidly, and simultaneous color and polarimetric data are expected to enable a number of additional important environmental data products, such as improved imaging of the subsurface water column or maximized wave contrast to improve retrievals of wave spectra and wave heights.
One of the main obstacles to providing good-quality polarimetric image data from a multicamera system is the ability to accurately merge imagery from the cameras at a subpixel level. This study shows that the imagery from AROSS-MSP can be merged to an accuracy better than one-twentieth of a pixel, a result verified by comparing two different automated algorithmic techniques. This paper describes the architecture of AROSS-MSP, the approach for providing simultaneous color and polarization imagery in space and time, an error analysis to characterize the measurements, and example color and polarization data products from ocean wave imagery.
1. Introduction
Since the invention of the camera and the development of flight, the field of remote sensing has matured considerably with advances in sensor and platform technologies. Today, there is a wide range of instruments and techniques providing data for various research, civil, and military purposes. In the visible regime, panchromatic or multispectral digital camera systems are commonly used. Over the last 15 years, Areté Associates has developed a number of sensor systems designed to remotely sense the environment. The Airborne Remote Optical Spotlight System (AROSS) family of sensors consists of airborne imaging systems that provide passive, high-dynamic range, time series image data, and has been developed and used successfully to characterize nearshore ocean (Dugan et al. 2001a), tidal flat (B. A. Hooper et al. 2010, meeting presentation), and riverine environments (Hooper et al. 2008; Dugan and Piotrowski 2012). The original AROSS was a 1-megapixel panchromatic charge-coupled device (CCD) imaging system used to produce high-fidelity frequency–wavenumber spectra of shoaling ocean waves (Dugan et al. 2001a). The fidelity and quality of these spectra have enabled accurate retrievals of water depths (Dugan et al. 2001b), currents (J. Z. Williams et al. 2001, meeting presentation; Piotrowski and Dugan 2002), and surf characteristics (Dugan et al. 2001c; Dugan and Piotrowski 2003). A next-generation system, AROSS multichannel (AROSS-MC), was designed to provide simultaneous spectral time series imagery with four CCD cameras in four color bands, chosen for this instrument to be red, green, blue, and near-infrared (RGB–NIR), enabling measurements of water clarity, sediment transport, and turbulence characteristics, in addition to the previous products (Hooper et al. 2005). AROSS-MC could also be configured to measure four polarization states with a panchromatic wavelength response or one bandpassed color, but not four colors and polarization simultaneously. 
The AROSS multispectral polarimeter (AROSS-MSP) described here is a logical follow-on system. It is a 12-camera system with three polarization measurements for each of the four colors, RGB–NIR, with initial results having been provided by Hooper et al. (2009). The AROSS-MSP camera payload is mounted in a yoke-style positioner, collects image data through the viewport of an aerial-photography airplane, and has the ability to simultaneously image the full linear polarization of the scene while accurately geolocating this imagery at modest grazing angles to provide large-area coverage and good wave visibility. The two-degree-of-freedom positioner is a key component, as it enables the camera system to be continuously pointed at a location on the water as the aircraft flies by or around it, thereby providing temporal data from a sequence of images. The resulting space–time data support algorithms that provide littoral-zone environmental intelligence products and enhance oceanographic, estuarine, and riverine characterization. The polarization products have been utilized to minimize surface reflections, enabling observation of the subsurface water column at greater depth, detect objects in high environmental clutter, increase image contrast through marine haze, enhance waterline contour detection, and improve current retrievals.
The polarization state of light provides information about imaged scenes that cannot be obtained from luminance and spectral measurements alone (Schott 2009). Recent advances utilize polarimetric measurements in military, ecological, astrophysical, biomedical, and technological applications (Mishchenko et al. 2011). There have been several polarimetric systems developed and deployed on aircraft and satellite platforms in addition to AROSS-MSP.
Other airborne polarimetric sensors include SpecTIR Corporation’s Research Scanning Polarimeter (RSP) (Cairns et al. 1999), Aerodyne Research’s Hyperspectral Polarimeter for Aerosol Retrievals (HySPAR) (Jones et al. 2005), and the Jet Propulsion Laboratory’s Airborne Multiangle Spectropolarimetric Imager (AirMSPI) (Diner et al. 1998). These systems were developed in order to better characterize aerosol and cloud properties (Cairns et al. 2003). The RSP system has been used to determine the parameters of a bimodal size distribution, identify the presence of water-soluble and sea salt particles, and retrieve the optical thickness and column number density of aerosols (Chowdhary et al. 2001). The AirMSPI system was recently modified to a photoelastic modulator–based polarimeter (Diner et al. 2010) and underwent successful flight testing aboard NASA’s high-altitude Earth Resources 2 (ER-2) aircraft.
The first satellite-flown instrument to measure polarization was Polarization and Directionality of the Earth’s Reflectances (POLDER; Deschamps et al. 1994). Developed by the French space agency, CNES, it flew aboard the Advanced Earth Observing Satellite-1 (ADEOS-1)–National Space Development Agency (NASDA) platform from November 1996 until June 1997. POLDER has a wide field of view with multiangle viewing capability. A second identical instrument was flown on ADEOS-2 with a 7-month useful lifetime in 2003. POLDER data have led to improved aerosol retrieval algorithms, the ability to distinguish between droplets and ice crystals in clouds on a global scale, and improved accuracy of vegetation parameter retrievals (Bréon et al. 2002).
The Polarization and Anisotropy of Reflectances for Atmospheric Sciences Coupled with Observations from a Lidar (PARASOL) satellite, in operation since December 2004, carries a derivative of the POLDER sensor (Fougnie et al. 2007). The microsatellite orbits within the A-Train constellation, a series of Earth-observing satellites flying in close formation. Its ongoing scientific objectives are to improve the characterization of microphysical and radiative properties of clouds and aerosols. Its location within the A-Train enables complementary use of the different sensors on board the Aqua, CALIPSO, and CloudSat platforms. What sets AROSS-MSP apart is the ability to acquire the extra dimension of the temporal domain.
Use of multichannel data, such as from AROSS-MSP, requires accurate intercamera calibrations in addition to high-quality individual camera calibrations. Subpixel accuracy is critical for generation of composite color and polarization imagery. Each camera in AROSS-MSP is optimally focused and corrected for lens vignetting, nonuniform pixel response, relative radiometry, and geometric distortion. One of the main obstacles to providing good-quality data from a multicamera system is the ability to accurately merge imagery from the individual cameras to a common subpixel level. The cameras are mechanically aligned to one another in the laboratory to within approximately five pixels, and then in software, field data are used to fine align the multiple cameras to subpixel accuracy. For these data, the subpixel accuracy is accomplished by two approaches, one using a feature-based correlation algorithm and the other using mutual information. We show that the imagery can be merged to better than one-twentieth of a pixel with these algorithmic techniques, which have been tested and then automated. An error analysis was also performed on the polarization product errors resulting from variations in camera gain, filter orientation, and intercamera registration. One of the cameras is aligned with the inertial navigation unit for subsequent image-to-image registration and mapping of the image sequence to a geodetic surface. The mapped, corrected image data are analyzed for production of single-frame data products, such as color and polarization imagery, degree and angle of linear polarization imagery, and time series–based products, such as currents, bathymetry, and turbulence characteristics.
This paper presents the architecture of the system and the approach for providing simultaneous color and polarization imagery in space and time. The two algorithms for aligning the 12 channels are compared with statistics describing how well the images are registered to one another; an error analysis is performed to characterize our measurements; and example color and polarization data products from ocean imagery are presented.
2. System requirements and description
The AROSS-MSP design was adapted from the original AROSS and AROSS-MC designs to accommodate the necessary modifications for use on commercially available aerial platforms, such as the DeHavilland Twin Otter aircraft, which was used for the data analyzed here. The AROSS system was specifically designed to provide time series imagery of ocean waves suitable for retrieval of meteorological and oceanographic (METOC) parameters over large areas. Large areas, with individual image footprints on the order of 2 km², can be covered by these AROSS sensors by looking out from the aircraft at grazing angles on the order of 30°, which also provides good wave visibility. The original system requirements for AROSS and AROSS-MC are listed in Table 1. AROSS-MSP is designed to match or exceed these requirements with consideration given to the restrictions imposed by the commercial platform. An initial description of the system architecture can be found in Hooper et al. (2009), but a brief overview of the main components of the system is included below.
The system architecture consists of a rack-mounted control PC that controls and maintains pointing of the positioner using position and attitude data from an inertial navigation system (INS) composed of a GPS receiver and an inertial measurement unit (IMU). The INS, a C-MIGITS-III from Systron Donner, uses a Kalman filter to merge the high-frequency data from the IMU with the low-frequency data from the GPS unit. This configuration overcomes the drift associated with an IMU and the low data rate of a GPS, and provides accurate attitude. The computer also controls acquisition of the imagery through commands to the cameras and stores all received data, including the attitude, position, GPS time, camera parameters, trigger times, and imagery, to removable hard disks. The two-degree-of-freedom azimuth–elevation positioner is a yoke-style positioner (SPS-500) from Atlantic Positioning Systems, now Cobham Sensor Systems, that enables the payload to be pointed to an aimpoint on the earth’s surface and the imagery to be georectified. This is accomplished by a simple proportional–integral–derivative control filter that compares the intended aimpoint to the instantaneous one, computes the attitude vector error, and commands the positioner motors to correct for it.
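The pointing loop described above can be sketched as a simple PID correction: compare the intended aimpoint attitude with the instantaneous one and command a motor rate from the error, its integral, and its derivative. The gains, time step, and single-axis plant model below are illustrative assumptions, not the flight values.

```python
# Hypothetical sketch of the PID pointing correction: drive the positioner
# toward a commanded attitude by nulling the attitude error. Gains and the
# one-axis integrator plant are illustrative, not the actual flight tuning.

class PidPointingLoop:
    def __init__(self, kp=1.2, ki=0.3, kd=0.05, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def command(self, intended_deg, instantaneous_deg):
        """Return a motor rate command (deg/s) from the attitude error."""
        error = intended_deg - instantaneous_deg
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: the positioner angle simply integrates the rate command.
loop = PidPointingLoop()
angle = 0.0                                # current elevation angle (deg)
for _ in range(2000):                      # 40 s of simulated pointing
    rate = loop.command(30.0, angle)       # hold a 30 deg aimpoint
    angle += rate * loop.dt
print(round(angle, 2))
```

In the real system the error is a full attitude vector computed from INS data rather than a single angle, but the structure of the correction is the same.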
To maximize the spatial resolution while collecting multichannel data, a multicamera approach was chosen for the system design. This approach is made possible by the compact size of the Pixelfly camera from Cooke Corporation and, as such, the AROSS-MSP design is built around this camera, which can use any C-mount lens. The cameras are in an environmentally sealed housing and are hard mounted to the IMU. The data collection/control PCs communicate through TCP/IP commands with a master computer providing the operator interface. The master computer obtains time from the GPS receiver and daisy-chains the time to each of the other PCs. The Pixelfly quantum efficiency (QE) cameras are used because of their small form factor, high-dynamic range, and our previous experience with their computer control and calibration. Schneider Compact f/1.9 35-mm ruggedized C-mount lenses were chosen for their high quality, ruggedness, and compact size. Kinematic tip–tilt mounts were designed and built for the cameras to allow them to be aligned in the laboratory to within several pixels for near-100% overlap or, alternately, to mosaic the 12 channels. We chose the overlap configuration to be able to display the combination channels of RGB and degree and angle of linear polarization (P and Ψ, respectively) with the largest available area coverage for this lens and camera configuration.
Spectral bandpass filters were chosen to cover the 400–1000-nm response of the cameras’ silicon photodetectors. The color/polarization filters are single-substrate linear polarization filters (Omega Optical) with broadband interference coatings to select 100-nm bands centered near 450, 550, and 650 nm for the blue, green, and red color bands, and a 200-nm-wide band centered near 800 nm in the near-infrared. The linear polarization filters are oriented with respect to the horizontal at 30°, 90°, and 150° [counterclockwise (CCW)]. These filters are mounted in front of the camera lenses on the front plate of the three-camera module shown in Fig. 1a. The design allows one to easily change the color and polarization filters to access different color bands and/or polarization states. The overall system efficiency is a combination of the quantum efficiency of the CCD, the transmission of the lenses, and the color and polarization filters, and is displayed for each spectral band in Fig. 1b.
3. Sensor calibration
Camera calibration is a fundamental requirement of all photogrammetric systems. To produce high-quality images for use in quantitative analysis, robust individual camera and intercamera calibrations are necessary. The calibrations required by a multicamera system can be grouped into two categories: single- and intercamera calibrations. The first group comprises most of the tests that would normally be performed on a standard single sensor (e.g., image quality, flat field, radiometric calibrations, distortion) and are the intrinsic parameters of the cameras. The second group of measurements is concerned with the relative position and angular pointing of the cameras to one another and to the IMU. We take a systems approach to the instrument using both laboratory and in situ measurements to arrive at the best calibration (Cramer 2008).
The intrinsic camera parameters were measured using standard grid patterns and techniques developed at Areté Associates, based on work by Tsai (1987). These calibrations enable the correct mensuration of the remotely sensed scene. The radiometric calibrations were made using an integrating sphere and a grating spectrometer. Radiometric calibrations allow for the correction of the irradiance rolloff that is present in all cameras due to the lens aperture, as well as accurate spectral measurements. The modulation transfer function (MTF) of each camera was measured using the knife-edge method and a random target method (Daniels et al. 1995). The MTF defines the limits of the sensor’s spatial resolution and, thus, its ability to resolve fine detail.
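The knife-edge MTF measurement mentioned above can be illustrated as follows: differentiate the edge-spread function (ESF) across a knife edge to obtain the line-spread function (LSF), then take the normalized magnitude of its Fourier transform. The Gaussian-blurred synthetic edge below stands in for real optics; it is a sketch of the method, not the calibration code.

```python
# Illustrative knife-edge MTF estimate: ESF -> LSF (by differentiation)
# -> |FFT| (normalized so MTF(0) = 1). The synthetic edge is an ideal
# step blurred by a Gaussian of sigma = 2 pixels, standing in for optics.
import numpy as np
from math import erf

def mtf_from_edge(esf):
    lsf = np.diff(esf)                     # edge-spread -> line-spread
    spectrum = np.abs(np.fft.rfft(lsf))    # line-spread -> |FFT|
    return spectrum / spectrum[0]          # normalize: MTF(0) = 1

x = np.arange(256)
sigma = 2.0
esf = np.array([0.5 * (1 + erf((xi - 128) / (sigma * np.sqrt(2)))) for xi in x])

mtf = mtf_from_edge(esf)
print(round(mtf[0], 3), bool(mtf[10] < mtf[1]))  # -> 1.0 True
```

In practice a slanted edge and supersampled ESF are used to beat pixel aliasing, but the differentiate-and-transform core is the same.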
After each of the individual camera heads is calibrated, the cameras must be mounted in the three-camera modules and each of the modules aligned to a reference camera. A target projector that produces a collimated array of regularly spaced spots is used for the alignment. Following this coarse mechanical alignment, the polarizers in each module were oriented and then fixed at 30°, 90°, and 150° with respect to each camera’s horizontal using a polarized laser and a Thorlabs polarimeter. Once the camera modules are assembled into the payload, a calibration of the polarization filter orientations is performed to verify filter orientation. Since the polarization filters are not oriented perfectly at 30°, 90°, and 150°, systematic error is introduced into the P and Ψ calculations. The systematic P error is a maximum of approximately 0.03 when the input is completely polarized, matching the measured result. The systematic Ψ error oscillates about zero, reaching a maximum of 0.9° in the blue channel and less than 0.5° for the red, green, and NIR wavelengths. The P and Ψ errors are slowly varying, and for homogeneous imaged scenes with small variations in the polarization state, systematic errors introduce a bias to the image but have little effect on relative values.
The angular and position offsets of the cameras are then determined using flight data and in situ measurements (Evans et al. 2000). This is accomplished using a number of fiducial targets placed throughout the system’s field of view when it is at its nominal flight altitude and camera depression angle. Normally this is done around the airport from which the aircraft operates. The true locations of these target points are measured with a kinematic GPS. Using the imagery of these targets and data from the INS, the boresight offset between the reference camera and the IMU is calculated. Using these same targets and other tie points in the scene, the intercamera alignment can be measured such that composite images can be generated with subpixel fidelity (Campion et al. 2002).
The next section discusses how optical polarization measurements are made, the importance of intercamera alignment for the production of color and polarization composite imagery, and an example of open ocean color and polarization imagery.
4. Color and polarization imagery
Color and polarization imagery requires the combination of images from different cameras into one image. These images must be registered to one another to better than about one-tenth of a pixel to avoid color anomalies or dilution of the polarization product (Persons et al. 2002) and, in extreme cases, the registration should be about one-twentieth of a pixel (Smith et al. 1999). A color anomaly, for instance, would appear as an “Italian flag” (green and red bordering white) where a white patch was imaged. An example of this color anomaly in a riverine scene is shown in Hooper et al. (2009). Two image registration algorithms were compared for aligning these 12 cameras with one another: a feature-based correlation method (Campion et al. 2002) and a mutual information-based method (Collignon et al. 1995; Viola and Wells 1995). Both image registration algorithms were capable of registering the images to better than one-tenth of a pixel and in many cases better than one-twentieth of a pixel. The red vertical polarization (90°) channel was chosen as the reference camera, and all other cameras were registered to it.
a. Measurement of optical polarization
The polarization characteristics of light are described by a four-element Stokes vector, S = [S0 S1 S2 S3]T, where S0 is the total light intensity, S1 is the intensity with horizontal or vertical polarization, S2 is the intensity polarized in a 45° sense, and S3 is the intensity of light associated with circular polarization [ignored here, because there is little circular polarization (~1%) in the marine environment (Hansen 1971; Plass et al. 1976; Sparks et al. 2009) and with no quarter-wave plates in the system we make no circular polarization measurement]. The intensity of light that emerges from a linear polarizer oriented at angle θ to the horizontal is described by the Stokes equation (Matchko and Gerhart 2005):

I(θ) = (1/2)[S0 + S1 cos(2θ) + S2 sin(2θ)].
By using three nonaligned linear polarizers with angles (θ1, θ2, θ3), the measured intensities can be written as

I(θi) = (1/2)[S0 + S1 cos(2θi) + S2 sin(2θi)],  i = 1, 2, 3.
From this, the Stokes parameters S0, S1, and S2 can be found by inverting the resulting system of three linear equations.
For AROSS-MSP, the polarization angles were chosen to be 30°, 90°, and 150° and the Stokes terms can be found from Eqs. (5)–(7). This filter configuration was chosen for two reasons. First, the 60° offset between filters efficiently spans polarization space for three independent measurements, and it is a modified Fessenkov method: a 0°, 60°, and 120° configuration (Fessenkoff 1935; Prosch et al. 1983). Second, the configuration provides one filter with a high degree of suppression of the horizontally polarized surface-reflected light, enabling improved imaging of the subsurface water column.
Imagery for the degree and angle of linear polarization is obtained from the three polarization measurements for each color by first calculating the Stokes parameters (Hecht 2001):

S0 = (2/3)[I(30°) + I(90°) + I(150°)],
S1 = S0 − 2I(90°),
S2 = (2/√3)[I(30°) − I(150°)].
These values are then combined into the following equations, which describe the degree P and angle Ψ of linear polarization:

P = (S1² + S2²)^(1/2)/S0,
Ψ = (1/2) arctan(S2/S1).
The P values lie in the range 0.0–1.0, while the Ψ values vary from −90° to +90°.
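The Stokes inversion and the P and Ψ computation for the 30°, 90°, and 150° analyzer configuration can be sketched as a minimal round-trip check, assuming the standard linear-polarizer intensity model:

```python
# Minimal sketch of the linear-Stokes retrieval for the 30/90/150-deg
# analyzer configuration. The inversion follows from
# I(theta) = (1/2)[S0 + S1 cos(2 theta) + S2 sin(2 theta)].
import numpy as np

def stokes_from_intensities(i30, i90, i150):
    s0 = (2.0 / 3.0) * (i30 + i90 + i150)
    s1 = s0 - 2.0 * i90
    s2 = (2.0 / np.sqrt(3.0)) * (i30 - i150)
    return s0, s1, s2

def degree_and_angle(s0, s1, s2):
    p = np.hypot(s1, s2) / s0                    # degree of linear polarization
    psi = 0.5 * np.degrees(np.arctan2(s2, s1))   # angle, in degrees
    return p, psi

def intensity(theta_deg, s0, s1, s2):
    t = np.radians(theta_deg)
    return 0.5 * (s0 + s1 * np.cos(2 * t) + s2 * np.sin(2 * t))

# Round trip: fully polarized light at psi = 30 deg (P = 1).
s_true = (1.0, np.cos(np.radians(60.0)), np.sin(np.radians(60.0)))
meas = [intensity(a, *s_true) for a in (30.0, 90.0, 150.0)]
p, psi = degree_and_angle(*stokes_from_intensities(*meas))
print(round(p, 3), round(psi, 1))   # -> 1.0 30.0
```

Using arctan2 rather than arctan keeps Ψ in the full (−90°, +90°] range regardless of the sign of S1.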
b. Intercamera alignment
Since the physical spacing of the cameras (cm) is much less than the pixel resolution on the ground (typically a meter for a nominal flight altitude of 10 000 ft), we assume that the intercamera alignment can be captured by a simple rotation R between cameras:
where roll is a rotation about the z axis, pitch is a rotation about the x axis, and yaw is a rotation about the −y axis. A feature-based correlation (CORR) technique and a mutual information (MI) technique were used to determine the rotation between cameras. The correlation technique requires high-contrast data but does not need known fiducials in the scene. The processing flow for rotation estimation is as follows: rotate the camera n image, compare the overlap region with the reference camera, and iterate to minimize differences between the images. We repeat this process for 11 of the 12 cameras, determining each camera’s rotation to the reference camera (camera 3, the red vertical polarization channel). Once we have the rotation matrices, we can apply them to register all of the images to camera 3’s reference frame. Using 50 frames of image data for rotation estimation yields results better than a twentieth of a pixel (standard deviation ~0.05 pixels), as tabulated in Table 2.
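The mutual-information score used by the MI technique can be sketched with a joint-histogram estimate, MI(X;Y) = H(X) + H(Y) − H(X,Y); a registration search would then vary the small roll/pitch/yaw angles to maximize this score. The images below are synthetic, and only the metric itself is sketched, not the full optimization.

```python
# Histogram-based mutual information of the kind used to score candidate
# intercamera rotations: a well-registered pair shares more information
# than a misaligned pair. Only the metric is sketched; a real registration
# would iterate over candidate rotations to maximize it.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability estimate
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return h_x + h_y - h_xy                    # MI = H(X) + H(Y) - H(X,Y)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
shifted = np.roll(scene, 3, axis=1)            # a misaligned copy

aligned_score = mutual_information(scene, scene)
misaligned_score = mutual_information(scene, shifted)
print(bool(aligned_score > misaligned_score))  # -> True
```

Unlike the correlation technique, this score does not require the two channels to have similar contrast, which is why MI is attractive for registering different color and polarization bands to one another.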
We next describe how well we can quantitatively characterize the color and polarization imagery from AROSS-MSP through an error analysis.
c. Polarization state errors due to image misregistration
For AROSS-MSP, the linear polarization state (degree and angle of linear polarization; P and Ψ) is calculated using a combination of three separate images from linear polarization states I(30°), I(90°) and I(150°). The fidelity of the polarization products is partly dependent on how well the images are registered to one another on a pixel-to-pixel basis. We quantify the effect of misregistration by modeling the calculated P and Ψ while varying the fractional overlap of one channel with a neighboring pixel from another channel. We define the polarization state of both a target pixel and a neighboring pixel. From this, we calculate the expected intensities for analyzers at 30°, 90°, and 150°. We set the measured intensities for the 90° and 150° channels to the expected intensities for the target pixel. We calculate the measured intensity for the 30° channel by summing fractions of the expected intensities for the target pixel and neighboring pixel. Using the measured intensities, we calculate the degree and angle of linear polarization as a function of the fractional overlap.
To determine the upper limits of our polarization error, we look at extreme cases where the pixel of interest is completely polarized at angles 30°, 60°, 90°, and 120°. We set the neighboring pixel to be either unpolarized or completely polarized at the same range of angles. Table 3a displays the percent difference in measured degree of polarization P for a tenth of a pixel misregistration (the upper limit of our registration accuracy). Table 3b presents the degree difference in measured angle of polarization Ψ. From these tables, we see that the error in P can be as high as 18.8% and the error in Ψ can be as high as 2.6°.
During normal operation of AROSS-MSP, we actually measure a much narrower range of values. In a typical scene, a large difference between neighboring pixels is 0.1 in P and 10° in Ψ. We set P of the pixel of interest to 0.5 and of the neighboring pixel to 0.6. We set Ψ to a range of values for the pixel of interest and define Ψ of the neighboring pixel to be 10° greater. Figures 2a and 2b plot the percent difference in P and the degree difference in Ψ over a range of pixel misregistration values. At a tenth of a pixel misregistration, the highest P error is about 2% and the highest Ψ error is 0.5°. These values are similar to those of commercially available polarimeters (see, e.g., Polaris Sensor Technologies, Inc.).
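The misregistration model described above can be sketched as follows for the typical-scene case. The details here (unit-intensity pixels, only the 30° channel blended with its neighbor, a single target angle) are our own simplifying assumptions, so the numbers are illustrative and need not reproduce the paper's figures exactly.

```python
# Sketch of the misregistration error model: a fraction of a neighboring
# pixel's intensity leaks into the 30-deg channel only, and we compare the
# retrieved polarization state with the target pixel's true state.
# Assumptions: unit total intensity; target P = 0.5, neighbor P = 0.6,
# neighbor angle 10 deg greater; one-tenth-of-a-pixel overlap.
import numpy as np

def intensity(theta_deg, p, psi_deg, s0=1.0):
    """Analyzer intensity for partially linearly polarized light."""
    t, psi = np.radians(theta_deg), np.radians(psi_deg)
    s1, s2 = p * s0 * np.cos(2 * psi), p * s0 * np.sin(2 * psi)
    return 0.5 * (s0 + s1 * np.cos(2 * t) + s2 * np.sin(2 * t))

def retrieved_state(i30, i90, i150):
    s0 = (2.0 / 3.0) * (i30 + i90 + i150)
    s1 = s0 - 2.0 * i90
    s2 = (2.0 / np.sqrt(3.0)) * (i30 - i150)
    return np.hypot(s1, s2) / s0, 0.5 * np.degrees(np.arctan2(s2, s1))

frac, p_t, psi_t, p_n, psi_n = 0.1, 0.5, 0.0, 0.6, 10.0
i30 = (1 - frac) * intensity(30, p_t, psi_t) + frac * intensity(30, p_n, psi_n)
p_meas, psi_meas = retrieved_state(i30, intensity(90, p_t, psi_t),
                                   intensity(150, p_t, psi_t))
print(round(100 * abs(p_meas - p_t) / p_t, 2), round(abs(psi_meas - psi_t), 2))
```

For this single case the P error stays well under the 2% bound quoted in the text, consistent with misregistration being a small effect for typical scenes.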
An example of this well-registered imagery is shown in Fig. 3, where an RGB color image in the vertical polarization channel, a degree of linear polarization P image, and an angle of linear polarization Ψ image of an open ocean scene are presented. Noteworthy in these images are the white water (with no color anomalies) caused by breaking waves on the surface, the longer swell waves, and the short wind waves all captured in this 396 m × 493 m image. The vertical polarization channel was chosen to suppress the horizontally reflected light that would be prevalent at this angle of incidence, enabling a sharper view of the surface and subsurface light. Also evident is a “cloud” feature in the middle of the color image. This cloud is a thin wispy atmospheric cloud blown through the scene at fairly low altitude and is not to be confused with a bubble cloud in the water. Observing an animation of mapped imagery in the vicinity of the cloud shows the cloud moving at 11 m s−1, which was the observed wind speed (direction from 343°) from NDBC buoy 46029 located just north of the imagery. The mapped time series imagery was also Fourier processed to observe the wave energy and direction in a frequency–wavenumber plot for both the long wavelength swell and shorter wind waves. The wavelength and direction of both swell and wind waves matched the values from the NDBC buoy. The imagery is located 47 km south-southeast of the buoy and is about 21 km offshore from Falcon Rock, Oregon (55 km south of the mouth of the Columbia River). The sensor was at an altitude of 930 m, a grazing angle of 37° (polarization angle for an air–water interface), and an azimuthal angle of 47° east of north. The sun was at an elevation of 58° and an azimuth of 231°, 184° from the look direction of AROSS-MSP, putting the sun essentially behind the sensor. Less evident is wind streaking at the surface, barely visible on the right side of the image as linear foam streaks.
The P image represents the degree of linear polarization of light reaching the airborne sensor and is a combination of the path polarization between the sensor and the water, the reflected and transmitted sky polarization, and the polarization of transmitted light upwelling from below the water surface. The white water has a low degree of polarization because the many bubbles that make up the foam/white water represent a highly depolarizing, near-Lambertian surface and therefore appear dark in the P image. The “cloudy” feature is also noticeable in the P image as slightly less polarized than the water surface. The wave structure and wind streaks are still evident in this P image as well. Note that P is sensitive to wave slopes in the in-look direction (top to bottom) of the image. The range of polarization is from 0.15 to 0.4. Wave slopes that exhibit higher P (such as the back sides of the waves facing the top of the image) are those that are closer to Brewster’s angle, whereas slopes with lower P (those facing the bottom of the image) can be either steeper or gentler, as P is multivalued as a function of angle. Although this image was collected near Brewster’s angle at an altitude of 3051 ft, the atmosphere was quite hazy with marine spray likely reducing the polarization at the sensor. Also visible in the P image is a slight gradient from upper right to lower left. This gradient is due to polarization in the path radiance or atmosphere between ocean surface and sensor.
The Ψ image also shows evidence of the wave structure of the ocean surface and is sensitive to slopes in the cross-look direction (left to right) of the image. The range of angle of linear polarization is from +15° to −20°. Most Ψ values are close to zero, as the light reflected at Brewster’s angle is horizontally polarized, with a Ψ value near zero. Wave slopes facing the left side of the image have a positive polarization angle and those facing the right side of the image a negative polarization angle. Also visible in the Ψ image is the same slight gradient from upper right to lower left as seen in the P image. This gradient is due to polarization in the path radiance or atmosphere between ocean surface and sensor. Of interest in comparing P and Ψ imagery is the sensitivity of the wave slopes to the in-look direction slopes in P and the cross-look direction slopes in Ψ. The utility of this image data in determining wave slopes and wave heights is the subject of recent papers (Zappa et al. 2008; Pezzaniti et al. 2009) and of one author’s (BV) doctoral dissertation.
5. Summary
We have successfully designed, developed, and fielded an airborne multispectral polarimeter and collected image data over the open ocean and in estuaries and rivers. AROSS-MSP is a 12-camera airborne sensor system that provides passive, high-dynamic range, time series image data with simultaneous color and polarization measurements of the full linear polarization of the imaged scene in RGB–NIR wavelength bands. Color filters and polarization filter orientations were chosen to optimize measurement of both the color and polarization light fields reflected and emanating from a water surface—be it ocean, tidal flat, or river. Color and polarimetry, from airborne remotely sensed time series imagery, provide unique information for retrieving dynamic environmental parameters over several square kilometers of the earth’s surface, including minimizing surface reflections to enable observation of the subsurface water column at greater depth, increasing image contrast through marine haze, and enhancing waterline contour detection. Additionally, the time domain of these image data products allows for detection of objects in high environmental clutter, improvement of current retrievals, and maximization of wave contrast for retrievals of wave directional spectra and wave heights.
Providing good-quality polarimetric image data from a multicamera system requires accurately merged imagery from the cameras to a subpixel level. We show that the imagery can be merged to an accuracy better than one-tenth of a pixel, comparing two different automated algorithmic techniques. The registration of the imagery is important so that color imagery does not suffer from color anomalies (each pixel has properly registered wavelength components) and the polarization products are properly calculated (no quantitative dilution of degree of linear polarization P and angle of linear polarization Ψ because of misregistered pixels). The mapped, corrected image data are analyzed for production of single-frame data products, such as color and polarization imagery, degree and angle of linear polarization imagery, and time series–based products, such as currents, bathymetry, and turbulence characteristics.
An error analysis was presented to characterize our measurements. During normal operation of AROSS-MSP, a typical scene might exhibit a difference between neighboring pixels of 0.1 in P and 10° in Ψ. At a tenth of a pixel misregistration, the highest P error is about 2% and the highest Ψ error is 0.5°. These values for AROSS-MSP are similar to commercially available polarimeters.
Finally, an example of color and polarization data products from ocean imagery was presented. The RGB vertical polarization color composite image faithfully captures white water on the surface, long wavelength swell, and short wind waves—all without color anomalies. The degree P and angle Ψ of linear polarization imagery were also presented. The P imagery is sensitive to wave slopes in the in-look direction, while the Ψ imagery is sensitive to slopes in the cross-look direction. The ranges of P and Ψ are 0.15–0.4 and +15° to −20°, respectively, for the ocean surface image.
Future research will include investigating the use of AROSS-MSP image data for wave slope and wave height retrievals, optimizing input for hydrodynamic models, and imaging near-surface turbulence in turbid water.
This work was funded by the Office of Naval Research Grant N00014-04-C-0261.
We gratefully acknowledge support for this work from an ONR Phase II STTR under Contract N00014-04-C-0261 and Phase III support under Contract N00014-08-C-0675.
It is with great sadness that we dedicate this article to the late John P. Dugan, one of the originators of this work and a coauthor. With his passing we—his friends and colleagues—and the ocean, estuarine, and riverine remote sensing fields have lost a national leader.
Current affiliation: TASC, Inc., Chantilly, Virginia.
Current affiliation: SGT, Inc., Greenbelt, Maryland.