## Abstract

The Multiangle Imaging Spectroradiometer (MISR) views the earth with nine cameras, ranging from a 70° zenith angle viewing forward through nadir to 70° viewing aft. MISR does not have an operational cloud optical depth retrieval algorithm, but previous research has hinted that solar reflection measured in multiple directions might improve cloud optical depth retrievals. This study explores the optical depth information content of MISR’s multiple angles using a retrieval simulation approach. Hundreds of realistic boundary-layer cloud fields are generated with large-eddy simulation (LES) models for stratocumulus, small trade cumulus, and land surface–forced fair-weather cumulus. Reflectances in the MISR directions are computed from the LES cloud fields over an ocean surface with three-dimensional radiative transfer, averaged to MISR resolution, and sampled at the 275-m MISR pixel spacing. Neural networks are trained to retrieve the mean and standard deviation of optical depth over pixel patches of different sizes from the mean and standard deviation of simulated MISR reflectances. Various configurations of MISR cameras are input to the retrieval, and the rms retrieval errors are compared. For 5 × 5 pixel patches the already low mean optical depth retrieval error for stratocumulus decreases 41% and 23% (for 25° and 45° solar zenith angles, respectively) from using only the nadir camera to using seven MISR cameras. For cumulus, however, the much higher normalized optical depth retrieval error only decreases around 14%. These small improvements suggest that measurements of solar reflection in multiple directions do not contribute substantially to more accurate optical depth retrievals for small cumulus clouds. The 3D statistical retrievals, however, even with only the nadir camera, are much more accurate for small cumulus than standard nadir plane-parallel retrievals; therefore, this approach may be worth pursuing.

## 1. Introduction

Operational solar reflectance cloud retrieval techniques still use one-dimensional (1D) radiative transfer theory (e.g., Nakajima and King 1990). Numerous theoretical studies based on increasingly realistic cloud fields have shown that the retrieval accuracy of optical depth can be poor for broken clouds or for stratiform clouds with oblique solar or viewing angles due to three-dimensional (3D) radiative transfer effects (Chambers et al. 1997; Loeb et al. 1998; Zuidema and Evans 1998; Várnai 2000; Várnai and Marshak 2001; Oreopoulos et al. 2000; Iwabuchi and Hayasaka 2002; Zinner and Mayer 2006). Other studies have shown that the statistics of satellite radiance observations are inconsistent with 1D radiative transfer or that 1D retrievals produce unphysical statistical dependences (Loeb and Coakley 1998; Várnai and Marshak 2002; Genkova and Davies 2003; Horváth and Davies 2004; Várnai and Marshak 2007). One of the clearest indications that 1D optical depth retrievals are inadequate is that 1D radiative transfer disagrees with multiangular satellite radiances for more than 80% of the 275-m pixels for oceanic water clouds (Horváth and Davies 2004).

Recently, cloud retrieval methods have been proposed that explicitly take cloud inhomogeneity and three-dimensional (3D) radiative transfer into account for stratocumulus clouds (Iwabuchi and Hayasaka 2003; Cornet et al. 2004; Marchand and Ackerman 2004; Zinner et al. 2006). Cornet et al. (2004) simulated multispectral, single-angle radiances from hundreds of bounded cascade stochastic clouds with 3D radiative transfer and performed retrievals of the mean and standard deviation of optical depth and effective radius with neural networks. They found that the inclusion of radiance standard deviation from 250-m pixels significantly improved the retrieval accuracy of mean optical depth for 1000-m pixels. Iwabuchi and Hayasaka (2003) developed an algorithm to retrieve mean optical depth and effective radius from the radiance of the target pixel and neighboring pixels at two wavelengths. A multiple regression model was trained on radiances from Monte Carlo radiative transfer in stochastic stratocumulus fields. They found substantial improvements in retrieval accuracy from using the nonlocal regression compared with the single-pixel regression. As part of a multiangular closure study, Marchand and Ackerman (2004) developed a technique for retrieving a 3D field of stratocumulus liquid water content (LWC) from AirMISR data using 3D radiative transfer. Starting with the 1D solution, the algorithm iteratively adjusted the LWC of each pixel to better match the radiance field computed with 3D radiative transfer. Zinner et al. (2006) developed a technique to retrieve 3D stratocumulus cloud properties from an adiabatic model and high-resolution (15 m) radiance data based on the Green’s function deconvolution idea of Marshak et al. (1998). A series of partial deconvolutions was done, forward Monte Carlo 3D radiative transfer was calculated for each resulting cloud field, and the deconvolved cloud field that best matched the measured image was chosen. 
This algorithm shows impressive results but is restricted to single-layer stratocumulus and small solar zenith angles.

A few recent studies have used multidimensional radiative transfer in closure studies with Multiangle Imaging Spectroradiometer (MISR) (Diner et al. 1998) data. Zuidema et al. (2003) reconstructed 3D extinction fields of tropical cumulus congestus from MISR stereo cloud top height and 1D radiative transfer retrievals of optical depth from the nadir camera. Domain-average MISR reflectances simulated with 3D transfer did not agree well with the MISR observations, perhaps because of a lack of 3D cloud surface detail. Cornet and Davies (2008) performed 3D radiative transfer on a 2D slice from a high-resolution stereophotogrammetric reconstruction of a tropical convective cloud and compared the calculated radiances with radiances from the nine MISR cameras across the slice. They found good agreement with reasonable assumptions about the internal extinction field and showed that the inferred optical depth is much higher than obtained from 1D radiative transfer. Marchand and Ackerman (2004) performed a closure study for a stratocumulus cloud by computing 2D radiative transfer from a cloud scene constructed from ground-based radar, lidar, microwave radiometer, broadband shortwave flux, wind profiler, and in situ aircraft cloud droplet distribution data. They found that the mean simulated MISR reflectances agreed with the observed ones to within the measurement uncertainty after the effect of finescale cloud top topography was taken into account.

In the current study we devise a method to retrieve cloud optical depth from simulated MISR data. The purpose of this study, however, is not the development of a retrieval algorithm per se, but an exploration of the value of MISR multiangular information to boundary-layer cloud optical depth retrievals. We do this in a framework of 3D radiative transfer applied to a variety of realistic cloud structure obtained from large-eddy simulation (LES) models. Datasets relating statistics of simulated MISR reflectances and true optical depth are calculated from hundreds of LES scenes for a variety of stratocumulus clouds, low-cloud-fraction marine trade cumulus, and higher-cloud-fraction fair-weather cumulus. We retrieve the mean and standard deviation of optical depth over various size “pixel patches” from the mean and standard deviation of reflectances from seven MISR cameras using a neural network algorithm. This approach is similar to that of Cornet et al. (2004), except with the addition of multiangular data. The optical depth retrieval error as a function of pixel patch size is calculated for different input configurations of MISR cameras so that the retrieval accuracy with multiple angles can be compared to that with nadir-only reflectances.

## 2. LES cloud fields

Training a cloud retrieval method that includes 3D radiative transfer requires a large number of 3D cloud fields. We choose to use large-eddy simulation models to generate the cloud fields because they can provide realistic cloud structure and have the flexibility to produce a wide variety of cloud types. We use cloud fields from three types of simulations: (i) marine stratocumulus simulated for three different soundings and several cloud condensation nuclei (CCN) concentrations, (ii) small marine trade cumulus simulated for 18 soundings from the Rain in Cumulus over the Ocean (RICO) experiment (Rauber et al. 2007), and (iii) fair-weather cumulus over land forced by surface fluxes with three different applied wind profiles. The emphasis in the selection of LES cloud fields is on obtaining a large number of scenes with a variety of boundary-layer cloud structure for training and testing the retrieval method. Nevertheless, the three sets of LES cloud fields undoubtedly represent a tiny fraction of the cloud structure in real boundary layer clouds and, so, should be thought of as examples of possible clouds rather than a comprehensive set. Summary characteristics of the LES cloud scenes are listed in Table 1. All of the LES fields have horizontally periodic boundaries.

The stratocumulus simulations were performed with a LES model with bin microphysics for a previous study (Ackerman et al. 2004). The dynamical core of the model is described in Stevens et al. (2002), and the cloud microphysics model is described in Ackerman et al. (1995). Radiative transfer during the large-eddy simulations is calculated for each column every minute with a two-stream model. The vertically stretched grid has 6-m grid spacing near the surface and in the inversion layer. There are 20 particle bins from 0.01 to 5 *μ*m radius for ammonium bisulfate aerosols and from 1 to 500 *μ*m radius for activated cloud droplets. The cloud droplet effective radius is calculated directly from the cloud droplet bin concentrations. The three basic nocturnal simulations are based on idealizations of meteorological conditions during the Atlantic Stratocumulus Transition Experiment (ASTEX), the First International Satellite Cloud Climatology Project (ISCCP) Regional Experiment (FIRE-I), and the Second Dynamics and Chemistry of Marine Stratocumulus field study (DYCOMS-II). The LES fields used here are from simulations with the initial aerosol concentration set to 40, 75, 150, and 300 cm^{−3} for the ASTEX and FIRE-I series and additionally to 20 cm^{−3} for DYCOMS-II. The cloud droplet concentration increases with aerosol concentration, which thus decreases the effective radius and increases the optical depth [for fixed liquid water path (LWP)]. Furthermore, in the ASTEX and FIRE-I simulations the LWP increases with aerosol concentration. The lifting condensation level is 340, 250, and 620 m and the inversion height is 700, 600, and 840 m for the ASTEX, FIRE-I, and DYCOMS-II simulations, respectively. LWC and effective radius fields are selected for seven times at hourly intervals (from 2 to 8 h) in each simulation, for a total of 91 scenes.

The RICO small marine cumulus simulations are performed with the University of California, Los Angeles (UCLA) LES model developed by B. Stevens. The base UCLA LES code is described by Stevens et al. (2005). This version (Savic-Jovcic and Stevens 2007) incorporates a simple model of microphysical precipitation processes using a prognostic drizzle mass mixing ratio and number concentration. The mixing ratio of cloud water is diagnosed assuming the cloud droplets are in equilibrium and have a fixed number concentration. Diabatic heating is due to condensation on and evaporation from cloud droplets and drizzle and a simple model of longwave radiative flux divergence.

The UCLA LES is run for a series of 8-h simulations based on NCAR RICO soundings launched from Barbuda from 7 December 2004 to 24 January 2005. The initial soundings are selected with minimum CAPE of 20 J kg^{−1}, maximum inhibition of 15 J kg^{−1}, and minimum equilibrium level pressure of 650 mb (to eliminate deep convection). There are no large-scale forcings other than parameterized ocean surface fluxes and the simple cloud radiative cooling parameterization. The UCLA LES code is modified to initialize the boundary-layer potential temperature field with a random field that is horizontally fractal (power law power spectrum) with rms fluctuations of 0.25 K at the peak level of 250 m. Of the simulations initiated with 26 soundings, 18 produced clouds after the initial spinup period of about one hour. The 3D cloud droplet LWC field for the 18 simulations is output every 15 min of simulation time (over which the fields change substantially), for a total of 464 fields. The cloud droplet concentration is set to 50 cm^{−3} (which is a typical in situ measured value for RICO) in the LES model and for deriving the droplet effective radius from the LWC, assuming a gamma size distribution. The cloud fraction of the RICO LES scenes is small (see Table 1), but this is fairly consistent with satellite observations from the RICO experiment (Zhao and Di Girolamo 2007).
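The horizontally fractal initialization described above can be sketched with a standard spectral construction. This is an illustrative version only: the spectral slope and the FFT-based synthesis are our assumptions, not the actual modification made to the UCLA LES code.

```python
import numpy as np

def fractal_field(n, slope=-5.0 / 3.0, rms=0.25, seed=0):
    """Generate a horizontally periodic random field whose power spectrum
    follows a power law |k|**slope, rescaled to the requested rms (K).
    The slope value is an illustrative choice."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)
    ky = np.fft.fftfreq(n)
    k = np.hypot(*np.meshgrid(kx, ky, indexing="ij"))
    k[0, 0] = 1.0                      # avoid division by zero at the mean mode
    amp = k ** (slope / 2.0)           # power ~ k**slope -> amplitude ~ k**(slope/2)
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    field = np.real(np.fft.ifft2(amp * phase))
    field -= field.mean()              # zero-mean perturbation
    return field * (rms / field.std())
```

The returned field is periodic (matching the LES boundary conditions) and would be added to the initial potential temperature at the chosen level.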

The land cumulus LWC fields are obtained from simulations with the UCLA LES model (Stevens et al. 2005) as done for previous research (Pincus et al. 2005). Although the focus of the current research is on optical depth retrievals of oceanic boundary layer clouds, these land cumulus clouds are included to consider deeper clouds and higher cloud fractions than the RICO cumulus fields provide. This version of the UCLA LES has no microphysical processes, and the cloud liquid water mixing ratio is simply the excess of the total water mixing ratio above saturation. Thus, the maximum LWC might be unrealistically high since precipitation processes are ignored. We use a droplet concentration of 50 cm^{−3} with the gamma distribution assumption since we treat these cloud fields as if they were oceanic. There are three types of LES runs for these clouds: constant 5 m s^{−1} wind (no shear) applied in the sounding, vertical wind shear, and wind direction changing with height. A total of 210 LWC fields are selected from the three initial wind profiles, five simulations with different initial temperature perturbations, and 14 times from each simulation from 325 to 715 min. Table 1 shows that the land cumulus cloud fields have higher cloud fraction, deeper clouds, and higher optical depths than the RICO trade cumulus.

## 3. Radiative transfer modeling

Radiative transfer is calculated for the LES cloud scenes to approximate MISR images. MISR has nine cameras (Diner et al. 1998): one viewing nadir (An), four viewing in the forward direction relative to the spacecraft motion (designated Af, Bf, Cf, and Df in increasing zenith angle), and four in the aft viewing direction (Aa, Ba, Ca, and Da). The zenith viewing angles at the surface are 0°, 26.1°, 45.6°, 60.0°, and 70.5° for cameras An, Af/Aa, Bf/Ba, Cf/Ca, and Df/Da, respectively. The cross-track instantaneous field of view (IFOV) and pixel spacing is 275 m for all of the off-nadir cameras and 250 m for the nadir camera, though the An imagery is remapped to the same 275-m image base as the other cameras. The along-track IFOVs are 236 m/*μ* (where *μ* is cosine of the viewing zenith angle), except for the nadir camera which is 214 m. The pixel spacing in the along-track direction is 275 m for all cameras. MISR has four spectral bands centered at 443, 555, 670, and 865 nm. MISR data are available globally at the full 275-m pixel spacing only at 0.67 *μ*m, and both ocean and vegetated surfaces are fairly dark at this wavelength. We choose 0.67 *μ*m as the single wavelength considered here, though we expect that the cloud retrieval simulations should be fairly independent of wavelength over the ocean.
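The camera geometry quoted above can be captured in a small helper (a sketch; the angles and IFOV values are from the text, while the function and variable names are ours):

```python
import math

# MISR camera zenith angles at the surface (deg), from Diner et al. (1998).
CAMERA_ZENITH_DEG = {"An": 0.0, "Af": 26.1, "Aa": 26.1, "Bf": 45.6, "Ba": 45.6,
                     "Cf": 60.0, "Ca": 60.0, "Df": 70.5, "Da": 70.5}

def along_track_ifov_m(camera):
    """Along-track IFOV in meters: 236 m / mu for the off-nadir cameras
    (mu = cosine of the viewing zenith angle), 214 m for the nadir camera."""
    if camera == "An":
        return 214.0
    mu = math.cos(math.radians(CAMERA_ZENITH_DEG[camera]))
    return 236.0 / mu
```

For the most oblique D cameras this gives an along-track IFOV of roughly 700 m, which is why their imagery appears smeared in the along-track direction.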

A small MISR image is simulated for each LES field. Perhaps the greatest approximation in this procedure is that the LES cloud fields do not change, either by advection or temporal evolution, over the 7.0-min observing period of MISR (from cameras Df to Da). In principle, the operational MISR cloud wind speed algorithm (Horváth and Davies 2001) could be used to remove the cloud advection by remapping to a common time. The temporal evolution of convection at the spatial scale considered, however, is not negligible. Nevertheless, we ignore the cloud temporal evolution because we do not have LES fields at the approximately 1-min intervals that would be required to simulate this effect.

Three-dimensional radiative transfer is calculated with the Spherical Harmonics Discrete Ordinate Method (SHDOM) (Evans 1998). The solar zenith angles and relative azimuth angles (between solar and viewing azimuths) are appropriate for the MISR viewing geometry and are listed in Table 2. Only one solar zenith angle is chosen for the RICO simulation owing to computational limitations and because one sun angle is appropriate for the RICO experiment. To increase the number of scenes for the stratocumulus runs, we compute radiative transfer for two azimuthal orientations (0° and 90°) for each LES scene. For the cumulus runs, for which wind shear is important for cloud morphology, the correct orientation between the wind direction in the LES modeling and the MISR viewing azimuth is maintained. Radiances are calculated at the SHDOM domain top of 16 km for nine viewing angles: *μ* = 1.0 and *μ* = 0.898, 0.700, 0.500, and 0.334 for the two viewing azimuths separated by 180°. The solar flux is set to *π* so that the radiance output by SHDOM has reflectance units. The number of SHDOM discrete ordinates is *N _{μ}* = 8 and *N _{ϕ}* = 16, and the cell splitting accuracy is 0.05. The base grid for the land cumulus simulations is 80 × 80 horizontally to conserve memory, but the full 160 × 160 grid is sampled during the SHDOM cell splitting procedure. Single scattering from gamma size distributions of cloud droplets is computed with the Mie code in the SHDOM distribution. Molecular Rayleigh scattering is included, but aerosol effects are not.

To investigate the cloud retrieval information in multiangular satellite data, it is important to consider surface reflection with angular variation, that is, non-Lambertian reflection. We choose to use an ocean surface reflection model because the dark, relatively uniform surface is favorable for visible satellite retrievals and the ocean is ubiquitous. SHDOM includes an ocean surface bidirectional reflection function, which depends on near-surface wind speed and chlorophyll *a* pigment concentration (set to zero here). For the RICO runs the surface wind speed is set to the value in the LES initial sounding. For the stratocumulus and land cumulus runs the near-surface wind speed is chosen randomly for each SHDOM run from a Rayleigh distribution. The Rayleigh distribution is obtained as the square root of the sum of squares of two zero-mean Gaussian distributions, each with an rms of 8 m s^{−1}. In all cases, the surface wind speed is not allowed to fall below 5 m s^{−1} to avoid the narrow specular peak in surface reflection that cannot be resolved by the limited angular resolution used in SHDOM.
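The wind-speed sampling just described can be sketched as follows (assuming, as stated, a per-component Gaussian rms of 8 m s^{−1}; the function name is ours):

```python
import numpy as np

def sample_wind_speed(rng, sigma=8.0, floor=5.0):
    """Draw a near-surface wind speed (m/s) from a Rayleigh distribution,
    constructed as the magnitude of two zero-mean Gaussian components with
    rms `sigma`, and clipped below `floor` to avoid the narrow specular
    reflection peak at low wind speeds."""
    u = rng.normal(0.0, sigma)
    v = rng.normal(0.0, sigma)
    return max(float(np.hypot(u, v)), floor)
```

With these parameters the mean sampled speed is a little above 10 m s^{−1} (the Rayleigh mean is *σ*√(π/2) ≈ 10 m s^{−1}, raised slightly by the 5 m s^{−1} floor).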

The SHDOM radiance fields are transformed to MISR images by remapping and averaging. The SHDOM radiances are output on a horizontal grid at the LES column resolution at an altitude of 16 km. The radiances for the MISR directions are first reprojected to match at the surface using the periodicity of the LES scene domain (if collocated at 16 km, the D cameras would view the surface 45 km from the nadir view). The MISR pixels are formed by a remapping/averaging procedure that accounts for the MISR along-track direction relative to the LES orientation. The SHDOM radiances are averaged to MISR pixels by bilinearly interpolating the SHDOM radiances on a 25 × 25 grid within each (275 m)^{2} MISR pixel sample. The MISR IFOVs for each camera are modeled with uniform (or boxcar) averaging. To avoid boundary issues, we use the assumption of periodicity to create MISR images that are larger than the original LES scene. Figure 1 shows examples of the simulated MISR images for each cloud type. The stratocumulus simulation shows the increased reflectance in the forward scattering direction due to the Mie phase function. The cumulus images show the large parallax shift of the taller clouds (wrapping around the periodic boundaries for the larger viewing angles). The degraded resolution in the along-track direction is clearly seen in the more oblique camera views.
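The averaging step of the remapping procedure might look roughly like the sketch below. It is simplified relative to the text: nearest-neighbor subgrid sampling stands in for the bilinear interpolation, the along-track IFOV elongation is ignored, and the grid/pixel geometry is our assumption.

```python
import numpy as np

def boxcar_to_misr(radiance, dx, pixel=275.0, nsub=25):
    """Average a high-resolution radiance field (uniform grid spacing dx in
    meters) to MISR-like pixels by sampling an nsub x nsub subgrid inside
    each pixel and boxcar averaging. Periodic wrapping mimics the
    horizontally periodic LES scenes."""
    ny, nx = radiance.shape
    npix_y = int(ny * dx // pixel)
    npix_x = int(nx * dx // pixel)
    out = np.empty((npix_y, npix_x))
    for j in range(npix_y):
        for i in range(npix_x):
            # subgrid sample positions (grid units) inside this pixel
            ys = (j * pixel + (np.arange(nsub) + 0.5) * pixel / nsub) / dx
            xs = (i * pixel + (np.arange(nsub) + 0.5) * pixel / nsub) / dx
            jj = np.round(ys).astype(int) % ny  # periodic wrap
            ii = np.round(xs).astype(int) % nx
            out[j, i] = radiance[np.ix_(jj, ii)].mean()
    return out
```

The same routine would be applied per camera after each camera's radiance field has been reprojected to the common collocation height.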

The MISR simulation procedure also calculates the “true” optical depth for use in the retrieval algorithm training and testing. The true optical depth is defined as the geometric optics optical depth [from extinction = 1500 km^{−1} *μ*m g^{−1} m^{3} × LWC/*r _{e}*, where *r _{e}* is the effective radius]. This optical depth is calculated for the LES columns and then averaged to the MISR nadir pixels in a procedure similar to the one for radiances.
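The column integration implied by this extinction formula is straightforward (an illustrative sketch with the units stated in the text; the function name and clear-layer handling are ours):

```python
import numpy as np

def optical_depth(lwc, r_e, dz):
    """Geometric-optics optical depth of one column: extinction
    beta [km^-1] = 1500 * LWC [g m^-3] / r_e [um], integrated over layer
    thicknesses dz [km]. Layers with r_e = 0 are treated as clear."""
    lwc = np.asarray(lwc, dtype=float)
    r_e = np.asarray(r_e, dtype=float)
    safe_re = np.where(r_e > 0.0, r_e, 1.0)          # dummy value for clear layers
    beta = np.where(r_e > 0.0, 1500.0 * lwc / safe_re, 0.0)
    return float(np.sum(beta * dz))
```

For example, a single 100-m layer with LWC = 0.3 g m^{−3} and *r _{e}* = 10 *μ*m has extinction 45 km^{−1} and optical depth 4.5.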

Given these datasets of cloud extinction and MISR reflectances, one way to start exploring the optical depth information in multiangular radiances is to investigate the relationship between the reflectances and optical paths at the nine MISR directions. A compact way to show the relationship is as a 9 by 9 correlation matrix over the MISR directions. The correlation between reflectance and optical path is particularly interesting for small cumulus clouds; thus we show it for all the RICO clouds in Fig. 2. Two correlation matrices are shown, one for the MISR camera pixels collocated at 1.25-km height (roughly the middle of the RICO cloud layer) and the other for the camera pixels collocated at cloud top (or at 1.25 km for clear-sky pixels). The cloud top collocation uses the true cloud top height to avoid large uncertainties associated with stereo-matching algorithms for small cumulus clouds. To stay in the linear radiative transfer regime, as appropriate for linear correlation analysis, only those pairs of simulated MISR pixels with optical paths between 0.5 and 15 and reflectances less than 0.40 are included in the correlation matrices. The correlation with optical depth (i.e., the optical path along the An camera) is highest (0.91) for the nadir reflectance. The correlation is quite low for the most oblique cameras. The decrease in correlation with camera separation is slower for the cloud top height collocation, but has a similar pattern. These correlation matrices perhaps indicate that for very small cumulus clouds most of the information about single MISR pixel optical depth is obtained from the MISR cameras near the nadir.
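The screened correlation matrix could be computed along these lines (a sketch: the screening thresholds are the ones quoted in the text, while the array layout and pairwise application of the screening are our assumptions):

```python
import numpy as np

def reflectance_path_correlations(refl, path, r_max=0.40, p_min=0.5, p_max=15.0):
    """9 x 9 matrix of linear correlations between reflectance at camera i
    and optical path at camera j. refl and path are arrays of shape
    (9, npix); each (i, j) entry uses only pixels passing the screening
    (optical path in [p_min, p_max], reflectance below r_max)."""
    ncam = refl.shape[0]
    corr = np.full((ncam, ncam), np.nan)
    for i in range(ncam):
        for j in range(ncam):
            ok = (refl[i] < r_max) & (path[j] >= p_min) & (path[j] <= p_max)
            if ok.sum() > 2:
                corr[i, j] = np.corrcoef(refl[i][ok], path[j][ok])[0, 1]
    return corr
```

With perfectly correlated synthetic inputs the diagonal is unity, which is a useful sanity check before applying the routine to simulated scenes.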

## 4. Neural network retrieval procedure

We retrieve the first two moments of the optical depth distribution for pixel patches of varying size from the two moments of the MISR reflectances for various sets of cameras. We believe that the approach of considering an area of several MISR pixels for the cloud retrieval rather than a single MISR pixel is appropriate owing to the inherently three-dimensional nature of clouds, especially cumulus. From a 3D radiative transfer viewpoint, cumulus clouds are not flat objects at which MISR pixels from different cameras can be collocated. The solar and MISR viewing geometries interact with the cloud geometric depth to provide horizontal scales over which the multiangular information about cloud optical depth is spread out (Oreopoulos et al. 2000). Radiative smoothing (Marshak et al. 1995) and cloud shadowing of the solar direct beam also partially decorrelate the optical depth and radiance in both cross-track and along-track directions. Figure 3 illustrates how the oblique MISR viewing directions do not sense the nadir optical depth of small cumulus clouds at the scale of one MISR pixel. The correlation matrices between simulated MISR reflectance and optical path for all pairs of MISR cameras (Fig. 2) show quantitatively that the optical paths sensed by oblique MISR cameras can be quite different from the nadir optical depth.

The retrieval procedure first collocates the simulated MISR reflectances to a height near the middle of the cloud layer (see Fig. 3): 0.65 km for stratocumulus (near the median cloud top height), 1.25 km for RICO cumulus, and 1.80 km for land cumulus. The mean and standard deviation of the MISR reflectances and the true optical depth are calculated for each pixel patch. For simplicity we choose square pixel patches. The mean and standard deviation of the true optical depth and the simulated MISR reflectances for the seven cameras from Cf to Ca for each pixel patch are the cases used in the retrieval simulations. The D cameras are not included in the retrieval simulations because the horizontal radiative scale would then be tan(70°) = 2.8 times the depth of the cloud layer and the along-track spatial resolution is quite poor compared to the other cameras. The pixel patches are chosen to be contiguous and nonoverlapping and to cover or slightly extend beyond one LES scene in each MISR image. Of course, the standard deviations are not used for the single pixel patches. Table 3 lists the number of pixel patches used for each retrieval simulation. Only those pixel patches with a mean optical depth above 0.2 are used for the two cumulus series.
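The patch statistics used as retrieval inputs and targets are block means and standard deviations; a minimal sketch (our simplification: edges that do not fill a whole patch are simply dropped):

```python
import numpy as np

def patch_stats(field, patch):
    """Mean and standard deviation over contiguous, nonoverlapping
    patch x patch pixel blocks of a 2-D field. Returns two 1-D arrays,
    one value per block, in row-major block order."""
    ny, nx = field.shape
    ny, nx = (ny // patch) * patch, (nx // patch) * patch
    blocks = field[:ny, :nx].reshape(ny // patch, patch, nx // patch, patch)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    return blocks.mean(axis=1), blocks.std(axis=1)
```

The same routine would be applied to the true optical depth field and to the collocated reflectance field of each camera, giving the mean/standard-deviation pairs for every pixel patch.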

A neural network is used to perform a nonlinear fitting of the functional relationship between the mean and standard deviation of simulated MISR reflectances and the mean and standard deviation of optical depth (TauMean and TauStd). One advantage of a neural network approach is that it can be trained with, and generalize from, a relatively small number of input cases (Bishop 1995). The neural network model is an input layer of units, one hidden layer of neurons, and an output layer of neurons. The output *H _{j}* of the *N _{H}* hidden layer neurons is related to the *N _{I}* input unit values *U _{i}* by

$$H_j = f\left( \sum_{i=1}^{N_I+1} W_{ij} U_i \right),$$

where *f* is the sigmoid function *f*(*z*) = (1 + *e*^{−*z*})^{−1}. The hidden layer values are input to the output layer neurons, producing the neural net output *S _{k}* by

$$S_k = f\left( \sum_{j=1}^{N_H+1} \hat{W}_{jk} H_j \right).$$

There are one or two output values (TauMean and TauStd). The last input unit *U _{N_I+1}* and last hidden unit *H _{N_H+1}* are fixed at unity. The *N _{I}* input values of MISR reflectance statistics are normalized to the range [0.1, 0.9] to make the input units *U _{i}*. The weights *W _{ij}* and *Ŵ _{jk}* connect neurons in one layer with those in the next layer and thus contain the nonlinear fitting information. The weights are adjusted with a conjugate gradient optimization algorithm to minimize the rms error between the normalized output *S _{k}* and the true normalized TauMean and TauStd [*S*^{(T)} _{k}]. The weights are initialized with uniform random numbers between −1 and +1.

The cases input to the neural network retrieval program are randomly divided into two parts: a training set that is used to adjust the neural network weights and a testing set that is used to prevent “overfitting.” Overfitting occurs when the network output fits too closely to the training cases; that is, the fit does not generalize well. The problem of overfitting is ameliorated by reducing the network complexity in choosing the number of hidden units and by “early stopping.” Early stopping is implemented by stopping the conjugate gradient minimization of the training set rms error at the iteration that minimizes the *testing set* rms error.
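The early-stopping logic can be sketched generically, independent of the optimizer (illustrative only; the actual retrieval uses conjugate gradient minimization, abstracted here as a `step` callable):

```python
import numpy as np

def train_with_early_stopping(step, rms_test, n_iter=500):
    """Generic early-stopping loop: `step` performs one optimization
    iteration on the training set and returns the current weights;
    `rms_test` evaluates the testing-set rms error for given weights.
    The weights kept are those from the iteration with the lowest
    testing-set error, not the final iteration."""
    best_err, best_weights, best_iter = np.inf, None, -1
    for it in range(n_iter):
        weights = step()
        err = rms_test(weights)
        if err < best_err:
            best_err, best_weights, best_iter = err, weights, it
    return best_weights, best_err, best_iter
```

If the testing error curve is U-shaped (decreasing, then rising as overfitting sets in), the loop returns the weights from the bottom of the U.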

We take an ensemble approach in performing the neural network retrieval simulations so that the resulting rms retrieval errors are statistically robust. The neural network retrieval program is run 100 times, each time with a different pseudorandom seed that determines the initial network weights and how the input cases are randomly divided into halves for the training and testing sets. The rms retrieval error for TauMean and TauStd is the rms error between the true and neural network predicted values over the training set averaged over the 100 ensemble realizations. The standard error of the mean is calculated to provide an error estimate for the retrieval errors; these standard errors turn out to be very small. Since the retrieval method is a type of least squares fitting and the testing and training datasets are from the same statistical population, there is (asymptotically) no bias in the retrievals.

We wish to determine the information content of the multiple viewing directions in MISR data with respect to cloud optical depth. We do this by training and testing neural networks with different input configurations of MISR cameras. The five input configurations are listed in Table 4. The first two use only the nadir camera, first the mean reflectance only and then both the mean and standard deviation of MISR reflectance. The other input configurations increase the number of MISR cameras and use both the mean and standard deviation of reflectance. We believe that using the same statistical technique based on 3D radiative transfer for the nadir-only and multiple-angle retrievals provides a fairer assessment of the multiangular information than would a comparison to a standard 1D radiative transfer–based retrieval. Section 6 does include a comparison, however, with 1D retrievals.

The number of neural network hidden units is increased with the number of cameras, on the theory that the underlying functional relationships become increasingly complex. The optical depth retrieval accuracy results do not change (by more than the error bars) for selected retrievals when the number of hidden units is increased by five for each configuration.

## 5. Retrieval simulation results

Cloud optical-depth retrieval results from the MISR simulations are presented as rms errors in TauMean and TauStd (optical depth mean and standard deviation) normalized by the true standard deviation of the retrieved quantity. A retrieval with no skill would have a ratio of rms error to true standard deviation equal to unity. This normalization is important because the amount of variability in TauMean decreases with the pixel patch size. Figures 4 through 8 show the TauMean and TauStd retrieval accuracy as a function of pixel patch, or averaging size, for the five series of cloud types and solar zenith angles (SZAs).
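The normalization of the error metric described above is simply (sketch; the function name is ours):

```python
import numpy as np

def normalized_rms_error(true_vals, retrieved):
    """rms retrieval error normalized by the true standard deviation of
    the retrieved quantity. A value of 1 corresponds to no retrieval
    skill (equivalent to always predicting the mean); 0 is a perfect
    retrieval."""
    true_vals = np.asarray(true_vals, dtype=float)
    retrieved = np.asarray(retrieved, dtype=float)
    rmse = np.sqrt(np.mean((retrieved - true_vals) ** 2))
    return rmse / true_vals.std()
```

Predicting the population mean for every case yields exactly 1.0, which is why unity marks the no-skill level in Figs. 4 through 8.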

For the stratocumulus scenes (Figs. 4 and 5) there is a substantial decrease in TauMean and TauStd retrieval error from using only the nadir camera to using five or seven MISR cameras. This accuracy improvement is especially large in a fractional sense for the 25° SZA, which has the smallest retrieval error. For SZA = 45° there is a steady decrease in TauMean and TauStd error with the addition of more MISR cameras, whereas for SZA = 25° there is a large decrease for the five-camera configuration but little further improvement for the seven-camera configuration. The retrieval error in TauMean decreases with the averaging size for all MISR camera configurations, except for the nadir view without the reflectance standard deviation, which increases slightly for the 7-pixel patch size. The TauStd retrievals have almost no skill if there is no reflectance variability information.

For the cumulus scenes (Figs. 6 to 8) there is a more modest decrease in TauMean and TauStd retrieval error from using multiple MISR cameras. Although the error decrease is small, it is much larger than the uncertainty in the statistical retrieval technique, as indicated by the error bars. The improvement in retrieval accuracy for multiple cameras does not markedly depend on the averaging size, contrary to the expectation in the first paragraph of section 4. As with the stratocumulus scenes, for the higher sun (SZA = 21°) there is little retrieval accuracy improvement between 1 and 3 or between 5 and 7 cameras, but there is a steadier increase in accuracy for lower sun (SZA = 45°). Not using reflectance standard deviation information (the first configuration) results in much poorer retrieval accuracy, even for TauMean, as the pixel patch size is increased.

The previous results were for the MISR cameras collocated (or registered) at a fixed height in the middle of the cloud layer for each cloud type. One could imagine that the retrieval accuracy would be better if the MISR pixels are collocated at the cloud top height. With real MISR data the cloud top height must be obtained from stereo matching of images from different cameras. The operational stereo cloud height algorithm (Moroney et al. 2002) uses the three A cameras and produces cloud heights at a spacing of 1.1 km using a 6 by 10 pixel pattern-matching patch size. Thus, the MISR cloud height algorithm cannot resolve the detailed cloud top profile of small cumulus (Marchand et al. 2007). We avoid this problem here by collocating the cameras for each MISR pixel to the true cloud top height (defined as where the optical depth reaches 0.25) or at the fixed midlayer height for clear pixels. Thus, these retrieval simulation results presumably give the most improvement that would be expected by attempting to register the MISR cameras to cloud top. Figure 9 compares the TauMean retrieval error for seven cameras with cloud top camera collocation to the fixed height collocation. For stratocumulus the cloud top collocation slightly improves the retrieval accuracy for patches larger than a single pixel. For the two cumulus cloud types the retrieval error for cloud top collocation varies between the errors for nadir-only retrievals and the fixed height collocation, but is never significantly better than the fixed height collocation retrievals. The different results for stratocumulus compared to cumulus are probably due to the low horizontal inhomogeneity of the overcast clouds, which results in the oblique reflectances being much more related to optical depth than is the case for small cumulus clouds (see Figs. 2 and 3).

## 6. Discussion

It is commonly believed that the more directions in which solar radiation reflected from clouds is observed, the more accurate the retrieved cloud properties will be. However, this is not obvious for highly three-dimensional cloud fields. While nadir-view radiances are the most correlated with (nadir) optical depth, the oblique-view radiances are more correlated with the optical path oriented along the viewing direction (see Fig. 2). Can we do better with multiple viewing angles than with nadir-only reflectance? MISR multiangular measurements of solar radiation provide an excellent framework to study this question. To understand and correctly interpret MISR observations in terms of cloud optical depth, we simulated MISR measurements for a wide variety of cloud fields, from broken marine trade cumulus and fair-weather cumulus to overcast stratocumulus, using LES cloud models (section 2) and performed 3D radiative transfer calculations (section 3). For the retrievals we used a neural network technique (section 4). [We also tested a Bayesian inversion (Evans et al. 2002) but, given the limited size of the input dataset, found the neural network approach more accurate.] Although powerful and straightforward, this statistical retrieval is only a nonlinear fit and does not provide any physical explanation of the retrieval results. Nevertheless, based on Figs. 4 to 8, one can state that the retrieval accuracy improves with

- the addition of reflectance standard deviation, especially for cumulus clouds and (obviously) for the retrievals of standard deviations;
- multiple viewing angles, though perhaps only marginally for cumulus;
- increasing the averaging size from a single pixel to 3 × 3 (in most cases to 5 × 5) pixels.

Below we discuss these statements with a simple analysis of the simulated MISR pixels for stratocumulus clouds.

From the stratocumulus scenes with a solar zenith angle of 45°, let us select only those pixels that have an An reflectance value of 0.5 ± 0.01. (This is around the peak of the distribution of An; the total number of pixels is 1189.) These values of An correspond to pixels with optical depths ranging from 9 to 19. Using the independent pixel approximation (IPA) as a reference, the range 0.49 ≤ An ≤ 0.51 corresponds to optical depths from 11 to 13, depending on cloud droplet size and ocean reflectance. Figure 10 illustrates the probability density functions of these two distributions of cloud optical depths. The much broader range of optical depths for the 3D calculations (see the inset) that gives An = 0.5 ± 0.01 is due to cloud horizontal inhomogeneity causing shadowing (thus thicker pixels reflect less) or brightening (thus thinner pixels receive extra illumination and reflect more). How can multiangular information help reduce this range of cloud optical depth? In other words, how can additional viewing angles help identify pixels that are in shadow or receive extra illumination from a neighboring pixel?

Theoretically, one can identify shadowed pixels using an oblique viewing direction if the reflectance in this direction increases monotonically with optical depth (i.e., the pixels are all illuminated as seen from the oblique viewing direction). In this case, pixels with the same nadir reflectance but higher oblique reflectance will be marked as shadowed. Unfortunately, pixels that are shadowed in the nadir direction are most likely also shadowed in oblique viewing directions. The direction straight back toward the sun would be the best to avoid shadows, but the relative azimuth viewing angles for our simulated scenes (consistent with MISR viewing geometry) are 40°–60° (see Table 2). As a result, for a single pixel retrieval we can get only marginal improvement in retrieval accuracy with more viewing angles, as illustrated by the small decrease in width of the distribution in Fig. 10, for which pixels having extreme Aa camera reflectances are removed.

Let us now average to the 3 × 3 pixel patches and again select those patches that have An = 0.5 ± 0.01. The black curve in Fig. 11 illustrates this case and shows a much narrower distribution. Next, we analyze the standard deviation of the 3 × 3 An reflectance values and find that a larger standard deviation corresponds to a cloud structure with a higher chance of having both illuminated and shadowed areas. In contrast, the patches with lower standard deviations in An are closer to horizontally homogeneous ones. If we analyze only those patches with an An standard deviation lower than the mean standard deviation, the distribution is somewhat narrower (see green curve in Fig. 11). This partially explains why neural network retrievals with added standard deviation have smaller rms error (this is especially true for the more variable cumulus clouds; see Figs. 6–8), which is consistent with Cornet et al. (2004).
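The patch-averaging and variability-screening procedure above can be sketched as follows (an illustrative Python fragment with a synthetic reflectance field; `patch_stats` and the random stand-in data are our own, not the study's code):

```python
import numpy as np

def patch_stats(field, patch=3):
    """Mean and standard deviation of reflectance over non-overlapping
    patch x patch pixel blocks of a 2D field (hypothetical helper)."""
    ny, nx = (s // patch * patch for s in field.shape)
    blocks = field[:ny, :nx].reshape(ny // patch, patch, nx // patch, patch)
    blocks = blocks.swapaxes(1, 2).reshape(-1, patch * patch)
    return blocks.mean(axis=1), blocks.std(axis=1)

rng = np.random.default_rng(0)
an = rng.uniform(0.3, 0.7, size=(30, 30))      # stand-in An reflectance field
mean_an, std_an = patch_stats(an, patch=3)

sel = np.abs(mean_an - 0.5) < 0.01             # patches with An = 0.5 ± 0.01
# The more homogeneous subset: selected patches whose An standard
# deviation is below the mean standard deviation of the selection.
quiet = sel & (std_an < std_an[sel].mean())
```

The `quiet` subset plays the role of the green curve in Fig. 11: screening by reflectance variability narrows the optical depth distribution consistent with a given mean reflectance.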

Note that it is hard to predict whether adding standard deviations to the mean values matters more than adding multiple angles. The neural network retrieval technique allows us to separate these two effects numerically. The results (Figs. 4 and 5) show that, at least for stratocumulus clouds and 3 × 3 pixel patches, additional cameras may improve retrievals more than adding the standard deviation to the nadir-only camera. Obviously, for the retrievals of standard deviations (lower panels in Figs. 4 to 8) it is absolutely crucial to add standard deviations to the mean values for each patch.

Averaging to approximately 1 km × 1 km (3 to 5 MISR pixels) helps the retrievals for most combinations of the cameras. However, further averaging does not necessarily improve the retrievals (see Figs. 6–8): at larger scales standard deviations become less informative, photon horizontal fluxes become much weaker, and the plane-parallel (PP) bias (Cahalan et al. 1994) dominates and causes the increase of uncertainties with larger patches. It is worth noting that the optimal scale of 1 km is consistent with Fig. 13b of Davis et al. (1997), obtained for stratocumulus clouds.

Which oblique cameras best complement the nadir one? Figure 12 illustrates the retrievals for the nadir-only camera and for the nadir plus each of the oblique cameras. While for the stratocumulus scenes we see slightly better retrievals with the aft cameras, for the cumulus scenes adding forward cameras is more effective. This is especially true (a 7% difference) between the 60° viewing-angle cameras (Cf and Ca) for the RICO clouds. The explanation may be that the RICO clouds are the thinnest ones, and extra illumination from neighboring pixels seen from the low forward direction may compensate for the shadowing seen from the low aft direction. In any case, the difference is not significant enough to make a strong statement about the predominant direction. Note that in Fig. 12 we also added (as empty symbols) the values of retrievals with mean An reflectance values only. The improvements from adding the standard deviation and from adding one extra oblique camera are comparable.

It is also of interest to see which cameras are the most efficient for a single-camera retrieval of cloud optical depth. Looking at the linear correlation matrices plotted for the RICO clouds in Fig. 2, we see that, as expected, the nadir camera is the best for (vertical) optical depth, with a correlation coefficient of 0.91. The efficiency of the other cameras naturally decreases as the viewing angle increases. For the correlation matrix with the cloud-top-collocated MISR cameras, the correlation is lower for the forward oblique cameras than for the aft cameras. Note, however, that correlation coefficients characterize the scatter around a linear relationship rather than a bias.

So far we have focused only on statistical retrievals comparing the accuracy between multiangle and nadir-only retrievals using the same neural net statistical algorithm. Although the statistical retrievals (trained on subsets of the same dataset) are unbiased, physical 1D retrievals are generally biased. It is interesting to compare the previous 3D radiative-transfer-based retrievals with 1D retrievals. Standard plane-parallel cloud optical depth retrievals assume that the pixels (at whatever scale) are uniform and often use a lookup table retrieval method. This is emulated here by computing 1D radiative transfer on the LES cloud field columns with SHDOM in IPA mode (which allows no radiative interaction between columns), and fitting a function [*τ* = *a*(*R* − *R*_{clr})/(1 + *bR* + *cR*^{2})] that relates optical depth *τ* to the nadir reflectance *R* and *τ* = 0 reflectance (*R*_{clr}). This function is then used to perform plane-parallel retrievals on the patch average reflectance of MISR pixels simulated with 3D radiative transfer. There is another approach to 1D radiative transfer retrievals, however, which is to use the statistical cloud variability within a MISR pixel (and over a pixel patch) that is contained in the higher resolution LES fields. This independent pixel statistical (IPstat) retrieval is performed with the same procedure as the 3D neural network retrievals, except that SHDOM IPA (1D) radiances are used instead of the 3D radiances. The neural network is trained with the mean and standard deviation of nadir MISR pixel reflectances from 1D radiative transfer, but evaluated with the mean and standard deviation of MISR pixels simulated with 3D radiative transfer.
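The plane-parallel lookup step can be illustrated with a short sketch (the coefficients below are assumed synthetic values, not fits from the study). A convenient property of the fit function is that it becomes linear in *a*, *b*, *c* after rearrangement, so no nonlinear solver is needed:

```python
import numpy as np

def fit_pp_lookup(R, tau, R_clr):
    """Fit tau = a*(R - R_clr) / (1 + b*R + c*R**2) to IPA (1D) radiances.

    Rearranged as tau = a*(R - R_clr) - b*(tau*R) - c*(tau*R**2),
    which is linear in (a, b, c) and solvable by least squares.
    """
    A = np.column_stack([R - R_clr, -tau * R, -tau * R**2])
    coeffs, *_ = np.linalg.lstsq(A, tau, rcond=None)
    return coeffs

def pp_retrieve(R, coeffs, R_clr):
    """Evaluate the fitted relation: plane-parallel optical depth from reflectance."""
    a, b, c = coeffs
    return a * (R - R_clr) / (1.0 + b * R + c * R**2)

# Synthetic demonstration with assumed coefficients:
a_true, b_true, c_true, R_clr = 30.0, -1.2, 0.5, 0.05
R = np.linspace(0.1, 0.9, 50)
tau = a_true * (R - R_clr) / (1.0 + b_true * R + c_true * R**2)

coeffs = fit_pp_lookup(R, tau, R_clr)
tau_retrieved = pp_retrieve(R, coeffs, R_clr)
```

In the study the fit is made to SHDOM IPA radiances and then applied to patch-average reflectances simulated with 3D radiative transfer; the mismatch between the two radiative transfer models is what produces the plane-parallel retrieval bias.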

The accuracy of these two types of 1D retrievals from nadir reflectances is compared in Fig. 13 with the previously shown 3D neural network retrieval results. For stratocumulus with SZA = 45° the IPstat retrievals are just as accurate as the 3D nadir-only retrievals, and the PP retrievals are only slightly worse for averaging sizes of five pixels and smaller. The IPstat retrievals are slightly less accurate than the 3D nadir retrievals for stratocumulus with SZA = 25° (not shown). For the RICO small cumulus clouds there are huge biases and correspondingly large rms errors in optical depth retrievals from the IPstat and PP methods. The retrieval accuracy of PP is substantially worse than that of IPstat at all pixel patch sizes for these cumulus clouds. A simple explanation for the lack of improvement between IPstat and 3D for stratocumulus clouds is that the IPA bias (from not accounting for the net horizontal fluxes) is negligibly small for SZA = 45° and a bit larger for SZA = 25°. In other words, accounting for 3D radiative transfer in training makes no difference for SZA = 45°. In contrast, for the RICO cumulus clouds the (negative) bias is large; additional radiation from illuminated neighboring pixels does not compensate for the shortage of radiation caused by shadowing.

Finally, in our statistical retrievals the neural net was trained on a subset of the same dataset. What if we train the neural network with some generic cumulus dataset? Will it be applicable to all cumulus clouds? To begin answering this question we performed retrieval experiments using neural networks trained on land cumulus clouds and applied to RICO cumulus, and vice versa. We consider this a first step toward applying our retrieval algorithm to real MISR data rather than to simulated data. The results are shown in Table 5. First, there are biases in both average optical depth (TauMean) and standard deviation of optical depth (TauStd) if we use the “wrong” cloud dataset. The bias is positive when retrieving RICO clouds with the land cumulus neural network (and negative the other way around) because the land cumulus clouds have a much higher median optical depth. The increase in rms error when using the wrong cloud dataset is smaller for retrievals trained on the land cumulus dataset because it includes a wider range of optical depths and cloud thicknesses. The rms retrieval errors are much larger than those obtained with the original cloud dataset and all seven MISR cameras, but they are smaller than those from the original cloud dataset with only the nadir camera and no standard deviations. We find these results encouraging: the retrievals still show substantial skill, and so it seems worthwhile to try the approach with real MISR observations.

## 7. Conclusions

This study uses a retrieval simulation approach to determine the impact of MISR’s multiple viewing directions on optical depth retrieval accuracy for boundary layer clouds. MISR pixel radiances are simulated from hundreds of cloud fields from three series of LES model runs: (i) stratocumulus clouds from ASTEX, FIRE-I, and DYCOMS-II simulations with a model having bin-resolved microphysics, (ii) small marine trade cumulus clouds from simulations based on 18 RICO soundings, and (iii) land surface–forced fair-weather cumulus clouds in three different wind shear environments. Radiances at 0.67-*μ*m wavelength in MISR viewing directions are computed from the LES cloud fields with 3D radiative transfer using SHDOM. Cloud optical properties are calculated with Mie theory for gamma distributions of cloud droplets, assuming a fixed number concentration for the RICO and land cumulus series. Surface bidirectional reflectance is calculated using an ocean reflection model with wind speed varying with the LES cloud field. Radiative transfer is performed for two solar zenith angles for the stratocumulus and land cumulus series and one SZA for the RICO series. The radiances computed with SHDOM are averaged over the MISR instantaneous field of view (which depends on the camera) and sampled at 275-m pixel spacing to simulate MISR reflectances.

The simulated MISR reflectances are much more correlated with the optical path along the viewing direction than with the single pixel (vertical) optical depth. Therefore, instead of focusing on single pixel optical depth, we perform retrievals of the mean and standard deviation of optical depth (TauMean and TauStd) over pixel patches of varying size. The inputs to the retrievals are the mean and standard deviation (over the patches) of MISR reflectances for various sets of cameras, collocated to a height in the middle of the cloud layer appropriate for each of the three cloud types. The retrievals are performed with an ensemble neural network approach. The neural network is trained on a randomly chosen half of the dataset, while the other (testing) half is used to prevent overfitting and to evaluate the retrieval accuracy. With 100 realizations of the training, the mean rms retrieval error is robust (i.e., the random error is small).

The rms retrieval error in TauMean and TauStd is normalized by the standard deviation of each quantity, and thus a ratio less than unity shows some retrieval skill. We summarize the most important results in Table 6 by considering how the normalized retrieval error in TauMean for the 5 × 5 pixel patches decreases from using only the An (nadir) camera to using seven MISR cameras (all but the most oblique D cameras). There is a large fractional decrease in mean optical depth retrieval error for the stratocumulus case with high sun, which already has a small retrieval error. The cumulus cases have only a small decrease (around 14%) in retrieval error of TauMean from using seven MISR cameras instead of only nadir. The decrease in TauStd retrieval error with multiple cameras is similarly modest. In general, the normalized retrieval error of TauMean decreases with pixel patch size from 1 × 1 to 5 × 5, although the standard deviation of TauMean also decreases rapidly for the cumulus cases. The additional input of reflectance standard deviation to the nadir mean reflectance results in a modest decrease in TauMean retrieval error for the stratocumulus cases but a large decrease in error for the cumulus cases. Not surprisingly, the standard deviation of reflectance is crucially important for the retrieval accuracy of TauStd.

These statistical retrievals based on 3D radiative transfer are much more accurate than plane-parallel retrievals for cumulus clouds. For example, the 5 × 5 pixel patch normalized TauMean error for the RICO clouds is 0.362 for the nadir-only 3D retrieval but 0.785 for the nadir plane-parallel retrieval. The statistical retrievals are “tuned” for each cloud type, however. When the seven-camera neural network trained on the land cumulus case is applied to the RICO clouds, the normalized TauMean error increases from 0.315 to 0.411, which is still much lower than for the plane-parallel retrievals.

Although this study was based on a variety of high-resolution simulated cloud structures and 3D radiative transfer, it has a number of limitations. The simulations determined the optical depth retrieval errors when the statistics of 3D cloud structure are known, because the same datasets were mostly used for both the training and testing of the neural network retrievals. The three sets of LES cloud fields are likely similar to some real cloud fields on the scale of the pixel patches, but they represent only a small fraction of the boundary layer cloud structures found in nature. In particular, the cumulus cloud fields contained only small cloud elements with diameters under 2 km. Fixed LES cloud scenes were used for the simulation of all MISR cameras, whereas there would be significant temporal evolution in the cloud scenes over the 5 min between the Cf and Ca cameras. The simulated atmospheres did not include aerosol, which could affect cloud optical depth retrieval accuracy in some situations. Although the ocean surface reflectance model is thought to be accurate, there are obviously many other surfaces on earth that would cause higher retrieval errors. Finally, radiometric errors of the MISR instrument were not considered. Unfortunately, eliminating these limitations would likely further diminish the already small improvement in retrieval accuracy gained from using multiple MISR directions. On the other hand, it is possible that the whole concept of using the mean and standard deviation of reflectances over pixel patches is a poor method for introducing multiangular information into optical depth retrievals, and that a radically different approach would provide much better retrieval accuracy from multiangular radiances.

Although much work would have to be done to implement an operational algorithm for retrieving cloud optical depth from MISR multiangular data, the approach of calculating 3D radiative transfer from LES cloud fields could be used. There is a precedent in the Tropical Rainfall Measuring Mission for an operational statistical inversion method with a retrieval database derived from numerical cloud model fields (Kummerow et al. 1996). A reasonable first step toward developing an operational MISR cloud optical depth retrieval algorithm would be to consider only boundary layer clouds over the ocean, as was done in this study. Deep convective clouds have a much larger scale, both vertically and horizontally, which would require using cloud-resolving models and much more expensive Monte Carlo models to simulate MISR reflectances with 3D radiative transfer. Considering all clouds would also require dealing with multilayer clouds and cirrus clouds, with corresponding uncertainty in ice crystal phase functions. The existing stereo cloud height retrieval could easily be used to screen out clouds with top heights above the boundary layer.

The oceanic boundary layer cloud retrieval database would have to include many more LES model cloud simulations than those used here. Realistic variability in cloud microphysics, cloud temporal evolution, oceanic reflectance, and aerosol optical properties should be included in the radiative transfer simulations so that accurate uncertainties could be estimated. The database could include mean and standard deviation of cloud optical depth and MISR reflectances over roughly 2-km patches. The database could have all boundary-layer cloud types mixed together so that a classification step would not be needed. Instead, the statistical retrieval would use inputs of reflectance mean and standard deviation and, perhaps, cloud top height to implicitly choose the correct region of cloud retrieval space. The fixed viewing angles and sun-synchronous orbit of MISR reduce the solar-viewing geometry space considerably. The solar zenith and relative azimuth angles vary, however, with latitude and season, and thus this information would need to be included in the retrieval algorithm. A comprehensive retrieval database could require perhaps a hundred cloud simulations and thousands of 3D radiative transfer calculations, but this is feasible with modern computing power. Taking such an approach would finally bring operational cloud optical property retrievals into alignment with our understanding of cloud physics and radiative transfer.

## Acknowledgments

We are very grateful to Andrew Ackerman for providing the stratocumulus fields in a form tailored for us and for providing comments on the manuscript. We are grateful to Bjorn Stevens for providing his UCLA LES code. We thank Robert Pincus for providing the land cumulus LES scenes. We also thank Bob Cahalan for his support and encouragement. This research was supported by the NASA Radiation Sciences Program under Grants 621-30-86 and 622-42-57.

## REFERENCES


## Footnotes

*Corresponding author address:* K. Franklin Evans, University of Colorado, 311 UCB, Boulder, CO 80309-0311. Email: evans@nit.colorado.edu