## Abstract

The next generation of advanced high-resolution sensors in geostationary orbit will gather detailed information for studying the Earth system. There is an increasing desire to perform observing system simulation experiments (OSSEs) for new sensors during the development phase of the mission in order to better leverage information content from the new and existing sensors. Forward radiative transfer calculations that simulate the observing characteristics of a new instrument are the first step to an OSSE, and they are computationally intensive. The scalar approximation to the radiative transfer equation, a simplification of the vector representation, can save considerable computational cost, but produces errors in top of the atmosphere (TOA) radiance as large as 10% due to neglecting polarization effects. This article presents an artificial neural network technique to correct scalar TOA radiance over both land and ocean surfaces to within 1% of vector-calculated radiance. A neural network was trained on a database of scalar–vector TOA radiance differences at a large range of solar and viewing angles for several thousand realistic atmospheric vertical profiles that were sampled from a high-resolution (7 km) global atmospheric transport model. The profiles include Rayleigh scattering and aerosol scattering and absorption. Training and validation of the neural network were demonstrated for two wavelengths in the ultraviolet–visible (UV-Vis) spectral range (354 and 670 nm). The significant computational savings accrued from using a scalar approximation plus neural network correction approach to simulating TOA radiance will make feasible hyperspectral forward simulations of high-resolution sensors on geostationary satellites, such as Tropospheric Emissions: Monitoring of Pollution (TEMPO), GOES-R, Geostationary Environmental Monitoring Spectrometer (GEMS), and Sentinel-4.

## 1. Introduction

Radiative transfer models (RTMs) are essential tools in a wide range of applications that help us understand atmospheric processes and atmosphere–surface interactions. RTMs are used to 1) design atmospheric indices, 2) perform sensitivity analyses, 3) develop inversion algorithms to retrieve atmospheric composition and surface properties from remotely sensed data, and 4) generate artificial scenes as would be observed by an optical sensor. In the latter case, RTMs are used in instrument simulators that function as a virtual laboratory during the development of remote sensing missions. This virtual laboratory is often referred to as an observing system simulation experiment (OSSE).

Advanced detailed RTMs are computationally intensive, and the next generation of remote sensing instruments will have increasingly high spatial, temporal, and spectral resolutions. This can make using detailed RTMs in instrument simulators unfeasible in a practical sense. To overcome this, often the scalar approximation is used in radiative transfer calculations, which treats the light beam as a scalar, rather than a vector consisting of the four Stokes parameters which describe the state of polarization of the light beam.

Treating light as a scalar can lead to significant error in the calculation of top of the atmosphere (TOA) radiance, because the contributions to the total intensity of the second Stokes parameter *Q* and the element $P_{12}$ of the scattering matrix are neglected (Mishchenko et al. 1994). When scalar radiative transfer (RT) is used to generate lookup tables for aerosol optical depth (AOD) retrievals, this leads to substantial (greater than 0.1) errors in retrieved AOD (Levy et al. 2004).

For light scattered by Rayleigh or small spherical aerosol particles, the contributions of *Q* and $P_{12}$ to the calculation of intensity are greatest for low-order (but not first order) light scattering paths involving right scattering angles and right angles of rotations of the scattering plane. Mishchenko et al. (1994) showed that because Rayleigh scattering is nearly isotropic, for the Rayleigh slab problem the contributions of such light paths can be significant, and differences between the scalar and vector calculations of TOA radiance (i.e., the scalar error) can be as large as 10%. Meanwhile, aerosols scatter strongly in the forward direction, and Hansen (1971) showed that the scalar errors for the aerosol slab problem are small. Thus over land, where polarization by surface reflectance is weak (Herman et al. 1997), the main factor that modulates the scalar error is atmospheric scattering. For realistic atmospheric profiles consisting of both aerosol and Rayleigh particles, the scalar error will depend on the aerosol and Rayleigh optical depths, aerosol optical properties (scattering and absorption efficiency and phase function), and scattering geometry. However, over the ocean, where sea surface glint can be highly polarized, the scalar error will also depend on ocean surface glitter.

Because aerosol optical properties, the Rayleigh extinction cross section, and ocean surface reflectance are wavelength dependent, the scalar error is likewise wavelength dependent with the largest errors occurring in the ultraviolet–visible (UV-Vis) spectral range, where Rayleigh optical depth and ocean surface reflectance are significant. This part of the light spectrum is used to retrieve concentrations of air quality and climate-relevant trace gases as well as aerosol optical properties. In this paper we present a method for correcting TOA radiances calculated with the scalar approximation, and we will focus on forward RT calculations in the UV-Vis, where the scalar error is significant.

The approach utilizes a machine learning algorithm, specifically, an artificial neural network. The neural network is used as a data transformer that maps RTM input parameters to the difference between a scalar- and vector-calculated TOA radiance. Neural networks provide flexibility and excellent accuracy in representing nonlinear processes. It has been shown that a feed-forward neural network can be trained to approximate virtually any smooth function (Hornik et al. 1989), and thus are universal approximators. Unlike other statistical approaches, neural networks do not rely on an assumed underlying model or probability distribution for the data. Neural networks have been used effectively in many remote sensing applications including, but not limited to, creating spectra from RTM input parameters (Rivera et al. 2015), retrieving AOD from surface solar radiation measurements (Huttunen et al. 2016), correcting the bias between MODIS and AERONET AOD (Lary et al. 2009), and calculating airmass factors for ozone total column retrievals (Loyola 2006). The neural network developed in this work will be used in a fast instrument simulator that will produce TOA radiance data for OSSE studies for the next generation of remote sensing instruments in geostationary orbit [e.g., Tropospheric Emissions: Monitoring of Pollution (TEMPO; Zoogman et al. 2017), GOES-R (Schmit et al. 2017), Geostationary Environmental Monitoring Spectrometer (GEMS; Choi et al. 2018), and Sentinel-4 (Gulde et al. 2017)].

In the following, sections 2–4 describe the Vector Linearized Discrete Ordinate Radiative Transfer (VLIDORT) model used in this work, as well as the general methodology of training the neural network with datasets developed from a global high-resolution atmospheric simulation using the Goddard Earth Observing System Model, version 5 (GEOS-5). Section 5 investigates the effect of different combinations of input parameters on the performance of the neural network, as well as validation. This is followed by a summary and conclusions in section 6.

## 2. Description of VLIDORT

VLIDORT is a linearized pseudospherical vector RTM that utilizes the discrete ordinate method to solve the radiative transfer equation for a multilayer atmosphere (Spurr 2006). The model outputs the Stokes vector for arbitrary viewing geometry, surface reflectance, and optical depth. The optical property inputs for VLIDORT are the layer extinction optical depths $\Delta_l$, layer total single scattering albedos $\omega_l$, layer scattering-matrix expansion coefficients $B_l$, and the surface reflectance matrix $\boldsymbol{\rho}$. For layer Rayleigh scattering optical depth $\delta_{\mathrm{Ray},l}$, molecular absorption optical depth $\alpha_{\mathrm{gas},l}$, aerosol extinction optical depth $\tau_{\mathrm{aer},l}$, and aerosol single scattering albedo $\omega_{\mathrm{aer},l}$, the total atmospheric optical inputs are

$$\Delta_l = \delta_{\mathrm{Ray},l} + \alpha_{\mathrm{gas},l} + \tau_{\mathrm{aer},l},$$

$$\omega_l = \frac{\delta_{\mathrm{Ray},l} + \omega_{\mathrm{aer},l}\,\tau_{\mathrm{aer},l}}{\Delta_l},$$

$$B_l = \frac{\delta_{\mathrm{Ray},l}\,B_{\mathrm{Ray}} + \omega_{\mathrm{aer},l}\,\tau_{\mathrm{aer},l}\,B_{\mathrm{aer},l}}{\delta_{\mathrm{Ray},l} + \omega_{\mathrm{aer},l}\,\tau_{\mathrm{aer},l}},$$

where $B_{\mathrm{Ray}}$ and $B_{\mathrm{aer},l}$ are the Rayleigh and aerosol scattering-matrix expansion coefficients, respectively.

VLIDORT was run with six discrete ordinate streams, delta-M scaling, and an exact single scatter correction, the so-called Nakajima–Tanaka procedure, which replaces the truncated single scatter contribution with an exact solution that uses the complete phase function (Spurr 2008). An additional correction was applied to the single scatter term to account for Earth’s curvature effect on the incoming solar beam and the outgoing line of sight (referred to as the “outgoing” sphericity correction in VLIDORT).

## 3. Description of the neural network

An artificial neural network is a machine learning algorithm that generates a mapping $F_{\mathrm{NN}}$ from a set of input variables to one or more output variables. In this work, the neural network aims to map a set of input variables describing the satellite scene $\Theta$ to the scalar radiative transfer error $\epsilon$, which is defined as the difference between the vector-calculated TOA radiance $R_{\mathrm{vector}}$ and the scalar-calculated TOA radiance $R_{\mathrm{scalar}}$ [Eq. (4)]:

$$\epsilon = R_{\mathrm{vector}} - R_{\mathrm{scalar}}. \tag{4}$$

The structure of a neural network is a set of neurons or nodes connected by numerical weights. Each node is a nonlinear transfer function that is applied to the weighted inputs from other nodes, whose result is passed on to the next layer of nodes. The weights are tuned based on a training dataset containing input–output data pairs. In this paper, we used a feed-forward back-propagation neural network consisting of one hidden layer with 20 nodes. The output layer had one node for the scalar error $\epsilon$ and the input layer had several nodes, one for each of the RTM inputs $\Theta$. We solved for the weights using the truncated Newton constrained (TNC) algorithm provided by the Python ffnet module (version 0.8.3; http://ffnet.sourceforge.net), which supports multiprocessor calculations (Wojciechowski 2011, 2012, 2005).
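As a concrete illustration of this structure, the forward pass of such a network can be sketched in a few lines of NumPy. This is a hypothetical sketch with our own function names, not the ffnet implementation used in this work:

```python
import numpy as np

def init_weights(n_inputs, n_hidden=20, seed=0):
    # Random initialization of input->hidden and hidden->output weights
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(n_hidden, n_inputs))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=n_hidden)
    b2 = 0.0
    return W1, b1, W2, b2

def forward(theta, W1, b1, W2, b2):
    # Hidden layer: nonlinear (tanh) transfer function of the weighted inputs
    h = np.tanh(W1 @ theta + b1)
    # Output layer: a single linear node predicting the scalar error epsilon
    return W2 @ h + b2
```

In training, the weights would be tuned by back-propagation against the input–output pairs $(\Theta, \epsilon)$; ffnet does this here with the TNC optimizer.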

The set of input nodes was determined by testing different combinations of the essential input data to the RTM that are associated with the scalar error. The following input parameters were considered: 1) solar zenith angle (SZA), 2) viewing zenith angle (VZA), 3) relative azimuth angle (RAA), 4) scattering angle $[\cos(\text{scattering angle}) = \cos\mathrm{SZA}\,\cos\mathrm{VZA} + \sin\mathrm{SZA}\,\sin\mathrm{VZA}\,\cos\mathrm{RAA}]$, 5) geometric airmass factor $(\mathrm{AMF_g} = \sec\mathrm{SZA} + \sec\mathrm{VZA})$, 6) Rayleigh optical depth (ROD), 7) absorbing AOD (AAOD), 8) scattering AOD (SAOD), and 9) 10-m wind speed. The $\mathrm{AMF_g}$ is a measure of the geometric path of light through the atmosphere from the sun to the sensor; long paths through the atmosphere increase the likelihood of multiple scattering. SAOD and AAOD are defined as the aerosol single scattering albedo multiplied by the AOD, and one minus the aerosol single scattering albedo (or coalbedo) multiplied by the AOD, respectively. These absorption and scattering properties of aerosols were included as possible inputs because they also affect the light path and the degree of linear polarization. The 10-m wind speed characterizes ocean surface roughness, and drives the Cox–Munk ocean glitter kernel used for surface reflectance in our simulations (Mishchenko and Travis 1997).
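The two derived geometric quantities follow directly from the three basic angles; a short sketch (the function name is ours):

```python
import numpy as np

def geometry_inputs(sza_deg, vza_deg, raa_deg):
    """Return cos(scattering angle) and the geometric airmass factor AMF_g."""
    sza, vza, raa = np.radians([sza_deg, vza_deg, raa_deg])
    # cos(scattering angle) = cos(SZA)cos(VZA) + sin(SZA)sin(VZA)cos(RAA)
    cos_scat = np.cos(sza) * np.cos(vza) + np.sin(sza) * np.sin(vza) * np.cos(raa)
    # AMF_g = sec(SZA) + sec(VZA): geometric slant path sun -> surface -> sensor
    amf_g = 1.0 / np.cos(sza) + 1.0 / np.cos(vza)
    return cos_scat, amf_g
```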

In principle, the full atmospheric description including multilayered vertical profiles of atmospheric optical properties could be used as neural network inputs. However, this would inflate the number of input nodes. We found during testing that using the optical property vertical profiles as inputs both significantly slowed down training, and led to overfitting of the training data.

Because of the wavelength dependence of the scalar error, we generated separate neural networks for each wavelength considered by training on a wavelength-specific training dataset. This is adequate for aerosol retrieval applications, as they utilize only a few channels. To simulate a hyperspectral observation, we plan to use principal component analysis methods to greatly reduce the number of channels that would require forward RT calculations (Natraj et al. 2005). In this paper, we present results for TOA radiances simulated at 354 and 670 nm, two important channels for aerosol retrievals, but we found similar performance of the neural network for other channels in the UV-Vis.

Another approach is to develop a single neural network for a range of wavelengths by training with data generated at multiple wavelengths. We found that this approach is less accurate at each individual wavelength than using a wavelength-specific neural network because the scalar error changes by orders of magnitude within the full spectral range of the UV-Vis. Taking this “mixture of experts” approach of training on each wavelength separately allowed us to split up the scalar error space into smaller homogeneous regions, where the inputs and outputs fall within a smaller range. This approach facilitates training, and helps to avoid falling into local minima.

## 4. Development of training dataset from the G5NR

### a. Description of G5NR

The GEOS-5 nature run (G5NR) is a 2-yr (May 2005–May 2007) global nonhydrostatic mesoscale simulation based on the Ganymed version of GEOS, NASA GMAO’s flagship Earth system model (Putman et al. 2014). The G5NR dataset consists of 3D concentrations of 15 aerosol species (dust, sea salt, sulfates, and organic and black carbon), in addition to O_{3}, CO, and CO_{2}, and a full meteorological description of the atmosphere. These fields are provided every 30 min, with a horizontal resolution of 7 km and 72 vertical layers from the surface up to 0.01 hPa. This high-resolution dataset was generated to support OSSEs for weather forecasting and aerosol applications by providing the basis from which to generate simulated observations globally over a realistic dynamic range.

The G5NR simulation was driven by prescribed sea surface temperature and sea ice, daily sources of volcanic and biomass burning aerosols and trace gases, high-resolution anthropogenic aerosol and trace gas emissions, sinks of aerosols and trace gases, and biogenic sources and sinks of CO_{2}. Aerosol processes in GEOS are derived from the Goddard Chemistry, Aerosol, Radiation, and Transport model (GOCART; Chin et al. 2002), with tracer transport running online coupled to the GEOS radiation code. The GOCART module handles the sources, sinks, and chemistry of dust, sulfate, sea salt, black carbon, and organic carbon aerosols. The 15 aerosol tracers in the model consist of five noninteracting size bins each for dust and sea salt, hydrophobic and hydrophilic modes of organic and black carbon, and sulfate aerosol. Details of the physical and optical properties of the 15 aerosol tracers can be found in the online supplemental material.

In GOCART, aerosols are assumed to be external mixtures, and the total layer aerosol optical depth $\tau_{\mathrm{aer},l}$ can be calculated from the sum of individual aerosol species (dust, sea salt, sulfate, black carbon, and organic carbon) optical thicknesses:

$$\tau_{\mathrm{aer},l} = \sum_{s} \beta_{s,l}\, m^{d}_{s,l},$$

where $m^{d}_{s,l}$ is the aerosol dry mass concentration, and $\beta_{s,l}$ is the mass extinction efficiency for species *s* (Chin et al. 2002).
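The external-mixture sum can be sketched as follows; the species list and coefficient values are illustrative placeholders, not GOCART values:

```python
def layer_aod(mass_conc, mass_ext_eff):
    """Total layer aerosol optical depth for an external mixture.

    mass_conc: dict of species -> dry mass concentration (column mass path)
    mass_ext_eff: dict of species -> mass extinction efficiency
    """
    # External mixture: optical thicknesses of the species simply add
    return sum(mass_ext_eff[s] * mass_conc[s] for s in mass_conc)

# Example with made-up numbers for two species:
m = {"dust": 2.0, "sulfate": 0.5}      # dry mass per unit area
beta = {"dust": 0.05, "sulfate": 0.2}  # extinction per unit mass
tau = layer_aod(m, beta)               # 2.0*0.05 + 0.5*0.2 = 0.2
```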

### b. Training dataset for the neural network

To generate the training dataset of input–output pairs from which to tune the neural network weights, we randomly selected from G5NR 436 land profiles and 400 ocean profiles over a region covering North America (20°–50°N latitude, 130°–60°W longitude) at 0000, 0600, 1200, and 1800 UTC on one summer and one winter day: 3488 atmospheric profiles over land and 3200 profiles over ocean in total. The profiles were chosen randomly in an effort to create a training dataset that would encompass a range of atmospheric conditions that would be observed by TEMPO, a UV-visible spectrometer that is planned to be launched into geostationary orbit in 2019. A database of scalar radiative transfer errors $\epsilon$, the training outputs, was created from these profiles by taking the difference between the TOA radiance calculated with the full vector radiative transfer equation and the scalar approximation in VLIDORT, using the G5NR atmospheric profiles to generate the optical inputs for the RTM. Figure 1 shows the probability distributions of 354- and 670-nm AOD, aerosol SSA, and the ratio of ROD to AOD in the ocean and land training datasets.

As the trace gases simulated by G5NR are limited, in this work we omit trace gas absorption effects in the optical inputs, effectively simulating trace gas and water vapor corrected TOA radiances, which is the first step in aerosol optical property retrieval algorithms. Thus, any systematic error associated with gas absorption correction is neglected in our simulations.

The scalar error was calculated for each profile under various sun and satellite geometries (Table 1). Over land, we used various Lambertian surface reflectance values from 0.0 to 1.0, and over ocean we used polarized surface reflectance values derived with the GISS Cox–Munk model with wind speeds ranging from 0.1 to 20 m s^{−1} (Table 1).

The number and range of solar angles, viewing angles, Lambertian surface reflectance, and wind speed values were chosen through iteration, with the objective of 1) creating the smallest database (for computational efficiency) of training data that would not result in undertraining the neural network, and 2) including all possible viewing geometries and surface boundary conditions. On a subset of the profiles, we decreased the size of the increments between angles, surface reflectance values, or wind speed until a neural network trained on a larger database yielded the same performance as when trained on the smaller database. SZAs and VZAs were limited to less than ~80°, because aerosol and trace gas retrieval algorithms typically consider only such scenes.

The magnitude and nature of the errors inherent in the scalar approximation are illustrated in Fig. 2, which depicts the probability distribution of the scalar errors in the training dataset that is used to develop our neural network correction. Errors are in the range of roughly ±8% at 354 nm over both land and ocean, and are much reduced to ±2% and ±6% at 670 nm over land and ocean, respectively, illustrating how the scalar approximation error strongly decreases with increasing wavelength.

## 5. Training and validation of the neural network

### a. Input parameter selection and training

Different combinations of input parameters were tested to determine the optimal configuration for the neural network. In this application, an effective neural network should produce a correction factor from the input parameters that when added to the scalar-simulated radiance gives an improved estimate of the vector-simulated radiance. The performance of the neural network was determined by training with cross validation using *k*-folding, where the training dataset is split into *k* subsets. The neural network is trained on *k* − 1 sets of data, and tested on the remaining part of the data from which we evaluate the performance of the neural network. The procedure is repeated until all *k* sets have been used for training and testing.
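The *k*-fold splitting can be sketched as follows (a generic illustration, not the actual training code):

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Yield (train_idx, test_idx) pairs; each sample is held out exactly once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)      # shuffle before splitting
    folds = np.array_split(idx, k)        # k roughly equal subsets
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx         # train on k-1 folds, test on the rest
```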

Figure 3 shows the cross-validation results of five *k*-foldings for the neural network trained on 354-nm land data that used combinations of SZA, VZA, RAA, scattering angle, and $\mathrm{AMF_g}$. The cosines of scattering angle, VZA, SZA, and RAA were used to facilitate normalization of the inputs (handled internally by the ffnet module), which speeds up convergence during training.

To compare the performance of the various neural networks, for the *n* test data points we calculated the RMSE of the scalar error [Eq. (6)], and compared it to the RMSE of the residual error that remains after adding the correction predicted by the trained neural network to the scalar radiance:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\epsilon_i^{2}}, \qquad \mathrm{RMSE_{NN}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[\epsilon_i - F_{\mathrm{NN}}(\Theta_i)\right]^{2}}. \tag{6}$$

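Numerically, the comparison amounts to computing the RMSE of the raw scalar errors and of the residuals after subtracting the neural network prediction; a minimal illustration (the arrays are made-up examples):

```python
import numpy as np

def rmse(x):
    return np.sqrt(np.mean(np.square(x)))

# eps: scalar-vs-vector radiance errors on the n test points
# eps_nn: neural network predictions of those errors
eps = np.array([0.03, -0.05, 0.02])
eps_nn = np.array([0.025, -0.04, 0.015])

before = rmse(eps)           # RMSE of the uncorrected scalar error
after = rmse(eps - eps_nn)   # residual RMSE once the NN correction is applied
```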
Figure 3 shows that the RMSE of the scalar error at 354 nm over land can be reduced by more than half by training the neural network solely with the scattering angle, indicative of the strong dependence of the scalar error on scattering angle. Using additional geometric input variables reduces the RMSE further. Including all three angles (or combinations thereof) gives the best performance, reducing the RMSE by ~90%. Similar results were found for the 354-nm ocean data, and the 670-nm land and ocean data (not shown).

As additional geometric input variables are redundant, there is no change in RMSE when using four or more geometric parameters. The low variance in the RMSEs computed for the five *k* folds shown in Fig. 3 indicates that the performance of the neural network does not depend on a particular choice of training and testing subsets.

Figure 4 shows the probability distribution of the scalar error at 354 and 670 nm from one *k*-folding test before and after correcting the scalar radiance with the output from the neural network. The neural network was trained with the cosines of scattering angle, VZA, and SZA. When only these geometrical inputs are used, the neural network reduces the range of the scalar errors to less than ±2% at 354 nm and ±0.25% at 670 nm over land, and ±2% at both 354 and 670 nm over ocean.

In addition to the scene geometry, we tested combinations of ROD, AAOD, and SAOD as input parameters to the neural network. Over ocean, the wind speed was also tested as an input. Figure 5 shows that over land, as expected, the ROD is a significant driver of the 354-nm scalar error, while the addition of SAOD or AAOD alone to the geometric inputs has no effect on the RMSE. However, providing both Rayleigh and aerosol optical depth information to the neural network significantly further reduces the RMSE. The combined information of ROD, AAOD, and SAOD in addition to the geometric inputs provides the best prediction of the scalar error, reducing the RMSE by ~96%. However, only one aerosol predictor, either AAOD or SAOD, is sufficient. Similar results were found for tests on the 670-nm land data (not shown). The testing data show that the neural networks trained with viewing geometry, ROD, AAOD, and SAOD produce scalar errors within ±0.75% at 354 nm and ±0.06% at 670 nm (Fig. 8).

In contrast, over ocean the scalar error is less sensitive to ROD, and AAOD and SAOD are more significant single predictors (Figs. 6 and 7). At 354 nm the addition of ROD as an input still significantly reduces the RMSE (Fig. 6), while at 670 nm the addition of ROD has no effect on the RMSE (Fig. 7), reflecting the differences between the two channels in the magnitude of the ROD and the contribution of Rayleigh scattering. Also, at 670 nm the wind speed is the most significant predictor of the scalar error, while at 354 nm the aerosol terms show the largest decrease in RMSE, indicating the larger impact of atmospheric scattering on the light path in the UV versus the visible spectral range. At 354 nm over ocean, the combination of inputs that provides the best prediction of the scalar error is the wind speed, ROD, AAOD, and SAOD in addition to the geometric inputs, while at 670 nm the ROD can be excluded. The testing data show these neural networks produce scalar errors within ±0.8% at both 354 and 670 nm (Fig. 8). Overall, in both the UV and visible over both land and ocean, the residual errors after the neural network correction are smaller than the uncertainty in measured radiances in the UV-Vis from instruments such as the Ozone Monitoring Instrument (OMI; Liu et al. 2010) or MODIS (Esposito et al. 2004).

A final test was made using surface reflectance as an input parameter to the over-land neural networks (not shown), and it was found not to be a predictor of the scalar error. This is in contrast to previous studies that have reported that increased surface reflectance significantly decreases the scalar error because brighter surfaces increase atmospheric scattering (Mishchenko et al. 1994; Lacis et al. 1998). In Fig. 9 we illustrate why surface reflectance is not a strong predictor for the scalar error in the neural network. The figure compares the TOA radiance, scalar error, and relative scalar error for a profile with ROD of 0.45 and AOD of 0.01 computed with SZA equal to 35°, and surface reflectance set to both 0 and 1.

Figures 9c and 9d show that the variability in the scalar error with respect to viewing geometry for the two surface reflectance values is almost identical. Figure 9g shows the difference in the scalar error for the two surface reflectance values. It is apparent that the effect of changing the surface reflectance is relatively small, two orders of magnitude less than the scalar error itself. Figures 9e and 9f show that the *relative* scalar error significantly decreases with an increase in surface reflectance, which is what previous studies have reported. This is driven by the increase in TOA radiance with increase in surface reflectance, not the vector–scalar difference.

### b. Validation of the neural network

Based on the results of the training and testing iterations, we used 6–7 input nodes for the final neural networks that will predict the scalar error correction factors. Over land, the inputs were 1) cos(scattering angle), 2) cos(SZA), 3) cos(VZA), 4) ROD, 5) AAOD, and 6) SAOD. Over ocean at 354 nm, wind speed was added as the seventh input. Over ocean at 670 nm, wind speed was also used, but ROD was excluded for six total inputs. A validation of the final neural network is shown in Fig. 10.

Scalar and vector TOA radiances were generated from G5NR data for an arbitrarily selected day in the spring, 11 April 2007. The model fields were regridded onto the TEMPO field of view, and the TOA radiances were calculated using the solar and satellite positions for that day. For the 354-nm calculation, the land surface reflectance values came from a 0.25° climatology of Lambertian equivalent reflectance (LER) based on TOMS observations (H. Jethva 2015, personal communication). At 670 nm, a MODIS bidirectional reflectance distribution function (BRDF) retrieval product was used: the Multiangle Implementation of Atmospheric Correction (MAIAC) product (Lyapustin et al. 2011a,b), which provides 8-day Ross–Thick Li–Sparse (RTLS) model parameters at 1-km resolution. For both wavelengths, surface reflectance from the GISS Cox–Munk model was used for pixels containing water. A full day of the TEMPO swath was simulated, and the neural network corrections were predicted with the input parameters for this day.

The correction factors were computed using the final neural network trained on the entire training dataset from section 4b (i.e., no training data were left out for testing). The correction factors were added to the scalar TOA radiances and compared to the vector radiances. This exercise tests whether the neural network is robust enough to derive corrections for realistic data not included in the training dataset. Figure 10 shows that the scalar approximation with the neural network correction was able to simulate TOA radiances at 354 nm to within 1% error over land and 2% error over ocean, while at 670 nm the errors are within 0.1% over land and 0.8% over ocean. The neural network calculation adds negligible computation time to the simulation, while the scalar calculation is a factor of ~6 faster than the vector calculation.

## 6. Conclusions

It is well known that the scalar approximation in radiative transfer can lead to significant errors in forward calculations of TOA radiance, particularly in the UV-Vis. OSSEs for new high-resolution instruments, such as the geostationary constellation made up of TEMPO, GEMS, and Sentinel-4, are useful for demonstrating the impact of proposed new missions. The radiative transfer calculations required for these OSSEs are computationally intensive because of the high spatial, temporal, and spectral resolutions of the new instruments. Many forward RT calculations are required to simulate the hyperspectral observations. Established principal component analysis (PCA) methods (Natraj et al. 2005) greatly reduce the number of forward RT calculations needed to simulate a spectrum by taking advantage of the similarities in optical properties within a small spectral interval. However, the spatial and temporal resolutions of the new geostationary missions offset the computational gains of using the PCA method. Thus, to make these experiments feasible, we have developed a statistical technique using a neural network that corrects the fast scalar RT approximation over both land and ocean surfaces.

We have demonstrated that with just a few basic input parameters, a neural network can represent the complex nonlinear relationships between the errors in the scalar approximation and solar-sensor geometry, surface reflectance, and atmospheric composition. Our validation results indicate that the neural network is able to correct the scalar radiance calculation to within 1% at 354 nm and 0.1% at 670 nm of the full vector radiance calculation over land and to within 2% at 354 nm and 0.8% at 670 nm over ocean.

There are other factors that can affect polarization and therefore the scalar error, such as aerosol vertical distribution or aerosol size, that were not utilized as inputs in the neural network. We attempted to include aerosol size information in our training by including speciated AOD as inputs to our neural network, but the improvements in the validation were too small to be significant (not shown). Meanwhile, including optical property vertical profiles as inputs led to overfitting of the training data. Future developments of the neural network should utilize more sophisticated deep learning techniques in order to fully resolve all of the relevant physical factors associated with neglecting polarization in the scalar RT calculation.

There are several limitations to our neural network correction algorithm. First, as aerosol properties are not input parameters to the network, the algorithm is limited to correcting forward scalar RT calculations for mixtures of the seven aerosol types described in our model. For other applications, such as for a retrieval, one would have to develop a training dataset that uses the aerosol types (i.e., aerosol size and composition) assumed by that application. Second, the neural network technique is only strictly valid for inputs within the range of the training dataset. Although we have considered a wide range of solar-sensor geometries and atmospheric composition, there will remain atmospheric conditions, such as very polluted conditions, or large biomass burning and dust events, that fall outside the range of our training dataset. This can be remedied by expanding the training dataset, at the expense of computational training time, or by creating specific neural networks from datasets of these types of events. The computational savings from using a scalar approximation with a neural network approach to radiative transfer in the UV-Vis will allow us to conduct comprehensive OSSEs that can include, for example, synergies between different observing strategies of several instruments.

## Acknowledgments

This investigation has been funded by the NASA GEO-CAPE preformulation study.


## Footnotes

Supplemental information related to this paper is available at the Journals Online website: https://doi.org/10.1175/JTECH-D-18-0003.s1.
