1. Introduction
Radiative transfer modeling is crucial in determining the radiation budget for weather/climate prediction models and for remote sensing of the earth and atmosphere. Demands for accurate radiation modeling have accompanied recent advances in finescale atmospheric modeling to resolve clouds, and in satellite measurements by sensors with improved spatial and spectral resolution. Monte Carlo photon transport algorithms are often employed to simulate realistic radiation processes, such as three-dimensional (3D) radiative transfer (e.g., Barker et al. 1998; O’Hirok and Gautier 1998; Macke et al. 1999). Numerous studies have revealed that 3D radiative transport is particularly important when examining radiation processes at a cloud-resolving scale (Várnai and Davies 1999; Fu et al. 2000). Adequate treatments of 3D radiative effects are important for pixel-scale satellite retrieval of cloud properties such as optical thickness and effective droplet radius; radiative effects can also impact global-scale climatology (Iwabuchi and Hayasaka 2003; Cornet et al. 2005). The Monte Carlo model is useful for calculating path-length statistics when retrieving amounts of gaseous species using differential optical absorption spectroscopy (DOAS) techniques (e.g., Hönninger et al. 2004).
The accuracy of the Monte Carlo model is widely recognized, owing both to its physically rigorous basis and to extensive testing. Thus, the model has frequently been used to validate other models (e.g., Evans 1998). Although the Monte Carlo method is subject to random noise, simulation accuracy has improved in recent decades with increasing computational power, a trend that will likely continue. Nevertheless, practical applications of the Monte Carlo model often encounter difficulties with current computational power, especially when calculating radiances, which are essential for inverse modeling of radiative transfer.
A common question regarding the Monte Carlo model concerns the local estimation method used for radiance calculations. The method requires computationally intensive ray tracing at each collision. The use of realistic, strongly peaked phase functions of Mie scattering by cloud and aerosol particles poses another difficulty. Phase functions are used to sample radiance contributions from each scattering event, and the computed radiance can be contaminated by significant noise because the sharp peaks are sampled only infrequently. This poor-sampling problem was well addressed by Barker et al. (2003), who proposed an approximation that uses the Henyey–Greenstein phase function for multiply scattered light. The approximation can reduce the noise caused by strong spikes in the original phase function, although the resulting biases are not negligible, especially for optically thin cases. Barker et al. (2003) also proposed another method, which truncates the spiky radiance contribution and then redistributes the excess over the whole radiance image. The technique seems to work well when the number of incident photon packets is not very large (∼5 × 10⁴ per unit area). However, it is questionable how much the efficiency can be improved and whether redistribution over the whole domain is generally adequate regardless of the number of incident photons and the domain size. Furthermore, the issue of radiance noise for an optically thin atmosphere must still be addressed. Studies of variance (noise) reduction techniques have been rare in atmospheric physics. It is thus meaningful to discuss strategies to improve numerical efficiency.
The purpose of this study was to address problems specific to Monte Carlo atmospheric radiative transfer models and to develop efficient algorithms for variance reduction. The nominal target accuracy was about 1% or less for radiance at the unit-area (pixel) scale. Recent optical instruments for remote radiance measurements are highly accurate, so theoretical simulations should be comparably accurate or more accurate than the observations. This paper mainly discusses calculations of solar radiance, although the methods and techniques presented are also applicable to calculations of fluxes and heating rates. The paper is organized as follows: section 2 outlines some basic methods implemented in standard Monte Carlo models; section 3 proposes several techniques for variance reduction; section 4 demonstrates the performance of the proposed techniques through numerical experiments; finally, a summary and conclusions are presented in section 5.
2. Monte Carlo radiative transfer model
Several variants of the classic Monte Carlo algorithms for radiative transfer modeling have been developed (e.g., Marchuk et al. 1980; Evans and Marshak 2005). A review of the many possible algorithms is beyond the scope of this paper; instead, a model that employs several standard methods is described in this section. During this study, a parallelized Monte Carlo model was developed for multiple purposes. The model is based on the forward-propagating photon-transport algorithm and is designed to trace the trajectories of photon packets from radiation sources (solar light, thermal emission, artificial lamps, laser beams, or a mixture thereof) to their termination by absorption or escape from the top of the atmosphere. Note that some of the methods and techniques used are similar to those in backward-propagating models. A Cartesian coordinate system and a cyclic boundary condition are employed. Figure 1 shows examples of model output: camera image–like, hemispherical plots of radiance viewed under cloudy-sky conditions from a point on the surface. The cloud fields were taken from large-eddy simulations. The radiation model simulated realistic 3D radiative effects, such as shadowing and cloud-side illumination, even for complex geometry. Basic components of the model are described below.
a. Fundamentals of the model
b. The local estimation method
3. Variance reduction techniques
a. Modification of the local estimate
The local estimation method requires computationally intensive ray tracing from each collision point to the sensor for the optical thickness [τ(rn, rυ)] integration [see (10)]. Barker et al. (2003) discussed treatments of small ζn, which can result from small Ψn and/or large τ(rn, rυ), and noted that for solar radiance reflected from clouds the population of small ζn is large while its contribution to the total radiance is small. Spending ray-tracing time on such small contributions is therefore inefficient. Barker et al. (2003) proposed a method that excludes the contribution of small ζn simply by terminating ray tracing when ζn is small. They found that computing time could be reduced by about 10% with negligible total bias if the cutoff threshold was 0.001, in a limited number of cloudy cases. The method is clearly biased if the threshold is too high, however, so the threshold cannot be set a priori.
A better acceleration method, free of bias, uses Monte Carlo techniques to sample ζn. In this method, each sampled ζn is modified, using random numbers, either to zero or to a value larger than a prescribed threshold ζmin. The modification is based on concepts similar to those of the Russian roulette method and the random sampling of optical thickness as in (2). Two cases are considered here.
With the above modifications, the cost of ray tracing is reduced when ζmin is set large. It is easy to show that this method is perfectly unbiased (energy conservative) for any ζmin. As a result, we can set a large ζmin and reduce the computation time. However, if ζmin is extremely large (≫1), nonzero ζ′n will be sampled too rarely, increasing the variance of ζ′n. Consequently, the optimal value of ζmin should be determined by tests.
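The Russian-roulette-style thresholding described above can be sketched as follows. This is a minimal illustration, assuming a scalar contribution ζ and a user-chosen threshold ζmin; the function and variable names are ours, not the model's:

```python
import random

def modified_local_estimate(zeta, zeta_min):
    """Illustrative sketch of the unbiased modification of a
    local-estimate contribution (not the paper's exact code).

    Contributions below zeta_min are promoted to zeta_min with
    probability zeta/zeta_min and otherwise set to zero, so the
    expectation is preserved and ray tracing can be skipped for
    the zero-valued samples.
    """
    if zeta >= zeta_min:
        return zeta                    # large contributions kept as-is
    if random.random() < zeta / zeta_min:
        return zeta_min                # rare survivor, up-weighted
    return 0.0                         # discarded: no ray tracing needed
```

Because E[ζ′] = (ζ/ζmin) · ζmin = ζ, the scheme is energy conservative for any ζmin, consistent with the claim above; only the variance depends on the threshold.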
b. Dual-end truncation approximation for sharply peaked phase functions
Another difficulty in radiance computation relates to large ζ, which often occurs when there are sharp diffraction peaks in the Mie phase function, as mentioned in section 1. Peaks in the forward direction can be larger than the minimum phase function by a factor of a million or more for large water droplets at short wavelengths. The radiance contribution ψn from a scattering event can be very noisy due to infrequent sampling of the peaks. This problem can be partly resolved if the peaks are truncated and the phase function is transformed to a smoother function (i.e., truncation approximation).
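The general idea of truncation, flattening the forward peak and re-normalizing what remains into a delta fraction plus a smoother phase function, can be sketched as below. This is a generic single-end truncation for illustration only; the paper's dual-end formulation and its parameters are defined by equations not reproduced here, and the grid, names, and the Henyey–Greenstein stand-in for a Mie phase function are our assumptions:

```python
import numpy as np

def integrate(p, mu):
    """Trapezoidal integral of p(mu) over the mu grid."""
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(mu)))

def truncate_forward_peak(mu, p, mu_cut):
    """Cap the phase function for cos(theta) > mu_cut, returning the
    removed 'delta fraction' f_delta and the re-normalized truncated
    phase function (normalized so 0.5 * integral over mu equals 1)."""
    p_cut = np.interp(mu_cut, mu, p)       # phase function at the cut angle
    p_trunc = np.minimum(p, p_cut)         # flatten the forward peak
    f_delta = 0.5 * integrate(p - p_trunc, mu)
    return f_delta, p_trunc / (1.0 - f_delta)

# Henyey-Greenstein function as a smooth stand-in for a Mie peak
mu = np.linspace(-1.0, 1.0, 200001)
g = 0.85
p_hg = (1 - g**2) / (1 + g**2 - 2 * g * mu) ** 1.5
f_delta, p_trunc = truncate_forward_peak(mu, p_hg, 0.9)
```

Energy conservation is maintained because the truncated part reappears as a delta-function (unscattered) component of weight f_delta, the linear mixture described in section 5.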
c. Collision-forcing method for optically thin media
The algorithm flow of the Monte Carlo model should be slightly modified by introducing the collision-forcing method. At each collision, the weight of a photon packet is multiplied by
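The weight factor itself is given by an equation not reproduced here. As a loose illustration of forcing collisions in thin media, the following sketch uses the null-collision (delta-tracking) approach, which differs in detail from the similarity-relation formulation of this section; all names are ours:

```python
import math
import random

def forced_step(beta, beta_forced):
    """One free-path step with an artificially enlarged extinction
    coefficient beta_forced >= beta (null-collision sketch, not the
    paper's similarity-relation scheme). Returns the sampled path
    length and whether the collision is physically real; null events
    leave direction and weight unchanged, so the result is unbiased."""
    # Enlarged extinction shortens free paths, so collisions (sampling
    # opportunities) occur even in optically thin regions.
    s = -math.log(1.0 - random.random()) / beta_forced
    is_real = random.random() < beta / beta_forced
    return s, is_real
```

Distances to real collisions keep their physical distribution (mean free path 1/β) while collision events occur more often, which is the point of forcing collisions in optically thin regions.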
d. Numerical diffusion
In the usual Monte Carlo model, each sampling (integration) of a radiative quantity is performed at a localized point. Sampling over a region (area or volume) around the point can have a denoising (smoothing) effect on the spatial distribution of the computed quantity. Thus, artificial numerical diffusion can be used to reduce the noise caused by an insufficient number of photon packets. A simple method is introduced here. In each integration process, the radiative energy is virtually distributed over a horizontal, rectangular area around the location of the photon packet, and the energy fractions are integrated in the respective subareas, while the actual trajectories of the photon packets are the same as without the diffusion. Since the numerical diffusion is applied only in the horizontal directions, the horizontal domain averages of radiative quantities are unaltered.
The method presented here is an approximation. There is no reason to expect a unique, universal method for numerical diffusion. More complicated methods could be employed, such as accounting for exponential profiles in all directions when redistributing the radiative energy. However, such methods may increase the computational burden, as there is a tradeoff between accuracy and computer time. Thus, a rather simple, rapid computation scheme is employed in the present study. Note that systematic errors may become large if the coefficients in (35) (hereafter, the diffusion coefficients) are too large, although the noise would decrease. Optimal values for the coefficients are not known theoretically and should be determined from numerical experiments.
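A minimal version of the horizontal redistribution can be sketched as follows. Grid indexing, cyclic boundaries, and a uniform square kernel are our simplifying assumptions; the adaptive kernel size of (35) is not reproduced:

```python
import numpy as np

def deposit_with_diffusion(image, ix, iy, energy, half_width):
    """Deposit `energy` not into the single pixel (ix, iy) but spread
    uniformly over a (2*half_width + 1)^2 block of pixels around it,
    with cyclic boundaries as in the model. The photon trajectory is
    untouched, and the domain total is exactly conserved."""
    nx, ny = image.shape
    n = 2 * half_width + 1
    share = energy / (n * n)
    for di in range(-half_width, half_width + 1):
        for dj in range(-half_width, half_width + 1):
            image[(ix + di) % nx, (iy + dj) % ny] += share
```

Because the spreading is purely horizontal and conservative, horizontal domain averages are unaltered, matching the statement above.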
4. Evaluations
a. Description of numerical experiments
It would be difficult to find the best combination of the numerous tuning parameters used in the variance reduction techniques. The purpose of this section is instead to clarify the effectiveness and characteristics of the individual techniques. Table 2 summarizes an example (scheme V) that combined the variance reduction techniques, with parameters tuned by preliminary tests. For convenience, we refer to the modified local estimation as MLE, the dual-end truncation approximation as DTA, the collision forcing as CF, and the numerical diffusion as ND. To evaluate the performance of the various methods, CPU times were measured on a single-CPU personal computer. The accuracy (error) of the computed quantities was estimated against benchmarks that used a much larger number of photon packets; the benchmarks were obtained from parallel computations on a large-scale scalar computer without the biasing techniques (e.g., DTA and ND). Throughout this paper, computed radiances are normalized as πI/(F0 cos θ0), where I is radiance, F0 is the extraterrestrial solar irradiance, and θ0 is the solar zenith angle.
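The normalization in the last sentence can be written out directly (a one-line convenience with a hypothetical function name; θ0 is taken in degrees):

```python
import math

def normalized_radiance(I, F0, theta0_deg):
    """Normalized radiance pi * I / (F0 * cos(theta0)), as defined in
    the text: I is radiance, F0 the extraterrestrial solar irradiance,
    and theta0 the solar zenith angle in degrees."""
    return math.pi * I / (F0 * math.cos(math.radians(theta0_deg)))
```

With this convention, a Lambertian scene that reflects all incident sunlight has a normalized radiance of 1.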
b. Results
1) Biases due to the truncation approximation
First, mean bias errors due to DTA were checked, because DTA is the only technique presented in this paper that can introduce a domain-average bias error. Visible radiances were computed for various Fmax values [see (26)] for single-layer, plane-parallel cloud cases with optical thicknesses of 1, 5, and 25. Figure 4 shows the bias errors of the radiances computed by DTA, with χmax = 0.9 and χmin = 0.4, for various cloud optical thicknesses (τ) and view zenith angles. A large number of photon packets was used for this experiment (e.g., 2 × 10⁹ for τ = 1) so that the Monte Carlo noise was almost negligible (∼0.1% or less). When Fmax is large, the delta fraction is large [see (26)]; larger bias would therefore be expected for larger Fmax. The results show that the bias errors were quite small (<0.3%) when Fmax ≤ 0.8, except in the solar aureole region (within 10° of the solar direction) and the opposite reflection directions for τ < 25. Relatively accurate radiances were obtained in the optically thin case (τ ∼ 1) because the delta fraction was null for the direct beam and small for low-order scattering. For optically thick cases (τ ≥ 25), the truncation approximation worked well, with very small biases even in the solar aureole. In general, if 1% accuracy is required for radiance computation, the delta fraction is too large when Fmax = 1, which introduced relatively large biases in the solar aureole region and at large view zenith angles (>70°). It is therefore recommended to set Fmax ≤ 0.8.
Even when Fmax = 0.8, the bias in the solar aureole region was not negligible for a moderate optical thickness of about 5; the maximum error was approximately +8% in the solar direction. This large positive bias implies excessive transmission of first- (or low-) order scattered light: such light is strongly scattered in the forward direction and is transmitted too readily because of the reduction of the extinction coefficient in (17a) applied to higher-order scattering. In the circumsolar region, negative biases (about −3%) appeared, an artifact of truncating the forward part of the phase function. In practice, accurate aureole radiances can be computed by using small Fmax and/or small χmax and χmin, so that the delta fraction is relatively small. In fact, the solar aureole is bright precisely because the forward peaks are sampled frequently there, whereas the truncation approximation was proposed to remove noise from rare sampling of the peaks. There is thus no compelling reason to use truncation approximations for solar aureole simulation.
2) Performance comparisons: Case 1
From the viewpoint of numerical efficiency, both the accuracy and the CPU time of different schemes should be compared. Figure 5 shows the effects of DTA and MLE on the root-mean-square (rms) relative error ε (%), the CPU time T, and the efficiency factor, defined as 1/(ε²T). Nadir-reflected visible radiances were tested for inhomogeneous clouds (case 1). The number of incident photon packets used per column was Ncol = 1000 for each simulation. The rms error can be regarded as Monte Carlo noise because the MLE method is unbiased, and DTA added negligible bias (<1%) compared to the rms error (>8%) in this experiment. It is clear from the figure that DTA with larger Fmax had two effects: both the noise and the CPU time were reduced, the latter owing to the decreased collision frequency from the scaling of extinction coefficients in (17a). When Fmax = 1 with ζmin < 0.1, the rms error was smaller than for Fmax = 0 (no DTA) by a factor of about 7, and the numerical efficiency was about 100 times higher than that for Fmax = 0. This significant improvement indicates the very high ability of DTA to reduce Monte Carlo noise for a realistically peaked phase function. In the MLE method, CPU time could also be reduced by using a larger ζmin, as expected. Note that the usual (unmodified) local estimation method performed almost the same as ζmin = 10⁻⁷. For ζmin > 1, no further significant time reduction was found. The rms errors, meanwhile, were almost constant for ζmin < 1. This is not an obvious result, because a larger ζmin reduces the sampling frequency, so increased variance might be expected. Two explanations are suggested: the contribution of small ζ to the total radiance is not very large (Barker et al. 2003), and under MLE the sampled ζ is always larger than ζmin [see (13) and (14)], which partially reduces the variance. When ζmin > 1, significant increases in rms error were found, as expected, because of poor sampling. Therefore, we used an optimal setting, ζmin = 0.3, in subsequent experiments. With ζmin = 0.3, CPU time was reduced by approximately 50% without a significant increase in rms error.
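The efficiency factor used in this comparison can be computed as below (the function name is ours). Since Monte Carlo variance scales as 1/N and cost as N, 1/(ε²T) is invariant for a fixed scheme and measures the scheme itself:

```python
def efficiency_factor(rms_error, cpu_time):
    """Efficiency factor 1/(eps^2 * T) as defined in the text.
    Higher is better: halving the rms error at fixed CPU time
    quadruples the efficiency."""
    return 1.0 / (rms_error**2 * cpu_time)
```

For example, reducing the rms error by a factor of 7 while, say, also halving the CPU time improves the efficiency by a factor of 7² × 2 = 98, of the same order as the roughly 100-fold gain quoted above.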
Figure 6 shows the nadir-reflected radiances and corresponding errors for Ncol = 1000. The standard scheme S used no variance reduction technique except the MLE method (ζmin = 0.3). The impact of applying the truncation approximation was clear: the DTA technique reduced the noise imperfectly but significantly. No systematic dependence of the error on radiance brightness was found, suggesting a small biasing effect of DTA. Residual noise in scheme T with Fmax = 0.8 was further smoothed in scheme V, mainly because ND relaxed spiky noise; the error reduction from scheme T to scheme V was mainly due to ND. As shown in Fig. 6, the error in schemes S and T does not resemble white noise: smoothing is needed in some locations but not in others, and after the Monte Carlo integration it would be difficult to know where. With ND, smoothing is performed within each sampling process, with adaptive diffusion widths [see (35)].
The numerical efficiency should be tested for various numbers of photon packets; a method that works well when low accuracy is required may not be effective when high accuracy is needed. Figure 7 shows rms errors plotted against CPU time for Ncol = 10³, 10⁴, and 10⁵. The square of the rms error is theoretically proportional to 1/Ncol for the standard scheme S. Scheme T with Fmax = 1 showed good performance when Ncol was as small as 10³, but performed poorly for larger Ncol; this reflects contamination by relatively large biases (∼1%) when Fmax = 1. To achieve 1% accuracy, Fmax = 0.8 should be used; ND then further improved the performance. These results suggest that the dependence of the ND length on the number of photon packets [as in (35)] is reasonable, reducing the noise in radiances independent of Ncol. The use of ND was found to be efficient especially for small Ncol, a reasonable result because the area for ND decreases with increasing Ncol and the denoising artifacts become less pronounced. With scheme V, the rms error was reduced by a factor of approximately 9 at the visible wavelength for Ncol = 10⁵, even though the CPU times of schemes S and V were almost the same. The increased CPU time of scheme V compared to scheme T with Fmax = 0.8 was mainly due to the use of the CF method with τmin = 10. Thus, scheme V successfully improved the efficiency (equivalent computer-time reduction) by a factor of approximately 80 for a fixed accuracy of 0.8% at the visible wavelength. This improvement was more notable at visible than at near-infrared wavelengths because the forward peaks of the phase function are sharper. For both wavelengths, accuracy of approximately 5% can be achieved with Ncol = 10³, and approximately 1% can be expected with Ncol = 10⁵.
By using large diffusion coefficients [as in (35)], noise becomes small, but the spatial distribution of computed radiance could be unrealistically smooth due to the smoothing artifact. Table 3 presents the rms error in scheme V with different diffusion coefficients. This error does not differ largely when the diffusion coefficients are changed by a factor of 2. Although the diffusion coefficients used in this study would work well for cases similar to this study, sensitivity studies are recommended to determine optimal values for coefficients in completely different cases.
3) Effect of the collision-forcing method: Case 2
To demonstrate the impact of the CF method, an optically thin case (case 2 in Table 1) was selected. In case 2, the domain averages of the total (cloud + aerosol + molecules) column optical thickness were approximately 1.2 at 0.67 μm and 1.1 at 2.13 μm. The surface was set as black, as in Table 1. In cloudless regions, the scattering media were so thin that reflected radiance was very low. Without the CF method, such radiances would be poorly sampled, exhibiting large relative error; similar situations are easily encountered in practical applications, in transmission as well as in reflection. Figure 8 shows how the τmin used in the CF method affected numerical performance. For τmin = 0, no CF was used. As expected, the rms error was reduced by using CF with a large τmin, owing to the increased sampling (collision) frequency. The error (noise) reduction was significant already at relatively small τmin, while CPU time increased linearly with τmin. As a result, the numerical efficiency factor was maximized at τmin ∼ 5. For larger τmin, the efficiency could decrease with increasing τmin (as at 0.67 μm); in that regime it is more efficient to use a larger number of photon packets than to force too many collisions. The merit of the CF method was more notable at 2.13 μm: cloudless regions are optically very thin at this wavelength, leading to larger rms error and lower efficiency than at 0.67 μm. Although collisions were forced with a constant fe over the entire domain, the efficiency could be further improved if the CF method were used only where needed, with varying fe. For example, if CF were used only for light with n = 0 (direct beams), the first scattering would be forced to occur frequently. These results suggest that the CF method is useful for sampling radiances scattered from optically thin regions.
5. Summary and conclusions
Several variance reduction techniques have been proposed for the Monte Carlo radiative transfer model, and their efficiency has been demonstrated for cloudy cases. All of the techniques are energy conservative and usually introduce very small biases. One technique is an unbiased modification of local estimates for radiance calculations, introduced to reduce the computational burden of sampling many small contributions from each scattering event, especially for highly anisotropic phase functions. Following standard Monte Carlo practice, the sampling frequency can be reduced without biasing the average result and without a significant increase in random noise. Another method is a truncation approximation well suited to the Monte Carlo model. The approximation transforms a sharply peaked scattering phase function into a linear mixture of Dirac's delta function and a truncated phase function that is smoother than the original. With this method, a significant reduction can be expected in the noise due to poor sampling of spiky peaks. The truncated peak fraction was set to increase with the diffusivity of the photon packets. The method resulted in very small biases (<0.3%), except in the solar aureole region for moderate cloud optical thickness (between 1 and 20). Numerical efficiency can be dramatically improved for solar radiances using this approximation, which reduces both computation time and Monte Carlo noise; an efficiency improvement factor of approximately 80 was demonstrated at visible wavelengths for a target accuracy of 1%.
A collision-forcing method for optically thin media was also proposed. By using this method, the extinction coefficient can be modified to an arbitrary value larger than the original; consequently, single scattering albedo and the phase function are also modified according to similarity relations. This method can flexibly force frequent collisions where needed, thereby reducing Monte Carlo noise. In addition, compared with other methods of forcing collisions, modifications of the algorithm are minimal and easily implemented. Last, artificial numerical diffusion was proposed. In each sampling process, the energy of each photon packet is partitioned and redistributed to a rectangular horizontal area, the size of which is adaptively determined; the area is large when the sampled energy is large and when near-isotropic light travels a long path. Results showed that this method successfully smoothed noise in radiance and that the method worked well regardless of the number of trajectories.
All the proposed methods can be used in combination. According to the results presented in the previous section, we can expect about 1% accuracy by using a combination of the methods for pixel radiances with 105 photon packets per pixel in typical cases with cloud optical thickness of approximately 10. Better accuracy can be expected, of course, if more photon packets are used. Thus, these methods will still be valuable even when increased computing power becomes available in the future. Although all of the simulations in this paper were performed for monochromatic wavelengths, the computer time required for broadband calculations is independent of the number of spectral intervals and almost the same as for monochromatic calculations (e.g., Fu et al. 2000). This is a unique, attractive property of the Monte Carlo model. Thus, the merits of the model will be maximized when it is used for calculation of spectrally integrated radiative quantities.
The proposed methods are also useful for calculations of fluxes and heating rates, although we have restricted our discussion mainly to radiance in this paper. In particular, the collision-forcing method could significantly improve the efficiency of heating rate calculations in optically thin regions. In addition, numerical diffusion similar to that discussed in this paper could be incorporated for efficient sampling of fluxes and heating rates. Further discussion will be presented in a separate paper.
Acknowledgments
The author thanks Dr. Akira T. Noda of Tohoku University, Japan, for providing the large-eddy simulation data, and Dr. Tsuneaki Suzuki of the Japan Agency for Marine–Earth Science and Technology, Japan, for helpful comments. This work was partly supported by the Ministry of Education, Culture, Sports, Science and Technology, Grant-in-Aid for Scientific Research [(A) 17204039].
REFERENCES
Antyufeev, V. S., 1996: Solution of the generalized transport equation with a peak-shaped indicatrix by the Monte Carlo method. Russ. J. Numer. Anal. Math. Model., 11 , 113–137.
Barker, H. W., J-J. Morcrette, and G. D. Alexander, 1998: Broadband solar fluxes and heating rates for atmospheres with 3D broken clouds. Quart. J. Roy. Meteor. Soc., 124 , 1245–1271.
Barker, H. W., R. K. Goldstein, and D. E. Stevens, 2003: Monte Carlo simulation of solar reflectances for cloudy atmospheres. J. Atmos. Sci., 60 , 1881–1894.
Booth, T. E., 1985: A sample problem for variance reduction in MCNP. Los Alamos National Laboratory Rep. LA-10363-MS, 68 pp.
Cornet, C., J-C. Buriez, J. Riédi, H. Isaka, and B. Guillemet, 2005: Case study of inhomogeneous cloud parameter retrieval from MODIS data. Geophys. Res. Lett., 32, L13807, doi:10.1029/2005GL022791.
Evans, K. F., 1998: The spherical harmonics discrete ordinate method for three-dimensional atmospheric radiative transfer. J. Atmos. Sci., 55 , 429–446.
Evans, K. F., and A. Marshak, 2005: Numerical methods. 3D Radiative Transfer for Cloudy Atmospheres, A. B. Davis and A. Marshak, Eds., Springer-Verlag, 243–281.
Fu, Q., M. C. Cribb, H. W. Barker, S. K. Krueger, and A. Grossman, 2000: Cloud geometry effects on atmospheric solar absorption. J. Atmos. Sci., 57 , 1156–1168.
Hönninger, G., C. von Friedeburg, and U. Platt, 2004: Multi axis differential optical absorption spectroscopy (MAX-DOAS). Atmos. Chem. Phys., 4 , 231–254.
Iwabuchi, H., and T. Hayasaka, 2003: A multi-spectral non-local method for retrieval of boundary layer cloud properties from optical remote sensing data. Remote Sens. Environ., 88 , 294–308.
Kawrakow, I., and D. W. O. Rogers, 2001: The EGSnrc code system: Monte Carlo simulation of electron and photon transport. NRCC Rep. PIRS-701, 287 pp. (revised in 2003).
Liou, K-N., 2002: Introduction to Atmospheric Radiation. 2d ed. Academic Press, 583 pp.
Macke, A., D. L. Mitchell, and L. V. Bremen, 1999: Monte Carlo radiative transfer calculations for inhomogeneous mixed phase clouds. Phys. Chem. Earth, 24B , 237–241.
Marchuk, G., G. Mikhailov, M. Nazaraliev, R. Darbinjan, B. Kargin, and B. Elepov, 1980: The Monte Carlo Methods in Atmospheric Optics. Springer-Verlag, 208 pp.
Martin, G. M., D. W. Johnson, and A. Spice, 1994: The measurement and parameterization of effective radius of droplets in warm stratocumulus clouds. J. Atmos. Sci., 51 , 1823–1842.
Modest, M. F., 2003: Radiative Heat Transfer. 2d ed. Academic Press, 822 pp.
Nakajima, T., and M. Tanaka, 1988: Algorithms for radiative intensity calculations in moderately thick atmospheres using a truncation approximation. J. Quant. Spectrosc. Radiat. Transfer, 40 , 51–69.
O’Hirok, W., and C. Gautier, 1998: A three-dimensional radiative transfer model to investigate the solar radiation within a cloudy atmosphere. Part I: Spatial effects. J. Atmos. Sci., 55 , 2162–2179.
Thomas, G. E., and K. Stamnes, 1999: Radiative Transfer in the Atmosphere and Ocean. Cambridge University Press, 517 pp.
Várnai, T., and R. Davies, 1999: Effects of cloud heterogeneities on shortwave radiation: Comparison of cloud-top variability and internal heterogeneity. J. Atmos. Sci., 56 , 4206–4224.
APPENDIX
Other Possible Truncation Approximations
There are several possible truncation approximations that are well suited to the Monte Carlo model. In a preliminary study, we tested the accuracy of the methods described below.
Camera image–like hemispherical plots of local radiances viewed under a cloudy sky from a point on the surface: cloudy scenes of (a) case 1 and (b) case 2. The solar zenith angle was 60° for both cases. The color composite was made from the 0.45-, 0.55-, and 0.67-μm channels for blue, green, and red, respectively.
Citation: Journal of the Atmospheric Sciences 63, 9; 10.1175/JAS3755.1
Algorithm flowchart for the Monte Carlo radiative transfer model used in this study. Processes in the rectangles with thick lines used random numbers.
An example of the scattering phase function for water clouds with an effective radius of 10 μm at a wavelength of 0.67 μm. The dual-end truncation (presented in section 3b) is schematically shown with the parameters for fδ = 0.56.
Normalized diffuse radiances of (a), (b) plane-parallel clouds, and corresponding relative biases with dual-end truncation approximations for (c), (d) τ = 1, (e), (f) τ = 5, and (g), (h) τ = 25. The solar zenith angle is 40°, and results are shown for the principal plane. Left plots correspond to reflected radiances, and right plots correspond to transmitted radiances. In each plot, the right side (positive view angle) corresponds to the forward-viewing hemisphere.
(a) Rms error, (b) CPU time, and (c) the efficiency factor (see text for definition) for visible radiances computed for case 1 as functions of ζmin and Fmax. The number of photon packets used per column (unit area) was Ncol = 10³ for each simulation. Errors were evaluated with benchmark calculations that used Ncol = 10⁸.
(left) Top-of-atmosphere normalized nadir visible radiances reflected from the case 1 3D cloud scene and computed with schemes S, T, and V, using Ncol = 10³. (right) Corresponding errors. Scheme S is the standard scheme, whereas scheme T used the dual-end truncation approximation. Scheme V used all proposed techniques with the parameter set presented in Table 2.
Performance comparisons among various schemes for case 1 scenes at wavelengths of (a) 0.67 μm and (b) 2.13 μm. The crosses, triangles, and circles on each curve correspond to Ncol = 10³, 10⁴, and 10⁵, respectively.
(a) Rms error, (b) CPU time, and (c) the efficiency factor for radiances computed in case 2 as functions of τmin, using Ncol = 10³.
Summary of two cloud cases.
Parameter set used in scheme V.
Rms errors for scheme V, varying the coefficients for the numerical diffusion.