The day–night band (DNB) low-light-level visible sensor, mounted on the Suomi–National Polar-Orbiting Partnership (SNPP) satellite, can measure visible radiances from the earth and atmosphere (solar/lunar reflection, and natural/anthropogenic nighttime light emissions) during both day and night and can achieve unprecedented nighttime low-light-level imaging with its accurate radiometric calibration and fine spatiotemporal resolution. Exploiting these characteristics, a multichannel threshold (MCT) algorithm combining DNB with other Visible–Infrared Imager–Radiometer Suite (VIIRS) channels is proposed to monitor nighttime fog/low stratus. Through a gradual separation of the underlying surface (land, vegetation, water bodies, and city lights), snow, and high/medium clouds, a fog/low-stratus region can ultimately be extracted by the algorithm. The algorithm's feasibility is then verified with three typical cases of heavy fog/low stratus in China. The experimental results demonstrate that the outcomes of the MCT algorithm approximately coincide with the ground-measured results. Furthermore, the MCT algorithm shows promise for nighttime fog/low-stratus detection in these cases, with about a 0.84 average probability of detection (POD), a 0.73 average critical success index (CSI), and a 0.15 average false alarm ratio (FAR), revealing some improvement over the conventional dual-channel difference (DCD) algorithm.
Fog and low stratus, which are associated with poor visibility and air quality, are common obstacles to traffic on land, in air, and at sea. Therefore, their accurate description in time and space is significant for societies and economies (Pagowski et al. 2004; Cermak and Bendix 2008). While ground meteorological stations provide information about fog and low-stratus episodes, the data acquired by station observations are spatially discontinuous and temporally dispersed (Cermak et al. 2009; Chaurasia et al. 2011). In comparison, weather satellite data have the advantage of continuous spatial coverage while providing reliable near-real-time information on the spatiotemporal distribution of fog and low stratus (Cermak and Bendix 2007; Cermak et al. 2009).
Research on fog and low-stratus detection using satellite data has been carried out since the 1970s. Because of the limitations of spectral channels on early infrared radiation detectors, most research focused on monitoring and nowcasting daytime fog and low stratus by means of visible and near-infrared spectral channels. Gurka (1974) employed the visible imagery of the Synchronous Meteorological Satellite-1 (SMS-1) to analyze the dissipation process of radiation fog, finding that dissipation proceeds from the outer edge inward. Olivier (1995) applied Meteosat images to infer the annual and seasonal fog distribution in the Namib from the general spatial characteristics of fog; the results revealed a high consistency with ground measurements. Bendix et al. (2006) proposed a classification scheme that used the Streamer radiative transfer code (Key and Schweiger 1998) to derive the minimum and maximum albedo thresholds for daytime fog and low-stratus detection in Moderate Resolution Imaging Spectroradiometer (MODIS) solar bands 1–7 (0.62–2.155 μm), and the validation of the final fog and low-stratus mask generally showed a satisfactory performance. As the spectral channels on infrared radiation detectors gradually improved, it became possible to use infrared spectral channels for nighttime fog and low-stratus detection. Common techniques rely on the emissivity difference of fog/low-stratus droplets between infrared and middle-infrared wavelengths (Cermak and Bendix 2007). While both emissivities are approximately the same for larger droplets, the small droplets found in fog and stratus are less emissive at the 3.9-μm channel than at the 10.8-μm channel (Hunt 1973; Cermak and Bendix 2008). Applying Hunt's theory, Eyre et al. (1984) first implemented the dual-channel difference (DCD) algorithm on the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR).
Subsequently, many researchers applied similar techniques to various other types of sensors, which promoted the development of the DCD algorithm (e.g., Dybbroe 1993; Bendix 2002; Cermak and Bendix 2007; Chaurasia et al. 2011; Mosher 2013). However, Ellrod (1995) illustrated some shortcomings of the DCD algorithm, such as poor performance on layers of very thin fog or stratus (<100 m), occasional confusion between high/medium stratus and fog, and an inapplicability to certain types of soils, which produce similar signals at the 3.9- and 10.8-μm channels.
Meanwhile, the rapid advancement of low-light-level sensors made obtaining visible imagery under low illumination conditions at twilight or night a reality, which provided a fresh approach to the issue (Lee et al. 2010; Kuciauskas et al. 2013). Zhou et al. (2012) presented a bichannel threshold algorithm combining visible and infrared data from the Defense Meteorological Satellite Program (DMSP) Operational Linescan System (OLS), and the feasibility of the algorithm was preliminarily verified by validation experiments. However, the visible data available from OLS are quantized into only 64 gray levels and the infrared data into 256 levels (Lee et al. 2006). The lack of calibration and the coarse radiometric resolution limit its utility in quantitative environmental applications. The Visible–Infrared Imager–Radiometer Suite (VIIRS), launched onboard Suomi–National Polar-Orbiting Partnership (SNPP) on 28 October 2011, solves this problem with its day–night band (DNB) low-light-level sensor. It provides accurate radiometric calibration and favorable nighttime imaging capacity, and shows great potential for quantitative remote sensing at twilight or night (Lee et al. 2006; Miller et al. 2012, 2013).
Building on these characteristics of the DNB, we propose a multichannel threshold (MCT) algorithm combining DNB with other VIIRS channels to monitor nighttime fog and low stratus. The paper is organized as follows. Section 2 introduces the data used in the MCT algorithm. Section 3 elaborates on the theoretical principles and practical techniques of the MCT algorithm. Section 4 presents validation experiments, compares the MCT algorithm to the existing DCD algorithm, and discusses the relevant results. Section 5 summarizes the conclusions of this study.
The MCT algorithm for nighttime fog and low-stratus detection is carried out using SNPP VIIRS data. SNPP VIIRS has 22 channels with central wavelengths ranging from 0.412 to 12.01 μm (Seaman et al. 2014). Fog/low stratus, the underlying surface, snow, and high/medium clouds exhibit distinct radiance characteristics in certain spectral channels, which produce measurable differences in the radiances received by satellite sensors. For example, liquid water clouds are much brighter than snow in the shortwave infrared (1.3–3 μm), especially near 1.61 μm (Crane and Anderson 1984). According to these radiance differences, five spectral channels are used in the MCT algorithm. The selected channels are presented and summarized in Table 1.
VIIRS data are distributed in three primary forms with NASA terminology (Hillger et al. 2013): raw data record (RDR), sensor data record (SDR), and environmental data record (EDR). Raw data from the satellite are transmitted to Earth as RDR files. The RDR files are calibrated and converted to SDR files containing the calibrated radiance, reflectance, and/or brightness temperature data along with other data collected by the instrument (e.g., geolocation information). A limited number of SDR files are processed and mapped to a Ground Track Mercator (GTM) projection and are thus converted to EDR files (Seaman et al. 2014). Considering that the EDR data products have the advantage of constant spatial resolution, which is beneficial in matching the spatial location for each spectral channel (Schueler et al. 2013), seven VIIRS EDR data products are utilized in the algorithm: the DNB EDR data file (VNCCO), the DNB EDR geolocation file (GNCCO), the I-band EDR data files (VI1BO, VI2BO, VI3BO, and VI5BO), and the I-band EDR geolocation file (GIGTO). The VNCCO product provides the top-of-the-atmosphere (TOA) reflectance for each pixel and quality flag information. The GNCCO product provides the corresponding latitude, longitude, solar zenith angle, lunar zenith angle, lunar azimuth angle, satellite zenith angle, satellite azimuth angle, moon phase angle, and moon illumination fraction. The I-band EDR data files contain the brightness temperature or reflectance, radiance data, and quality flag information. The GIGTO product contains the corresponding latitude and longitude. Because the charge-coupled devices (CCDs) of the DNB and the I-bands are mounted on different focal planes, the two EDR geolocation files are utilized to match spatial pixels (Baker 2011a).
3. Algorithm description
The MCT algorithm is executed by a chain of processes for background removal and a surface homogeneity test. The overall structure of the MCT algorithm is outlined in Fig. 1.
The MCT algorithm starts by inputting the EDR data products of the five selected spectral channels. The first step is to preprocess the input data by deleting not applicable (N/A) fill values and matching spatial pixels. Both the EDR data files and geolocation files contain some N/A fill values, set to the integer 65 535 or the floating-point number −999.9, which keep the array size constant in each file (Seaman et al. 2014). Thus, the N/A fill value deletion can be accomplished by searching for and eliminating those values. Additionally, as described in section 2, the two EDR geolocation files (GNCCO and GIGTO) are both created by mapping to a GTM projection, which forces each pixel to maintain a constant size with constant horizontal resolution in the cross- and along-track directions; namely, GNCCO and GIGTO have spatial resolutions of 750 and 375 m, respectively. Therefore, the spatial pixel matching can be completed by a simple geolocation matchup. The second step involves removing the three types of backgrounds to obtain the preliminary test results; the procedure is described in detail in the following subsections. The final step is to examine the surface homogeneity of the preliminary test results. A pixel that meets all the prescribed criteria of each step is ultimately identified as a fog or low-stratus pixel.
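The two preprocessing operations can be sketched as follows. This is an illustrative sketch, not the operational code: the array names are invented, and the simple 2 × 2 block averaging that brings the 375-m I-band grid onto the 750-m DNB grid stands in for the full GTM geolocation matchup.

```python
import numpy as np

# Hypothetical sketch of the preprocessing step, assuming the EDR arrays
# have already been read into NumPy arrays.
NA_INT = 65535        # integer N/A fill value in the EDR files
NA_FLOAT = -999.9     # floating-point N/A fill value

def mask_fill_values(arr):
    """Return a float array with N/A fill values replaced by NaN."""
    out = arr.astype(np.float64)
    out[(arr == NA_INT) | np.isclose(arr, NA_FLOAT)] = np.nan
    return out

def aggregate_iband_to_dnb(iband):
    """Match the 375-m I-band grid to the 750-m DNB grid by averaging
    each 2 x 2 block of I-band pixels (a stand-in for the full
    GTM geolocation matchup)."""
    h, w = iband.shape
    blocks = iband[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return np.nanmean(blocks, axis=(1, 3))
```

In practice the masked (NaN) pixels would simply be excluded from all subsequent histogram and threshold computations.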
a. Removal of the underlying surface pixels
1) Removal of land/vegetation/water body pixels
During nighttime, the radiation received by the DNB low-light-level sensor at 0.5–0.9 μm depends on the reflectance of features and the lunar phase (excluding natural/anthropogenic light emissions, mainly city light radiances). According to Mie (1908) scattering theory, fog/clouds composed mainly of small liquid or solid particles scatter light strongly. The reflectance of fog/clouds/snow is therefore higher than that of the underlying surface, including land/vegetation/water bodies, so land/vegetation/water body pixels can theoretically be removed by setting a proper reflectance threshold. Although each type of target has a relatively fixed reflectance range, the threshold may vary with time and location to some extent because of differing viewing geometries and the physical/optical characteristics of the targets. Therefore, it is necessary to design a procedure for the dynamic retrieval of a proper threshold (Cermak and Bendix 2007, 2008). Otsu (1979) proposed an adaptive method for computing a threshold for grayscale image segmentation. The method regards an image as a composition of two classes: the backgrounds and the objectives. The level of difference is measured by the variance between the two classes, and the gray value that maximizes this variance is considered to be the best threshold. For the reflectance values in a VNCCO data file (recorded as 16-bit unsigned integers: 0–65 535), the data can be regarded as gray values in a 16-bit grayscale image. Assuming that the image has a size of M × N, a gray level of L, and N_i pixels with gray value i (0 ≤ i ≤ L − 1), the occurrence probability of the gray value i can be calculated as p_i = N_i/(M × N). Accordingly, the best threshold G is the gray value that maximizes the between-class variance:

G = arg max_{0 ≤ g ≤ L−1} σ_B²(g) = arg max_{0 ≤ g ≤ L−1} ω_0(g) ω_1(g) [μ_0(g) − μ_1(g)]²,   (1)

where g is the gray value ranging from 0 to L − 1 and the other parameters are defined as follows: ω_0(g) = Σ_{i=0}^{g} p_i and ω_1(g) = 1 − ω_0(g) are the occurrence probabilities of the two classes separated by g, and μ_0(g) and μ_1(g) are the mean gray values of the two classes. The threshold G calculated from Eq. (1) is used later in the text.
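The threshold retrieval can be sketched as a generic histogram-based implementation of Otsu's method (this is the standard published method, not the authors' code; the number of gray levels is a parameter).

```python
import numpy as np

# Generic Otsu (1979) threshold: separates dark underlying surface from
# brighter fog/cloud/snow pixels. `gray` is an integer array of gray
# values in 0 .. levels-1.
def otsu_threshold(gray, levels=65536):
    hist = np.bincount(gray.ravel(), minlength=levels).astype(np.float64)
    p = hist / hist.sum()                  # occurrence probability p_i
    omega0 = np.cumsum(p)                  # class probability up to g
    mu = np.cumsum(p * np.arange(levels))  # cumulative mean up to g
    mu_total = mu[-1]
    omega1 = 1.0 - omega0
    valid = (omega0 > 0) & (omega1 > 0)
    # between-class variance: omega0 * omega1 * (mu0 - mu1)^2,
    # written in the equivalent cumulative form
    sigma_b2 = np.zeros(levels)
    sigma_b2[valid] = (mu_total * omega0[valid] - mu[valid]) ** 2 / (
        omega0[valid] * omega1[valid])
    return int(np.argmax(sigma_b2))
```

For a clearly bimodal histogram the returned gray value falls in the valley between the two modes, which is exactly the behavior exploited in section 3a(3).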
2) Removal of city light pixels
At night, sources of natural/anthropogenic light emissions (e.g., city lights, fires, lightning, fishing boats, and gas flares) can also be detected by the DNB low-light-level visible sensor (Lee et al. 2006). Compared to other sources of light emissions, city lights, which usually cover a relatively large area in a low-light-level image of DNB, are generally more stable and more continuous both in time and space. Although some research has been accomplished to map city lights on the earth’s surface by using nighttime DMSP OLS data, the relevant techniques appear to be complicated in detail (Sullivan 1989; Elvidge et al. 1997; Baugh et al. 2010). To suppress interference caused by city lights in the fog/low-stratus detection, a method that uses nighttime DNB EDR data to roughly identify city light pixels is designed and discussed in this section. While most reflective features have a radiance range between 1.0 × 10−12 and 1.0 × 10−8 W cm−2 sr−1 from a new moon to a full moon through moonlight reflection (Miller et al. 2012), the typical nighttime light radiances have values between 1.0 × 10−9 and 3.0 × 10−7 W cm−2 sr−1 (Cao and Bai 2014). So, city lights should be brighter than those reflective features in a low-light-level image of DNB, especially when the lunar phase is close to a new moon. According to the differences among the gray values, a proper threshold can be manually determined to remove city light pixels by analyzing the statistical distribution of gray values in a VNCCO data file. To present our method, a typical 100 × 100 pixel region from a low-light-level image of DNB containing city lights, clouds, and the underlying surface is chosen as an example (1904 UTC 2 December 2012; 30.2649°–31.0933°N, 103.5187°–104.4795°E). The outcome is shown in Fig. 2.
As displayed in Fig. 2b, the gray-level histogram can be divided into three parts on the whole. The small, medium, and relatively larger gray value parts correspond to land/vegetation/water body pixels; fog/cloud/snow pixels; and city light pixels, respectively. The gray value G1 is manually chosen as a threshold to remove city light pixels. The process for the underlying surface removal is displayed in Fig. 3.
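Putting the two thresholds together, the underlying-surface and city-light removal reduces to a band-pass on gray values. In this sketch, G and G1 are placeholder values standing in for the Otsu-derived threshold and the manually chosen city-light threshold, respectively.

```python
import numpy as np

# Keep only gray values above the surface threshold G (Otsu-derived) and
# below the city-light threshold G1 (manually chosen); G and G1 here are
# placeholder values for illustration.
def remove_background(gray, G, G1):
    """Boolean mask of suspected fog/cloud/snow pixels."""
    return (gray > G) & (gray < G1)

gray = np.array([[100, 900, 30000], [1200, 45000, 700]])
mask = remove_background(gray, G=800, G1=20000)
```

Only the medium gray-value part of the histogram survives, matching the three-part interpretation of Fig. 2b.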
3) Low lunar illumination instances
As the DNB has a dynamic radiance range between 3.0 × 10−9 and 0.02 W cm−2 sr−1, the signal-to-noise ratio (SNR) is ~9 or greater when the received radiation is greater than the required minimum radiance level of Lmin = 3.0 × 10−9 W cm−2 sr−1 (Liao et al. 2013; Cao and Bai 2014; Liang et al. 2014). For a quarter moon or less, reflective features will become difficult to recognize because of the lack of moonlight illumination (Lee et al. 2006). Moreover, the SNR will be relatively lower if the lunar phase is closer to a new moon, which also increases the difficulty associated with removing the underlying surface. To validate the applicability of the process to different lunar phases, five typical cases of heavy fog/low stratus that occurred through an entire lunar cycle in China are employed. Considering that a low-light-level image created by an entire VNCCO file includes too large an area, only a portion of each image is chosen to focus on the suspected cloudy or foggy regions before executing the process. The outcomes are shown in Fig. 4.
As can be seen from Figs. 4a and 4b, the gray-level histograms present a unimodal distribution for a quarter moon or less. This may be caused by low SNR of the low-light-level images as shown in Figs. 4f and 4g. Because the Otsu method applies to an image with a bimodal gray-level distribution, it seems unable to determine a threshold G under the circumstances. In addition, Figs. 4c–e all appear to exhibit a roughly bimodal distribution for a half moon or more. While the two peaks correspond to land/vegetation/water bodies and fog/clouds/snow/city lights, respectively, the troughs represent the edges between backgrounds and objectives. As a result of a variant proportion of backgrounds and objectives in each low-light-level image, the depth of each trough is generally different from image to image. While Fig. 4e presents a relatively shallow trough, deep troughs are shown in Figs. 4c and 4d. However, the outcomes of threshold determination indicate that the Otsu method locates the troughs precisely in each situation. Moreover, Figs. 4k–m appear to reveal encouraging results after the removal of the underlying surface [the threshold G1 for the removal of city light pixels is chosen referring to the method in section 3a(2)].
b. Removal of snow pixels
Clouds and snow are both bright across the visible and near-infrared regions (0.4–1.3 μm), but clouds are much brighter than snow in the shortwave infrared (1.3–3 μm), especially near 1.61 μm, because the smaller size of the scatterers in clouds decreases the probability of absorption in this spectral region, where snow is moderately absorptive (Crane and Anderson 1984; Dozier 1984, 1989; Hall et al. 1995). On the basis of this difference, the normalized difference snow index (NDSI) and normalized difference vegetation index (NDVI) are used to remove snow pixels, following the snow classification techniques of the snow-mapping MODIS algorithm, SNOMAP (Hall et al. 2001). For the VIIRS EDR data, the NDSI and NDVI are calculated as

NDSI = (R_I1 − R_I3)/(R_I1 + R_I3),
NDVI = (R_I2 − R_I1)/(R_I2 + R_I1),

where R_I1, R_I2, and R_I3 mark the reflectances in the 0.64- (I1), 0.865- (I2), and 1.61-μm (I3) VIIRS channels, respectively. Generally, a pure snow-covered pixel maintains a relatively high NDSI and reflectance. A pixel that meets the criteria NDSI ≥ 0.4 and R_I2 ≥ 0.11 is classified as a snow pixel. When the snow pixel is located under a forest canopy, the NDSI decreases. On the premise that 0.1 < NDSI < 0.4, a pixel can still be classified as a snow pixel if it fits the following thresholds: NDVI < NDVI1 and NDVI > NDVI2, where the thresholds NDVI1 and NDVI2 are determined as functions of NDSI (Baker 2011b).
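The snow tests can be sketched as follows. Only the pure-snow criterion is implemented here, since the forest-canopy branch depends on the NDVI1/NDVI2 threshold curves from Baker (2011b), which are not reproduced in this section.

```python
# Sketch of the SNOMAP-style pure-snow test from the criteria above;
# r_i1, r_i2, r_i3 are reflectances in the VIIRS I1 (0.64 um),
# I2 (0.865 um), and I3 (1.61 um) channels.
def ndsi(r_i1, r_i3):
    return (r_i1 - r_i3) / (r_i1 + r_i3)

def ndvi(r_i1, r_i2):
    return (r_i2 - r_i1) / (r_i2 + r_i1)

def is_snow(r_i1, r_i2, r_i3):
    """Pure snow-covered pixel test: NDSI >= 0.4 and R_I2 >= 0.11."""
    return (ndsi(r_i1, r_i3) >= 0.4) & (r_i2 >= 0.11)
```

A bright snow pixel (high I1, low I3) passes the test, while vegetation (low I1, high I2) fails it on the NDSI criterion.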
Considering that the reflectance values in the I1, I2, and I3 channels are not available during nighttime, snow pixels are removed by means of the adjacent daytime data. For instance, concerning the typical case of heavy fog/low stratus that occurred at 1904 UTC 2 December 2012, adjacent daytime data including a 600 × 600 pixel region (0701 UTC 1 December 2012; 26.9213°–31.5994°N, 93.2319°–98.6304°E) are chosen as an example to present how to identify snow pixels using this process. The identification outcome of snow pixels is shown in Fig. 6.
c. Removal of high/medium cloud pixels
At night, the emitted radiation from objects themselves is the main energy source received by the satellite sensors in the longwave infrared region (8–14 μm). Compared with fog/low stratus and the underlying surface, high/medium clouds maintain a greater vertical height. Accordingly, the brightness temperatures (BTs) of high/medium clouds are normally much lower than those of fog/low stratus and the underlying surface. Although thin cirrus clouds can have relatively warm BTs as a result of significant radiation from the surface making it through to the satellite, they generally cover a small area in an infrared image. Based on the BT differences, the removal of high to medium clouds can be executed by manually determining an appropriate BT threshold. For the BT values in a VI5BO data file, describing the statistical distribution of BT values helps determine a threshold dynamically. The process for the removal of high/medium clouds is displayed in Fig. 7.
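Once T has been chosen from the BT histogram, the cloud screening reduces to a single brightness-temperature cut. In this sketch the threshold value is a placeholder in the spirit of the manually determined T from Fig. 8, not a recommended constant.

```python
import numpy as np

# Pixels colder than the manually chosen BT threshold T (degrees Celsius
# here, matching the case values quoted for Fig. 8) are flagged as
# high/medium cloud and removed; T = -10 is a placeholder value.
def remove_high_medium_cloud(bt_celsius, T):
    """Boolean mask of pixels kept (warmer than T)."""
    return bt_celsius > T

bt = np.array([-35.0, -8.0, 5.0, -60.0])
kept = remove_high_medium_cloud(bt, T=-10.0)
```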
The three cases that occurred during a half moon or more are used to validate the process. The outcomes are shown in Fig. 8.
As displayed in Figs. 8a–c, when the BT thresholds are T = −7°, −10°, and 0°C, respectively, each BT histogram shows a discernible boundary. The implication is that the value T corresponds to the edges in the infrared image between high/medium clouds and other features. Thus, the value T can be manually chosen as the BT threshold. The removal effect presented in Figs. 8g–i appears to perform well.
d. Surface homogeneity test
In terms of each preliminary test result, the fog/low-stratus entity is assumed to be fairly continuous and homogeneous (Cermak and Bendix 2005, 2008). In practice, however, a number of isolated fog/low-stratus pixels remain as noise. To find and eliminate these isolated pixels, the surface homogeneity is tested. The surface homogeneity SH is computed from the mean μ and standard deviation σ of the gray values in each 3 × 3 pixel area (the gray values are set to 1 for suspected fog/low-stratus pixels and 0 otherwise). The SH is calculated for every 3 × 3 pixel area whose central pixel appears as fog/low stratus in the preliminary test result, and only if SH exceeds a threshold (0.22) is the central pixel ultimately identified as a fog/low-stratus pixel. This threshold ensures that at least two other suspected fog/low-stratus pixels surround the central pixel, because SH equals 0.22 for a 3 × 3 pixel area containing a total of three suspected fog/low-stratus pixels.
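Since the SH formula itself is not reproduced here, the sketch below implements the stated criterion directly: a suspected fog/low-stratus pixel survives the test only if at least two of its eight neighbors in the 3 × 3 window are also suspected fog/low stratus.

```python
import numpy as np

# Equivalent form of the surface homogeneity test described above:
# keep a suspected pixel only if >= 2 of its 8 neighbors are suspected.
def homogeneity_test(mask, min_neighbors=2):
    m = mask.astype(np.int32)
    padded = np.pad(m, 1)
    # count suspected pixels in each 3 x 3 window, excluding the center
    neighbors = sum(
        padded[1 + di : 1 + di + m.shape[0], 1 + dj : 1 + dj + m.shape[1]]
        for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)
    )
    return mask & (neighbors >= min_neighbors)
```

An isolated suspected pixel (zero or one suspected neighbor) is thereby removed as noise, while small connected patches survive.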
4. Validation experiments
To validate the feasibility of the MCT algorithm and compare it to the existing DCD algorithm, experiments were performed on three typical cases of heavy fog/low stratus occurring during a half moon or more in China. The satellite data collected at 1900 UTC 27 September 2013 (case 1), 1904 UTC 2 December 2012 (case 2), and 1824 UTC 30 August 2012 (case 3) are used to execute both the MCT and DCD algorithms. For the ground-based meteorological data that are recorded every 3 h, the data collected at 1800 UTC 27 September 2013, 1800 UTC 2 December 2012, and 1800 UTC 30 August 2012 are employed to validate the accuracy of both algorithms. The ground meteorological stations recognize fog and low stratus by the criteria of horizontal visibility (HV), relative humidity (RH), and cloud-base height (CBH): HV ≤ 10 000 m (fog, HV ≤ 1000 m; light fog, 1000 < HV ≤ 10 000 m), RH ≥ 95%, and CBH ≤ 2500 m. As shown in Figs. 9–11, the fog/low-stratus results are obtained by both algorithms and verified by the ground-measured results.
In Figs. 9c, 9d, 10c, 10d, 11c, and 11d, the gray areas represent fog/low stratus monitored by satellite. In Figs. 9e, 9f, 10e, 10f, 11e, and 11f, the ground-measured results are displayed as points with colors. For case 1, Fig. 9c shows that the fog/low stratus mainly exists in southwestern and southeastern China, marked with the following letters: A, for the area east of Sichuan Province, Guizhou Province, and Chongqing city; B, for the area west of Guangdong Province and the central region of Jiangxi Province; and C, for Hubei Province. The ground-measured results verify that area A is covered with a widespread fog layer as shown in Fig. 9e, while areas B and C have low stratus instead, as shown in Fig. 9f. For case 2, Fig. 10c shows that the fog/low stratus occurs throughout extensive areas of southern China, marked with the following letters: A, for the area east of Sichuan Province; B, for Guizhou Province; C, for Guangxi Province; and D, for Hunan Province. While Fig. 10e indicates that areas A and D are covered with fog layers, Fig. 10f confirms that areas B and C have a wide range of low stratus. For case 3, Fig. 11c shows that the fog/low stratus is distributed primarily in northern China and adjacent North Korea, marked with the following letters: A, for the central region of Liaoning Province; B, for the border between China and North Korea; and C, for the east of Shandong Province. The corresponding regions are well verified by Fig. 11e.
The experimental results demonstrate that the fog/low-stratus areas monitored by the MCT algorithm basically coincide with the ground-measured results. In contrast, when using the DCD algorithm, there appear to be some obvious undetected fog/low-stratus pixels in all three of the cases, marked with triangles in Figs. 9d, 10d, and 11d, which confirms the shortcomings of the DCD algorithm to some extent, as mentioned in section 1. Additionally, both algorithms are found to be invalid in cases of overlapping clouds that are marked with stars in Figs. 9c, 9d, 9f, 10c–e, and 11c–f, because only the uppermost layer can be detected reliably.
To further investigate the feasibility of the algorithm, an accuracy analysis was performed. The chosen parameters are the false alarm ratio (FAR), the probability of detection (POD), and the critical success index (CSI) (Bendix et al. 2004). They are defined as

POD = NH/(NH + NM), FAR = NF/(NH + NF), CSI = NH/(NH + NM + NF),

where NH, NM, and NF indicate the numbers of hits, misses, and false alarms, respectively. A hit is defined as an instance that is classified as fog/low stratus by both the algorithmic and ground-measured results. For a miss, the ground-measured result recognizes the station as fog/low stratus, but the algorithmic result does not. For a false alarm, the algorithmic result identifies the station as fog/low stratus, but the ground-measured result does not. The hits, misses, and false alarms are displayed in Fig. 12.
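These standard verification scores can be expressed as small helper functions; for example, the case 1 counts with 1 × 1 matching (NH = 121, NM = 49, NF = 20, before the overlapping-cloud correction) yield a POD of about 0.71.

```python
# Standard categorical verification scores from hits (nh), misses (nm),
# and false alarms (nf).
def pod(nh, nm, nf):
    """Probability of detection: fraction of observed events detected."""
    return nh / (nh + nm)

def far(nh, nm, nf):
    """False alarm ratio: fraction of detections not observed on the ground."""
    return nf / (nh + nf)

def csi(nh, nm, nf):
    """Critical success index: hits over all detected or observed events."""
    return nh / (nh + nm + nf)
```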
Because the SNPP satellite scans with a view angle ranging from −56.063° to 56.063°, the geolocation information for fog or cloud layers (included in satellite geolocation files) generally differs from the real location of the earth’s surface because of parallax. While the parallax can be neglected for very thin fog or cloud layers with a top height close to the earth’s surface, it becomes increasingly evident when the fog or cloud layers exist at a higher altitude. Under these circumstances, the atmospheric profile observed vertically from the ground meteorological station does not match that viewed from the satellite (Cermak and Bendix 2008). To correct for the parallax, a 5 × 5 pixel matching method is utilized to optimize the general 1 × 1 pixel matching method. While the 1 × 1 pixel matching method regards the ground meteorological station and the nearest single pixel seen from the satellite as a matchup, the 5 × 5 pixel matching method treats each 5 × 5 pixel square as a whole. Consequently, all parameters are computed not only based on the 1 × 1 pixel matching method, but also based on the 5 × 5 pixel matching method.
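A minimal sketch of the 5 × 5 matchup follows. The "any flagged pixel in the window" rule is an assumption for illustration, since the text only states that the window is treated as a whole.

```python
import numpy as np

# Station-to-satellite matchup over a (2*half+1)-pixel square window
# centered on the pixel nearest the station; half=2 gives the 5 x 5
# window used to mitigate parallax.
def window_match(fog_mask, row, col, half=2):
    r0, r1 = max(0, row - half), min(fog_mask.shape[0], row + half + 1)
    c0, c1 = max(0, col - half), min(fog_mask.shape[1], col + half + 1)
    return bool(fog_mask[r0:r1, c0:c1].any())
```

Setting half=0 recovers the 1 × 1 matching method, so both schemes can share the same code path.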
For case 1, a total of 282 ground meteorological stations are available. The 1 × 1 pixel matching method produces the following outcome: NH = 121, NM = 49, and NF = 20. The 5 × 5 pixel matching method generates another outcome: NH = 129, NM = 44, and NF = 23. For case 2, a total of 303 ground meteorological stations are available. While the 1 × 1 pixel matching method produces an outcome (NH = 143, NM = 51, and NF = 21), the 5 × 5 pixel matching method generates another outcome (NH = 177, NM = 40, and NF = 23). A similar situation applies to case 3 (207 stations available in total) with a different outcome from NH = 77, NM = 35, and NF = 16 to NH = 85, NM = 30, and NF = 19 (the outcomes above are produced by the MCT algorithm, and a similar analysis is available for the DCD algorithm). Table 2 summarizes the results of the accuracy analysis scheme.
As described above, both algorithms become invalid in cases of overlapping clouds (marked with stars in Fig. 12), which causes a decline in the POD and CSI. To exclude those instances with overlapping clouds, the three parameters are recalculated after omitting the overlapping-cloud stations, which are identified by a combined use of the high/medium-cloud removal results in Fig. 8 and the ground-measured results in Figs. 9–11. For case 1, 16 stations are omitted, which changes the misses to NM = 33 (1 × 1 pixel matching method) and NM = 28 (5 × 5 pixel matching method). For case 2, 15 stations are omitted, changing the misses to NM = 36 (1 × 1 pixel matching method) and NM = 25 (5 × 5 pixel matching method). A similar variation applies to case 3 (10 stations omitted), with NM = 25 (1 × 1 pixel matching method) and NM = 20 (5 × 5 pixel matching method). Table 3 summarizes the recalculation results after omitting the overlapping clouds.
Table 3 reveals that the MCT algorithm achieves about a 0.15 average FAR, a 0.84 average POD, and a 0.73 average CSI for the three cases (5 × 5 pixel matching method after omitting the overlapping clouds). A low FAR demonstrates relatively few situations of overestimation, a high POD illustrates relatively few situations of underestimation, and a high CSI confirms the overall good accuracy of the MCT algorithm. Comparatively, the DCD algorithm achieves about a 0.16 average FAR, a 0.75 average POD, and a 0.66 average CSI. While the FAR computed by the two algorithms is almost the same, the POD and CSI achieved by the MCT algorithm are slightly greater than those computed by the DCD algorithm. Therefore, in terms of the three cases, the MCT algorithm has better accuracy for nighttime fog/low-stratus detection than the existing DCD algorithm. Additionally, the accuracy for case 2 is slightly better than that for cases 1 and 3, which may be related to factors such as the number of available ground meteorological stations, the thickness of the fog/low stratus, and the time lag between the satellite and ground-measured data. Compared with the 1 × 1 pixel matching method, the results of the 5 × 5 pixel matching method show some improvement for all parameters, which indicates the necessity of correcting for the parallax. Moreover, in contrast to Table 2, the FAR in Table 3 remains the same, but both the POD and CSI increase slightly. This confirms that a certain number of overlapping-cloud situations indeed exist in the three chosen cases.
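As a consistency check, the quoted Table 3 averages can be recomputed from the counts given earlier: NH and NF from the 5 × 5 matching, and NM after omitting the overlapping-cloud stations. Treating NH and NF as unchanged by the omission is an assumption here, though it is consistent with the FAR remaining the same between Tables 2 and 3.

```python
# MCT counts per case as (NH, NM, NF): NH and NF from the 5 x 5 matching,
# NM after omitting overlapping-cloud stations (NH, NF assumed unchanged).
cases = [(129, 28, 23), (177, 25, 23), (85, 20, 19)]

pods = [nh / (nh + nm) for nh, nm, nf in cases]
fars = [nf / (nh + nf) for nh, nm, nf in cases]
csis = [nh / (nh + nm + nf) for nh, nm, nf in cases]

def avg(xs):
    return sum(xs) / len(xs)

avg_pod, avg_far, avg_csi = avg(pods), avg(fars), avg(csis)
# rounded to two decimals these reproduce the quoted 0.84 / 0.15 / 0.73
```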
The DNB low-light-level sensor with accurate radiometric calibration and fine spatiotemporal resolution has shown great promise in promoting the development of nighttime sounding techniques. However, research on its quantitative environmental applications is still at an early stage. Regarding the topic of nighttime fog/low-stratus detection, an MCT algorithm is proposed based on DNB and other VIIRS channels. The algorithm regards the fog/low stratus as targets, and the underlying surface, snow, and high/medium clouds as backgrounds. First, by analyzing the reflectance or radiance differences between the targets and backgrounds in DNB, and the I1, I2, I3, and I5 channels, a series of thresholds for removing the backgrounds are determined and implemented to get a preliminary test result. Second, according to the continuous and homogeneous characteristic of the targets, a surface homogeneity test is employed to further eliminate the noise in the preliminary test result, and thus the final result is an identification of nighttime fog/low stratus. Three typical cases of heavy fog/low stratus that occurred during a half-moon (or more) phase are analyzed with certain experiments to validate the feasibility of the MCT algorithm and compare it to the existing DCD algorithm. The experimental results demonstrate the better accuracy of the MCT algorithm from both qualitative and quantitative perspectives.
Although the MCT algorithm achieved a satisfactory performance in the validation experiments, some limitations exist in the present algorithm, and several problems remain to be settled in future research. First, the Otsu method identifies the best gray-value threshold precisely only when a marked difference exists between the targets and backgrounds. In practice, however, the gray-level histograms occasionally present a unimodal distribution because of the low SNR of the low-light-level images during a quarter moon or less, which makes a proper threshold for the removal of land/vegetation/water bodies difficult to determine. It should also be noted that the DCD algorithm can be applied under very low lunar illumination, which may be considered an advantage over the MCT algorithm. In addition, the technique for removing high/medium clouds utilizes only the I5 channel (11.45 μm) to determine a dynamic BT threshold. While this method works well in many situations, it neglects two effects: thin cirrus, whose infrared temperature tends to be greater than its true temperature because of additional radiation received from the surface and low clouds (Mosher 2013), and the intense inversions that occasionally occur in high-latitude atmospheric conditions during winter nights, which can make the near-surface temperature colder than that of high/medium clouds and thus cause the technique to misdiagnose fog/low stratus as high/medium clouds (Pike 2013). To address these problems, the split-window technique using the BT differences between the VIIRS M15 (10.76 μm) and M16 (12.01 μm) channels (Godin 2014) will be added to the present technique for thin cirrus detection. The NCEP Global Data Assimilation System (GDAS) temperature profiles, in conjunction with the I5 channel infrared temperature, are also planned for retrieving cloud-top height (CTH) and further removing high/medium clouds.
The algorithm also proves invalid in cases of overlapping clouds, because the satellite cannot obtain radiance information through the whole atmospheric column. In addition, city lights may appear beneath a fog/low-stratus layer; whether such pixels should be removed therefore needs further analysis. Furthermore, the correction for the parallax may be better accomplished through a combined use of the scan angle information and the retrieved CTH in future work. Finally, fog and low stratus are treated as one type in this paper, so the feasibility of discriminating between fog and low stratus using SNPP data also requires further study.
The authors thank the National Oceanic and Atmospheric Administration (NOAA) for making the SNPP VIIRS data products publicly available. We acknowledge the SNPP VIIRS science team for the high quality products. The SNPP VIIRS data products were downloaded online (http://www.class.ncdc.noaa.gov/saa/products). We also thank the China Meteorological Administration (CMA) for providing ground-based meteorological data. This work was supported by the National Natural Science Foundation of China (Grants 41375029 and 41575028).