A Method of Visibility Detection Based on Transfer Learning

Qian Li, Shaoen Tang, Xuan Peng, and Qiang Ma

College of Meteorology and Oceanography, National University of Defense Technology, Nanjing, China

Qian Li: https://orcid.org/0000-0002-9530-4925

Open access

Abstract

Atmospheric visibility is an important element of meteorological observation. With existing methods, defining image features that reflect visibility accurately and comprehensively is difficult. This paper proposes a visibility detection method based on transfer learning using deep convolutional neural networks (DCNNs) that addresses the lack of sufficiently large labeled visibility datasets. In the proposed method, each image was first divided into several subregions, which were encoded to extract visual features using a pretrained no-reference image quality assessment neural network. Then a support vector regression model was trained to map the extracted features to visibility. The fusion weight of each subregion was evaluated according to the error analysis of the regression model. Finally, the neural network was fine-tuned, in turn, using the current detection results to better fit the problem of visibility detection. Experimental results demonstrated that the detection accuracy of the proposed method exceeds 90% and satisfies the requirements of daily observation applications.

Denotes content that is immediately available upon publication as open access.

© 2019 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Associate Professor Qian Li, public_liqian@163.com


1. Introduction

Atmospheric visibility is defined as the greatest distance at which a prominent dark object against the sky at the horizon can be observed and identified by an unaided eye (American Meteorological Society 2019). Low-visibility conditions can significantly impact traffic safety and military operations. Therefore, visibility is an important element of surface meteorological observations. Although there has been some progress in automatic visibility detection methods and equipment, the measurement accuracy and stability of existing methods are still inferior to those of manual observations. The definition of visibility is subject to interpretation, and the measurement results are influenced by many factors, such as illumination and background ambience. Thus, developing a method to detect visibility automatically and precisely remains challenging.

Existing visibility detection methods primarily involve visual measurement, optical instrument–based measurement, and image-based measurement. Visual measurement estimates visibility manually, so the detection accuracy is affected by subjective interpretations, the visual acuity of the observer, and the selection of reference objects (Lakshmi et al. 2017); the detection frequency is also limited. Optical instrument–based measurement primarily depends on transmissometers and scatterometers (Miclea and Silea 2015). These instruments use detection results obtained in a limited sampling space to represent those of the full range. Thus, the detection accuracy is easily affected by atmospheric conditions in the local space. In addition, the equipment used in such systems is prohibitively expensive for wide practical application.

Recently, cameras have been used extensively in traffic, security, and other monitoring fields because they provide wide coverage, as well as rich and accurate information about the scene. For these reasons, they are also used to measure visibility by exploiting the blurring and degradation of images and videos in low-visibility conditions. The existing image- and video-based methods can be classified as model driven and data driven (Hautière et al. 2010). In model-driven methods, the light attenuation effects on images and videos, such as refraction and reflection caused by suspended liquid and solid particles in air, are analyzed, and a physical model of light propagation is set up based on Koschmieder's law, in which model parameters such as the extinction coefficient are estimated to measure visibility (Babari et al. 2011a; Hautière et al. 2006; Babari et al. 2011b; Graves and Newsam 2011). For example, Cheng et al. (2018) estimated the extinction coefficient through piecewise functional fitting of observed luminance curves and measured the visibility value according to the International Commission on Illumination (CIE) visibility formula. Xiang et al. (2013) extracted the average Sobel gradient and dark channel ratio of images to construct a visibility estimation model. In addition, Li et al. (2019) estimated the extinction coefficient of the observed atmosphere by combining the dark channel prior and edge collapse–based transmission refinement. Typically, the estimation accuracy of model-driven methods is closely related to the definition of the physical model and the estimation precision of the model parameters. However, different particles in air have different effects on light propagation and cause different attenuation. Consequently, it is difficult to define a precise light propagation model that suits different conditions (Malm et al. 2018). Moreover, the spatial distribution of particles in air is frequently nonuniform; thus, a measurement for a local space may not accurately reflect the visibility of the global space.

The degradation effects on images in low-visibility conditions are also exploited in data-driven methods, in which image features are mapped directly to the estimated visibility by training with previously annotated data, without a specific physical model of light propagation. Designing appropriate image features is usually critical for data-driven methods, and features in both the spatial and frequency domains have been considered. For instance, Yin et al. (2011) extracted the local contrast of images and modeled the relationship between the features and visibility with supervised learning. The Sobel mask and the root-mean-square were extracted in Pokhrel and Lee (2011). In addition to these spatial-domain features, frequency-domain features have also been considered; for example, high-frequency components separated by high-pass filtering, such as the FFT, have been used to estimate visibility (Luo et al. 2005; Xie et al. 2008; Liaw et al. 2009). However, both traditional model-driven and data-driven methods use hand-crafted features and may not cope with complex illumination environments involving luminance and contrast changes. Recently, deep learning, which can effectively utilize large-scale datasets to complete recognition or classification tasks, has achieved significant progress and been widely used in machine vision. In particular, deep convolutional neural networks (DCNNs) are receiving more and more attention because of their ability to automatically learn multiscale representative features of an image with multilayer convolution structures. Chaabani et al. (2017) trained an artificial neural network on images to estimate the visibility range. In this paper, we adopt DCNNs to extract image features that reflect the attenuation effect of the atmosphere. However, training a DCNN typically requires massive amounts of quality-annotated training samples, which are rather difficult to obtain for the following reasons. First, annotation requires significant time, and ensuring the precision of the annotated values is difficult because they are influenced by the observer's expertise and subjective interpretations. Second, visibility is significantly affected by weather conditions, and the number of samples from severe weather with low visibility is relatively small, which leads to overfitting during training.

In recent years, several deep-learning-based no-reference image quality assessment (NR-IQA; Kim et al. 2017) models have been proposed. Similar to visibility detection, NR-IQA also aims to quantify perceptual picture quality by scoring images with an appropriate deep learning model, and it has been shown to achieve good performance.

Based on this intuition, a visibility detection method based on transfer learning, which utilizes the DCNN from an existing NR-IQA method, is proposed. In this method, high-level image features are extracted with a pretrained DCNN, that is, the deep image quality measure for no-reference IQA (DIQaM-NR) network proposed by Bosse et al. (2018), and the visibility estimation is obtained using support vector regression (SVR) (Drucker et al. 1997). Then the DCNN is fine-tuned using annotated samples to better fit the problem of visibility detection. Because some regions of an image are sensitive to various factors, such as monotonous content, occlusion, and specular reflections, which may result in detection errors, we divide each sample image into several subregions and encode them with the DIQaM-NR network. Next, the network's classification layer is replaced by an SVR to regress visibility from the encoded image features, and the SVR parameters are trained using visibility-annotated samples. Then the variance of the prediction distribution and the variance of the fitting of each regression model are treated as weights to fuse the estimated values of the subregions. Finally, the DCNN is fine-tuned, in turn, using the detection results.

The rest of the paper is organized as follows. In section 2, a mathematical model of the visibility detection problem is presented, and the overall architecture of the proposed method is summarized in a flow diagram. Section 3 introduces visibility detection based on transferring and fine-tuning the DIQaM-NR network. The method used to train the visibility regression model based on SVR is described in section 4, and section 5 introduces the weighted fusion of the estimated visibility values of all subregions. The experiments performed to evaluate the proposed method and the result analysis are described in section 6. The conclusions drawn from the obtained results are given in section 7.

In the following, uppercase characters represent sets, lowercase boldface characters represent vectors, and uppercase boldface sans serif characters represent matrices. Superscript "T" represents the transpose of a matrix, and superscript "−1" represents the inverse of a matrix. The subscript i represents the sequence number of an image, and the superscript q represents the sequence number of a subregion. The term υ^q denotes the visibility regressed from the coded feature vector m^q of the qth subregion, and υ indicates the result of fusing the υ^q of all subregions.

2. Model definition

Generally, data-driven visibility detection methods using historical image or video data involve a training image set X = {x_1, …, x_N}, a corresponding visibility-annotated set Y = {y_1, …, y_N}, and a regression function f(x). The image sample set and corresponding annotated set are used to train a detection model that maps an image sample x to its corresponding visibility value. To reduce the impact of the estimation errors of partial regions on the overall visibility detection result, each sample image is divided into n equally sized subregions (n = 9 in this paper). Thus, the subregion training set can be represented as X′ = {X^1, …, X^n} = {{x_1^1, …, x_N^1}, …, {x_1^n, …, x_N^n}}, and the corresponding visibility-annotated set can be represented as Y′ = {Y^1, …, Y^n} = {{y_1, …, y_N}, …, {y_1, …, y_N}}.
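As a concrete illustration of this preprocessing step, the following minimal sketch divides a camera frame into a 3 × 3 grid of equally sized subregions and resizes each to the 224 × 224 input resolution used later by the FE-V network. The function name, the use of OpenCV for resizing, and the placeholder frame are assumptions made for illustration, not part of the original implementation.

```python
import numpy as np
import cv2  # assumed available for resizing; any image library would do


def split_into_subregions(image: np.ndarray, rows: int = 3, cols: int = 3,
                          out_size: int = 224) -> list:
    """Divide an image into rows x cols equally sized subregions and
    resize each to the assumed FE-V input resolution (out_size x out_size)."""
    h, w = image.shape[:2]
    sub_h, sub_w = h // rows, w // cols
    subregions = []
    for r in range(rows):
        for c in range(cols):
            patch = image[r * sub_h:(r + 1) * sub_h, c * sub_w:(c + 1) * sub_w]
            subregions.append(cv2.resize(patch, (out_size, out_size)))
    return subregions


# Example: a 640 x 480 frame yields nine 224 x 224 subregions (q = 1, ..., 9).
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder for a camera image
patches = split_into_subregions(frame)
assert len(patches) == 9 and patches[0].shape[:2] == (224, 224)
```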

The flow of the proposed method is shown in Fig. 1. The subregion images are encoded by transferring the pretrained DIQaM-NR network to obtain the coded feature set. Then the coded features are used to train the corresponding SVR machine to estimate the visibility value of a given subregion. Finally, the fusion weights of each subregion are calculated based on error analysis of support vector regression, and the final detection result is obtained by fusing the visibility estimates of the subregions with the weights.

Fig. 1. Flow of the proposed method.

3. Network transfer

Atmospheric conditions degrade image properties such as luminance, contrast, color, and depth of field, and this degradation can be used to estimate atmospheric visibility. Existing methods usually extract one or more hand-crafted features; however, such methods do not always capture the various attenuation effects on the image. Because DCNNs can extract image features with greater representational power, we use them to extract image features that reflect the attenuation effect of the atmosphere. However, training a DCNN traditionally requires very large annotated training sets, which are difficult to obtain for visibility detection. In this study, an existing DCNN was therefore transferred and fine-tuned to encode images; that is, the proposed method uses it to extract representative features from images.

Transfer learning is a promising machine learning approach that utilizes existing knowledge from one domain to solve problems in different but related domains (Weiss et al. 2016). Note that the source and target data do not need to share the same distribution for the extracted features to be useful. Similar to visibility detection, which quantifies the degradation of image quality, NR-IQA aims to predict the perceptual quality of a digital image without access to its pristine counterpart (Wang and Bovik 2011). Recently, DCNNs introduced into NR-IQA have also achieved significant results (Amirshahi et al. 2016; Bianco et al. 2018; Scott and Hemami 2018) by extracting features of degraded images at different receptive fields, which establishes the relationship between image blurring and human visual perception and indicates the degree of image degradation. Similarly, data-driven visibility detection also seeks to capture the blurring and degradation effects of low-visibility weather on images by training a model that maps image features to annotated visibility values. At the feature extraction level, data-driven visibility detection therefore resembles DCNN-based NR-IQA; however, NR-IQA can utilize a much larger training set than is available for visibility detection. Inspired by this, the feature extraction component of the DIQaM-NR network proposed by Bosse et al. (2018) for NR-IQA was adopted for visibility detection to address the lack of large-scale annotated sets. To adapt the DIQaM-NR feature encoder to the visibility detection problem, we also use the visibility-annotated set to fine-tune the parameters of the transferred DCNN.

a. Network structure

To extract features from the sample images, the last max-pooling layer of the original DIQaM-NR feature encoder was replaced with a global-pooling layer, and a feature-encoding visibility detection network (FE-V network) was constructed. The network structure is shown in Fig. 2. The proposed FE-V network comprises 15 layers, that is, conv3–32 → conv3–32 → max pooling → conv3–64 → conv3–64 → max pooling → conv3–128 → conv3–128 → max pooling → conv3–256 → conv3–256 → max pooling → conv3–512 → conv3–512 → global pooling. Here, the convolutional layer parameters are denoted by "conv⟨receptive field size⟩–⟨number of channels⟩." All convolutional layers apply 3 × 3 pixel convolution kernels and the rectified linear unit (ReLU) activation function. To obtain an output of the same size as the input, convolutions are applied with zero padding. All max-pooling layers have 2 × 2 pixel kernels. Images input to the FE-V network are 224 × 224 pixels, and the output is a 512-dimensional feature vector. To train the visibility detection model, the subregion image set X′ is taken as input, and the resulting 512-dimensional feature vectors form the coded feature set M = {M^1, …, M^n} = {{m_1^1, …, m_N^1}, …, {m_1^n, …, m_N^n}}, which encodes the subregion images.
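For readers who want to reproduce the encoder, the following sketch assembles the FE-V layer sequence described above in PyTorch (the original work used Chainer; this re-implementation, the class name FEVNetwork, and the choice of global average pooling for the final global-pooling layer are assumptions made for illustration).

```python
import torch
import torch.nn as nn


class FEVNetwork(nn.Module):
    """Sketch of the FE-V encoder: five pairs of 3x3 conv layers with ReLU and
    zero padding, max pooling after the first four pairs, and a global pooling
    layer (assumed to be average pooling) yielding a 512-dimensional feature."""

    def __init__(self):
        super().__init__()
        cfg = [(3, 32), (32, 32), 'M', (32, 64), (64, 64), 'M',
               (64, 128), (128, 128), 'M', (128, 256), (256, 256), 'M',
               (256, 512), (512, 512)]
        layers = []
        for item in cfg:
            if item == 'M':
                layers.append(nn.MaxPool2d(kernel_size=2))
            else:
                in_ch, out_ch = item
                layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                           nn.ReLU(inplace=True)]
        self.features = nn.Sequential(*layers)
        self.global_pool = nn.AdaptiveAvgPool2d(1)  # replaces the last max pooling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)           # (B, 512, 14, 14) for 224 x 224 inputs
        x = self.global_pool(x)        # (B, 512, 1, 1)
        return torch.flatten(x, 1)     # (B, 512) coded feature vector


# A 224 x 224 RGB subregion is encoded into a 512-dimensional feature vector.
feat = FEVNetwork()(torch.zeros(1, 3, 224, 224))
assert feat.shape == (1, 512)
```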

Fig. 2. Architecture of the proposed FE-V network.

b. Fine-tuning

Generally, only the classification layer of a transferred network is replaced and retrained; that is, the feature extraction layers of the pretrained network are retained. Although visibility detection and image quality assessment are somewhat similar, the features needed for visibility detection still differ from those used in quality assessment, which may reduce detection accuracy. Thus, part of the feature extraction layers of the FE-V network is fine-tuned in the proposed method.

For different applications, the low-level features of images, that is, local features, are usually similar, while the high-level features, that is, global or semantic features, are quite different. Thus, fine-tuning all convolutional layers may reduce generalizability and require a large number of training samples. Therefore, we fix the parameters of the lower convolutional layers and tune only the convolutional layers close to the SVR model. Because the trainable parameters are inherited from the DIQaM-NR network, gradient descent does not begin from a random initial value, and fine-tuning can be performed iteratively from this starting point. The high-level convolutional layers can then extract features that are more suitable for the visibility detection problem.

The fine-tuning procedure is described as follows. First, the FE-V network that encodes the subregion images, initialized with the DIQaM-NR network parameters, is connected to a trained SVR model. Then the parameters of convolution layers 1–4 are fixed to inherit the low-level feature extraction ability of the source model, and the parameters of convolution layer 5 are set as trainable. Next, the subregion training image set is used to fine-tune the corresponding FE-V network layers for 100 epochs with a learning rate of 0.0001. The fine-tuning diagram for a single subregion is shown in Fig. 3. After fine-tuning, the training sample set is reencoded using the new FE-V network of each subregion. Finally, the SVR model is retrained using the new encoded feature set and the corresponding visibility-annotated set.
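The freezing-and-tuning step can be sketched as follows, continuing the hypothetical FEVNetwork from section 3a. Interpreting "convolution layers 1–4" as the first four convolutional blocks of that sketch, the proxy regression head, the Adam optimizer, the MSE loss, and the placeholder tensors are all assumptions made for illustration; the paper connects the encoder to the trained SVR and fine-tunes for 100 epochs at the learning rate given above.

```python
import torch
import torch.nn as nn

# Hypothetical split of the FEVNetwork sketched in section 3a: indices 0-19
# cover the first four conv blocks (including their max-pooling layers);
# indices 20-23 are the fifth block (conv3-512, conv3-512), left trainable.
model = FEVNetwork()
for p in model.features[:20].parameters():
    p.requires_grad = False

proxy_head = nn.Linear(512, 1)  # stand-in for the SVR during backpropagation
optimizer = torch.optim.Adam(
    list(model.features[20:].parameters()) + list(proxy_head.parameters()),
    lr=1e-4)
loss_fn = nn.MSELoss()

# Placeholder batch: subregion images and the current detection results used
# as regression targets (the paper fine-tunes for 100 epochs; two shown here).
subregions = torch.zeros(2, 3, 224, 224)
targets = torch.zeros(2, 1)
for epoch in range(2):
    optimizer.zero_grad()
    loss = loss_fn(proxy_head(model(subregions)), targets)
    loss.backward()
    optimizer.step()
```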

Fig. 3. Fine-tuning diagram.

4. Regression model

To map the image features to visibility values, an SVR model is adopted to establish the relationship between the coded image features and the annotated visibility values. SVR is commonly used to solve nonlinear regression problems with small training sets and can effectively overcome the difficulties resulting from the "curse of dimensionality" and overfitting (Smola and Schölkopf 2004). The coded feature m_i^q of subregion q of image i in the training set is mapped to a high-dimensional feature space φ(m_i^q), and the labeled visibility value y_i is the dependent variable. Linear regression in the high-dimensional space is then represented as follows:
$$y_i = f(\mathbf{m}_i^q) = \boldsymbol{\omega} \cdot \varphi(\mathbf{m}_i^q) + b, \tag{1}$$
where ω is a weight vector, and b is the bias term for the regression model. According to the principle of minimizing structural risks, ω and b can be estimated by minimizing the following objective functions:
$$R(\mathbf{m}_i^q) = \frac{1}{2}\|\boldsymbol{\omega}\|^2 + \frac{1}{N}\sum_{i=1}^{N}\left|f(\mathbf{m}_i^q) - y_i\right|_{\varepsilon}, \tag{2}$$
where N is the number of image samples in the training set, and |f(m_i^q) − y_i|_ε is the ε-insensitive loss function. To minimize the Euclidean norm ‖ω‖² and control the fitting error, the relaxation variables {ξ_i} and {ξ_i*} (i = 1, …, N) are introduced, and the optimization problem in Eq. (2) is transformed into the following constrained minimization problem:
$$\min\ \frac{1}{2}\boldsymbol{\omega}^{\mathrm{T}}\boldsymbol{\omega} + C\sum_{i=1}^{N}\xi_i + C\sum_{i=1}^{N}\xi_i^{*}
\quad \text{s.t.} \quad
\begin{cases}
\boldsymbol{\omega}^{\mathrm{T}}\varphi(\mathbf{m}_i^q) + b - y_i \le \varepsilon + \xi_i^{*}, \\
y_i - \boldsymbol{\omega}^{\mathrm{T}}\varphi(\mathbf{m}_i^q) - b \le \varepsilon + \xi_i, \\
\xi_i,\ \xi_i^{*} \ge 0, \quad i = 1, 2, \ldots, N.
\end{cases} \tag{3}$$
For simplicity, the Lagrange multipliers α_i, α_i*, η_i, and η_i* are introduced, and Eq. (3) is transformed into a dual problem. The SVR model is then obtained by solving the resulting constrained dual convex quadratic programming problem and is represented as follows:
$$f(\mathbf{m}) = \sum_{i=1}^{N}(\alpha_i - \alpha_i^{*})\,Q(\mathbf{m}_i^q, \mathbf{m}) + b^{*}, \tag{4}$$
where Q(m_i^q, m) is the kernel function, m is the coded feature vector of the input subregion image, and f(m) is the regressed visibility result. Note that the corresponding regression model for each subregion is trained separately. Commonly used kernels include linear, polynomial, and Gaussian radial basis kernel functions (Eigensatz 2006); the Gaussian radial basis function is used in the proposed method.
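A minimal sketch of this regression step, using scikit-learn's SVR (which wraps LIBSVM, the library mentioned in section 6a) with a Gaussian RBF kernel, is given below; the feature and label arrays and the hyperparameter values are illustrative placeholders, not values from the paper.

```python
import numpy as np
from sklearn.svm import SVR

# Placeholder training data for one subregion q: N coded 512-dimensional
# feature vectors m_i^q and their annotated visibility values y_i (in km).
rng = np.random.default_rng(0)
M_q = rng.normal(size=(200, 512))          # stand-in for FE-V encodings
y = rng.uniform(0.1, 10.0, size=200)       # stand-in for annotated visibility

# One SVR per subregion, with a Gaussian RBF kernel as in the paper. The
# penalty factor C and error limit epsilon are the quantities reused later to
# build the fusion weights; the values below are illustrative, not tuned.
svr_q = SVR(kernel="rbf", C=10.0, epsilon=0.1)
svr_q.fit(M_q, y)

# Regress the visibility of a new subregion from its coded feature vector m.
m = rng.normal(size=(1, 512))
visibility_q = svr_q.predict(m)[0]
```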

In Eqs. (2) and (3), the variable ε is the error limit of the regression function, which controls the insensitivity of the SVR to the data samples and is closely related to sample noise; too large an ε value yields poor prediction accuracy, while too small an ε value leads to overfitting. The variable C is the penalty factor, which reflects the degree of punishment applied to the training samples; too large a C value leads to poor generalization, and too small a C value yields poor prediction accuracy. Based on these considerations, we use these parameters to construct the fusion weights of the estimated values; the details are given in section 5.

5. Fusion of estimated visibilities

Some areas of an image are affected by various factors, such as monotonous image content, occlusion, and reflections, which introduce significant measurement errors into visibility detection. To obtain stable and precise detection results, the estimated values of all subregions are fused as follows:
$$V = \sum_{q=1}^{n} \upsilon^{q} \times \omega^{q}, \tag{5}$$
where V is the fused visibility result, n is the number of subregions (nine in this paper), and υ^q is the visibility estimated for the qth subregion. The term ω^q is the fusion weight, which reflects the reliability of the estimated visibility of the qth subregion image. To construct the fusion weights, the prediction variance σ^q, an important parameter describing the influence of data disturbance on the model, is used, and the fusion weight ω^q is expressed as follows:
$$\omega^{q} = \frac{1/\sigma^{q}}{\sum_{p=1}^{n} 1/\sigma^{p}}. \tag{6}$$

According to the SVR error analysis by Gao et al. (2002), the prediction variance σ^q of the qth subregion is defined by two terms, that is, the variance of the prediction distribution δ^q and the variance of the fitting β^q.

The variance of the prediction distribution δ^q represents the uncertainty of the joint distribution of the data and can be calculated from covariance matrices K of the training and test sets as follows:
$$\delta^{q} = K(l, l) - \mathbf{K}(Z, l)^{\mathrm{T}}\,\mathbf{K}(Z, Z)^{-1}\,\mathbf{K}(Z, l), \tag{7}$$
where Z = {{m_1^q, y_1}, …, {m_N^q, y_N}} is the set comprising the coded feature vectors of the qth subregion of the training samples and the corresponding visibility-annotated values, and the sample l consists of the coded feature of the qth subregion of the test image and its estimated visibility. Here, K(l, l) is the autocovariance of sample l, K(Z, l) is the covariance matrix of Z and l, K(Z, l)^T is the transpose of K(Z, l), and K(Z, Z)^{−1} is the inverse of the covariance matrix K(Z, Z). The covariance matrix K(A, B) is calculated as follows:
$$\mathbf{K}(A, B)_{u} = \begin{bmatrix} k_{11} & \cdots & k_{u1} \\ \vdots & \ddots & \vdots \\ k_{1u} & \cdots & k_{uu} \end{bmatrix}, \tag{8}$$
$$k_{or} = \mathrm{Cov}(h_o, h_r), \quad o, r = 1, 2, \ldots, u, \tag{9}$$
where u denotes the number of samples in the set H (i.e., the combination of sets A and B), h_o and h_r represent the oth and rth samples in H, respectively, and Cov(h_o, h_r) is the covariance of h_o and h_r.
The variance of the fitting β^q represents the fitting error caused by the inherent noise in the training set. Here, it is calculated from the penalty factor C^q and error limit ε^q estimated after training the SVR, as follows:
$$\beta^{q} = \frac{2(C^{q})^{2} + (\varepsilon^{q})^{2}(\varepsilon^{q}C^{q} + 3)}{3(\varepsilon^{q}C^{q} + 1)}. \tag{10}$$
Note that the penalty factor C^q and error limit ε^q were described in section 4. By combining Eqs. (7) and (10), σ^q is represented as
$$\sigma^{q} = \delta^{q} + \beta^{q} = K(l, l) - \mathbf{K}(Z, l)^{\mathrm{T}}\,\mathbf{K}(Z, Z)^{-1}\,\mathbf{K}(Z, l) + \frac{2(C^{q})^{2} + (\varepsilon^{q})^{2}(\varepsilon^{q}C^{q} + 3)}{3(\varepsilon^{q}C^{q} + 1)}. \tag{11}$$

Equation (11) gives the prediction variance of the qth subregion in the test image.
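The fusion step can be summarized in the short sketch below: β^q is computed from Eq. (10), combined with a precomputed δ^q to form σ^q as in Eq. (11), and the per-subregion estimates are fused with the normalized inverse-variance weights of Eqs. (5) and (6). The numerical values are illustrative placeholders, and the δ^q values are assumed to have been obtained from the covariance matrices of Eqs. (7)–(9).

```python
import numpy as np


def fitting_variance(c: float, eps: float) -> float:
    """Variance of the fitting beta^q from Eq. (10), computed from the SVR
    penalty factor C^q and error limit epsilon^q."""
    return (2.0 * c**2 + eps**2 * (eps * c + 3.0)) / (3.0 * (eps * c + 1.0))


def fuse_visibility(estimates, deltas, cs, epsilons):
    """Fuse per-subregion visibility estimates (Eq. (5)) with weights built
    from the prediction variances sigma^q = delta^q + beta^q (Eqs. (6), (11)).
    `deltas` are the variances of the prediction distribution, assumed to be
    precomputed from the covariance matrices in Eqs. (7)-(9)."""
    sigma = np.array([d + fitting_variance(c, e)
                      for d, c, e in zip(deltas, cs, epsilons)])
    weights = (1.0 / sigma) / np.sum(1.0 / sigma)   # Eq. (6)
    return float(np.dot(weights, np.asarray(estimates))), weights


# Illustrative values for nine subregions (not taken from the paper).
upsilon = [4.2, 4.0, 4.5, 3.9, 4.1, 4.3, 4.0, 4.2, 4.4]   # km
delta = [0.3, 0.5, 0.2, 0.4, 0.3, 0.2, 0.1, 0.1, 0.2]
C_q = [10.0] * 9
eps_q = [0.1] * 9
V, w = fuse_visibility(upsilon, delta, C_q, eps_q)
```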

6. Results and analysis

a. Data and equipment

To evaluate the proposed method, several experiments were performed on a server with a 3.3-GHz Intel i9-7900 CPU and 128 GB of memory. The model was trained with LIBSVM and Chainer on a real dataset acquired by an active camera positioned at a fixed location. The images in the dataset were sampled from videos captured by a Hikvision iDS-2DF831IX ball camera consistently from January to December 2014, each hour from 0500 to 1800 LT, from three nonoverlapping views. The image resolution was 640 × 480 pixels, and the visibility annotation of each training image was obtained from a Vaisala PWD20 forward-scattering visibility meter and revised manually. To revise the annotations, four observers were invited to estimate the visibility of each image visually. If the error between the observed result and the visibility meter measurement was larger than 10%, the visibility annotation was corrected; otherwise, the meter measurement was retained. A total of 5511 frames were selected as the dataset, among which 4078 were selected randomly as the training set and the remaining 1433 frames were used as the test set. Each image was divided into nine subregions, each resized to 224 × 224 pixels. To analyze the effectiveness of the method under different visibility ranges, all sample images were categorized into four levels according to their annotated values, with the distribution of samples shown in Table 1. Example images of the three views at different levels are shown in Fig. 4. As can be seen, the samples cover different visibility ranges, while the number of sample images with low visibility is relatively small, especially samples with visibility in the range of 0–0.5 km.
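The annotation revision rule can be sketched as follows; note that the source does not state exactly how the corrected value is chosen, so taking the mean of the four observers' estimates as the replacement is an assumption made purely for illustration.

```python
def revise_annotation(meter_value: float, observer_values: list) -> float:
    """Annotation revision rule sketched from section 6a: if the visual
    estimate differs from the PWD20 meter reading by more than 10%, the
    annotation is corrected (here, to the observers' mean estimate, which is
    an assumption); otherwise the meter reading is retained."""
    observed = sum(observer_values) / len(observer_values)
    if abs(observed - meter_value) / meter_value > 0.10:
        return observed
    return meter_value


# Example: the meter reports 5.0 km but observers agree on about 4.2 km.
label = revise_annotation(5.0, [4.1, 4.2, 4.3, 4.2])   # -> 4.2 (corrected)
```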

Table 1. Distribution of image samples at different visibility ranges.

Fig. 4. Example images of three views at different levels.

b. Experimental result

In the experiments, the detection results of selected example images are analyzed and statistical analysis of the distribution of estimated visibility over the test set is performed first in section 6b(1). Next, the estimated visibility and weight of each subregion and the correct rates for different numbers of segmented subregions are compared to evaluate the effectiveness of regional segmentation in section 6b(2). After that, to verify the validity of the proposed FE-V network and its parameter fine-tuning, the experimental results obtained with different encoding networks, that is, the classic Visual Geometry Group (VGG)-16 network, the FE-V network without fine-tuning, and the proposed FE-V network with parameter fine-tuning, are compared for different visibility ranges in section 6b(3). Different fusion strategies are then compared in section 6b(4). Finally, the performance of the proposed visibility detection method is compared with several state-of-the-art methods, that is, the luminance piecewise function (LPF) method proposed by Cheng et al. (2018), the average Sobel gradient (AS) method (Xiang et al. 2013), the convolutional neural network (CNN) method (Chaabani et al. 2017), and the contrast fitting (CF) method (Yin et al. 2011).

1) Case and statistical analysis

The visibility estimated by the proposed method for the sample images displayed in Fig. 4 is listed in Table 2. As can be seen from the table, the detection results are close to the ground truth.

Table 2. Detection values for image samples in Fig. 4.

To quantify the precision of the results for the test set, a detection result was defined as correct when the detection error was less than 10% of the ground truth; otherwise, it was considered incorrect. On this basis, the error distribution of results for the entire test set is shown in Fig. 5, where the three straight lines from top to bottom represent 110%, 100%, and 90% of the ground truth value. Points with correct detection results should therefore lie between lines l1 and l3. As shown in Fig. 5, the test results are mostly correct.
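The correctness criterion used throughout this section amounts to the following check, shown here as a small sketch with made-up numbers (a detection counts as correct when it falls between 90% and 110% of the ground truth).

```python
import numpy as np


def correct_rate(estimates: np.ndarray, ground_truth: np.ndarray,
                 tolerance: float = 0.10) -> float:
    """A detection is counted as correct when its relative error is less than
    10% of the ground truth, i.e., it lies between 90% and 110% of it."""
    rel_err = np.abs(estimates - ground_truth) / ground_truth
    return float(np.mean(rel_err < tolerance))


# Illustrative values only (not the paper's test set).
est = np.array([4.1, 2.6, 0.48, 9.2])
gt = np.array([4.0, 3.0, 0.5, 9.0])
rate = correct_rate(est, gt)   # 0.75 here: only 2.6 vs 3.0 exceeds the 10% bound
```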

Fig. 5. Distribution of test results.

On the basis of the distribution of detected visibility, the mean and standard deviation of the estimated values were compared with those of the ground truth, and the correct detection rate is given in Table 3. As can be seen, the differences in the mean and standard deviation between the estimates and the ground truth are small, which indicates that the dispersion of the visibility estimated with our method is reasonably consistent with the ground truth. The average error of the detection results is within 10% of the average ground truth visibility, and the correct detection rate also exceeds 90%, with the exception of the 0–0.5-km interval. This is primarily because the image degradation effect is so significant when visibility is under 0.5 km that it is difficult to extract meaningful visual features from the image. Nevertheless, the correct detection rate in this range reaches 84%.

Table 3. Statistical analysis of experimental results.

2) Analysis of regional segmentation

To evaluate the effectiveness of regional segmentation, the estimated visibility and weight of each subregion of view 1 (top panels of Figs. 4a and 4d) are shown in Table 4. As can be seen, the estimated visibilities of areas with rich detail, such as subregions 7–9 of view 1 (Fig. 4a, top), are closer to the ground truth, and the weights of these subregions are correspondingly higher than those of areas with plain content, such as subregions 1–3. Meanwhile, to evaluate the influence of the number of segmented subregions on visibility detection, the correct detection rates with 1, 4, 9, and 16 subregions are listed in Table 5. As shown in Table 5, the correct detection rates with both 4 and 16 subregions are lower than those with 9 subregions. This is mainly because the number of subregions determines the size of each subregion. If a segmented subregion is too large, it contains too many objects and details at different distances, which may increase the detection error; if it is too small, it contains too few objects and details, which leads to poor image gradients and high fluctuation. In summary, dividing each image into nine subregions is the most suitable choice according to the experimental results.

Table 4. The estimated results of different subregions of view 1.

Table 5. Correct rates for different numbers of subregions.

3) Comparison of different networks

To evaluate the proposed FE-V network with parameter fine-tuning, we compared it with existing networks: VGG-16 (Simonyan and Zisserman 2014) and the FE-V network without fine-tuning. The VGG-16 network was trained on the ImageNet dataset, and the 1000-dimensional vector output after its three fully connected layers was used as the encoding feature; the FE-V network without fine-tuning used the original network to extract a 512-dimensional feature vector as the encoding feature. The correct detection rate of each visibility range for the three networks is listed in Table 6. Although the overall correct rate of VGG-16 is above 90%, it performs poorly in the low-visibility range because of the lack of low-visibility images in the ImageNet dataset; thus, the representational power of its features is weaker in this range. The correct rate is improved by the original FE-V network in its initial state because the FE-V network, trained with many blurred images, is more sensitive to image attenuation and can effectively extract different levels of features from a blurred image. The best results are obtained with the fine-tuned FE-V network we propose, particularly in the low-visibility range, where the detection accuracy is improved remarkably. This network also demonstrates greater stability across visibility ranges. This is because the training set used to fine-tune the parameters contains more low-visibility image samples, and the model is therefore more sensitive to the image degradation caused by changes in visibility.

Table 6. Comparison of experimental results of different networks.

4) Comparison of different fusion strategies

To evaluate the proposed fusion strategy, several classic fusion strategies, that is, random subregion selection (random value), overall image resizing (overall resizing), averaging of subregion estimates (average value), and our fusion strategy (weight fusion), were selected for comparison. For the random value strategy, one subregion of the test image was randomly selected, and its estimated result was taken as that of the entire image. For the overall resizing strategy, the test image was resized to the input resolution of the learning model, and the estimated visibility was taken as that of the entire image. For the average value strategy, the detection results of all subregions were averaged to obtain the final detection result. The results of the comparison of the different fusion strategies are shown in Table 7. As can be seen from the table, the random subregion selection strategy is the worst of all, and the overall image resizing strategy also performs poorly, mainly because the content of individual parts of the image affects the detection result. The average estimation over all subregions performs better because it can smooth excessive estimation errors in some regions. However, when visibility is low, for example, in the 0–0.5-km range, the detection error of the average value strategy increases significantly, with low stability. In general, the weight fusion strategy we propose achieves better performance and stability by effectively fusing the local estimates of the subregion images with the variance of the prediction distribution and the variance of the fitting.

Table 7. Correct rates of different fusion strategies.

5) Comparison of different methods

The proposed method was compared with four representative detection methods, that is, the LPF, AS, CNN, and CF methods, in which the LPF and AS methods are model driven, and the CNN and CF methods are data driven. The CNN method only estimates the visibility range, and its estimation is considered correct when it falls in the right range. The comparison results of the different methods are shown in Table 8. As can be seen, the data-driven methods (CNN, CF, and our proposed method) generally perform better than the model-driven methods (LPF and AS), especially for bad weather with low visibility. This is primarily because it is difficult for a single physical model to precisely reflect the influence on visibility of refraction and light scattering caused by different particles in the atmosphere, and the calculation of atmospheric transmittance in model-driven methods is inaccurate. Although the CNN and CF methods perform better when there are sufficient training data for high visibilities, when the number of low-visibility training samples is small the model is more likely to overfit, which leads to a poor correct detection rate for low-visibility test samples. In contrast, the correct detection rate of each visibility range is improved to a certain extent with our proposed method. Note that the CNN method demonstrates a higher correct detection rate than the proposed method when visibility is high; this is mainly because the CNN method only needs to predict the correct range, which also makes it less widely applicable than the proposed method. By transferring features in an isomorphic space, the proposed method makes it possible to solve the detection problem for low-visibility images. In addition, the proposed subregion fusion strategy significantly improves the overall correct detection rate (93.86%). Generally, the proposed method is superior to traditional model-driven and data-driven methods and is more suitable for visibility detection applications.

Table 8. Comparison of experimental results with different methods.

6) Performance evaluation

To evaluate the real-time performance and feasibility of the proposed model, the training time was recorded and the computation times of several methods were compared. In the proposed method, nine SVR models and FE-V networks were trained, one per subregion. Training the SVR models and fine-tuning took about 2 and 16 h, respectively. The test time for 100 frames on the server described in section 6a is shown in Table 9. As can be seen from Tables 8 and 9, the model-driven methods (LPF and AS) and the CF method have a large advantage in test time over the remaining data-driven methods (CNN and the proposed method), but their correct detection rates are low. The test time of the CNN method is moderate, but only the correct visibility range can be predicted with that method. Although the proposed method takes longer, its test time is within an acceptable range and can meet the requirements of daily operational use.

Table 9. Comparison of test time with different methods.

7. Conclusions

In this paper, a visibility detection method based on transfer learning was proposed, in which an FE-V network, transferred from the DCNN used in a no-reference image quality assessment application, was adopted to extract representative features from the subregions of an image for atmospheric visibility detection and was then fine-tuned, in turn, with the estimated results and annotated visibility. A support vector regression model was trained to map the encoded features to visibility estimates, and the visibility estimates of all subregions were fused with weights derived from the regression model parameters. Compared with existing methods, the proposed method does not need a precisely defined physical model and eliminates the requirement for large-scale visibility-annotated sets; it also reduces training complexity, because the pretrained neural network is fine-tuned instead of being trained from scratch. The experimental results indicate that the proposed method demonstrates superior performance and can be implemented in ground observation.

Acknowledgments

We thank the anonymous reviewers for their valuable comments. This work is supported by National Natural Science Foundation of China (Grant 41305138), the China Postdoctoral Science Foundation (Grant 2017M621700), and the National Key Research and Development Program of China (Grant 2018YFC1507604).

REFERENCES

• American Meteorological Society, 2019: "Visibility." Glossary of Meteorology, http://glossary.ametsoc.org/wiki/Visibility.

• Amirshahi, S. A., M. Pedersen, and S. X. Yu, 2016: Image quality assessment by comparing CNN features between images. J. Imaging Sci. Technol., 60, 60410, https://doi.org/10.2352/J.ImagingSci.Technol.2016.60.6.060410.

• Babari, R., N. Hautière, E. Dumont, R. Brémond, and N. Paparoditis, 2011a: A model-driven approach to estimate atmospheric visibility with ordinary cameras. Atmos. Environ., 45, 5316–5324, https://doi.org/10.1016/j.atmosenv.2011.06.053.

• Babari, R., N. Hautière, E. Dumont, J. P. Papelard, and N. Paparoditis, 2011b: Computer vision for the remote sensing of atmospheric visibility. Int. Conf. on Computer Vision Workshops, Barcelona, Spain, IEEE, 219–226, https://doi.org/10.1109/ICCVW.2011.6130246.

• Bianco, S., L. Celona, P. Napoletano, and R. Schettini, 2018: On the use of deep learning for blind image quality assessment. Signal Image Video Process., 12, 355–362, https://doi.org/10.1007/s11760-017-1166-8.

• Bosse, S., D. Maniry, K. R. Müller, T. Wiegand, and W. Samek, 2018: Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process., 27, 206–219, https://doi.org/10.1109/TIP.2017.2760518.

• Chaabani, H., F. Kamoun, H. Bargaoui, and F. Outay, 2017: A neural network approach to visibility range estimation under foggy weather conditions. Procedia Comput. Sci., 113, 466–471, https://doi.org/10.1016/j.procs.2017.08.304.

• Cheng, X., B. Yang, G. Liu, T. Olofsson, and H. Li, 2018: A variational approach to atmospheric visibility estimation in the weather of fog and haze. Sustainable Cities Soc., 39, 215–224, https://doi.org/10.1016/j.scs.2018.02.001.

• Drucker, H., C. J. Burges, L. Kaufman, A. J. Smola, and V. Vapnik, 1997: Support vector regression machines. Advances in Neural Information Processing Systems, MIT Press, 155–161.

• Eigensatz, M., 2006: Insights into the geometry of the Gaussian kernel and an application in geometric modeling. M.S. thesis, Swiss Federal Institute of Technology Zurich, 55 pp.

• Gao, J. B., S. R. Gunn, C. J. Harris, and M. Brown, 2002: A probabilistic framework for SVM regression and error bar estimation. Mach. Learn., 46, 71–89, https://doi.org/10.1023/A:1012494009640.

• Graves, N., and S. Newsam, 2011: Using visibility cameras to estimate atmospheric light extinction. Workshop on Applications of Computer Vision, Kona, HI, IEEE, 577–584, https://doi.org/10.1109/WACV.2011.5711556.

• Hautière, N., R. Labayrade, and D. Aubert, 2006: Estimation of the visibility distance by stereovision: A generic approach. IEICE Trans. Inf. Syst., 89, 2084–2091, https://doi.org/10.1093/ietisy/e89-d.7.2084.

• Hautière, N., R. Babari, É. Dumont, R. Brémond, and N. Paparoditis, 2010: Estimating meteorological visibility using cameras: A probabilistic model-driven approach. Asian Conf. on Computer Vision, Queenstown, New Zealand, Asian Federation of Computer Vision, 243–254, https://doi.org/10.1007/978-3-642-19282-1_20.

• Kim, J., H. Zeng, D. Ghadiyaram, S. Lee, L. Zhang, and A. C. Bovik, 2017: Deep convolutional neural models for picture-quality prediction: Challenges and solutions to data-driven image quality assessment. IEEE Signal Process. Mag., 34, 130–141, https://doi.org/10.1109/MSP.2017.2736018.

• Lakshmi, C. R., D. T. Rao, and G. V. S. Rao, 2017: Fog detection and visibility enhancement under partial machine learning approach. Int. Conf. on Power, Control, Signals and Instrumentation Engineering, Chennai, India, IEEE, 1192–1194, https://doi.org/10.1109/ICPCSI.2017.8391898.

• Li, Q., Y. Li, and B. Xie, 2019: Single image-based scene visibility estimation. IEEE Access, 7, 24 430–24 439, https://doi.org/10.1109/ACCESS.2019.2894658.

• Liaw, J. J., S. B. Lian, Y. F. Huang, and R. C. Chen, 2009: Atmospheric visibility monitoring using digital image analysis techniques. Int. Conf. on Computer Analysis of Images and Patterns, Münster, Germany, University of Salerno, 1204–1211, https://doi.org/10.1007/978-3-642-03767-2_146.

• Luo, C. H., C. Y. Wen, C. S. Yuan, J. J. Liaw, C. C. Lo, and S. H. Chiu, 2005: Investigation of urban atmospheric visibility by high-frequency extraction: Model development and field test. Atmos. Environ., 39, 2545–2552, https://doi.org/10.1016/j.atmosenv.2005.01.023.

• Malm, W., S. Cismoski, A. Prenni, and M. Peters, 2018: Use of cameras for monitoring visibility impairment. Atmos. Environ., 175, 167–183, https://doi.org/10.1016/j.atmosenv.2017.12.005.

• Miclea, R. C., and I. Silea, 2015: Visibility detection in foggy environment. 20th Int. Conf. on Control Systems and Computer Science, Bucharest, Romania, IEEE, 959–964, https://doi.org/10.1109/CSCS.2015.56.

• Pokhrel, R., and H. Lee, 2011: Algorithm development of a visibility monitoring technique using digital image analysis. Asian J. Atmos. Environ., 5, 8–20, https://doi.org/10.5572/ajae.2011.5.1.008.

• Scott, E. T., and S. S. Hemami, 2018: No-reference utility estimation with a convolutional neural network. Electron. Imaging, 9, 202, https://doi.org/10.2352/ISSN.2470-1173.2018.09.IRIACV-202.

• Simonyan, K., and A. Zisserman, 2014: Very deep convolutional networks for large-scale image recognition. arXiv, https://arxiv.org/abs/1409.1556.

• Smola, A. J., and B. Schölkopf, 2004: A tutorial on support vector regression. Stat. Comput., 14, 199–222, https://doi.org/10.1023/B:STCO.0000035301.49549.88.

• Wang, Z., and A. C. Bovik, 2011: Reduced- and no-reference image quality assessment. IEEE Signal Process. Mag., 28, 29–40, https://doi.org/10.1109/MSP.2011.942471.

• Weiss, K., T. M. Khoshgoftaar, and D. Wang, 2016: A survey of transfer learning. J. Big Data, 3, 9, https://doi.org/10.1186/S40537-016-0043-6.

• Xiang, W., J. Xiao, C. Wang, and Y. Liu, 2013: A new model for daytime visibility index estimation fused average Sobel gradient and dark channel ratio. Third Int. Conf. on Computer Science and Network Technology, Dalian, China, IEEE, 109–112, https://doi.org/10.1109/ICCSNT.2013.6967074.

• Xie, L., A. Chiu, and S. Newsam, 2008: Estimating atmospheric visibility using general-purpose cameras. Fourth Int. Symp. on Visual Computing, Las Vegas, NV, University of Nevada, 356–367, https://doi.org/10.1007/978-3-540-89646-3_35.

• Yin, X. C., T. T. He, H. W. Hao, X. Xu, X. Z. Cao, and Q. Li, 2011: Learning based visibility measuring with images. 18th Int. Conf. on Neural Information Processing, Shanghai, China, Asia Pacific Neural Network Assembly, 711–718, https://doi.org/10.1007/978-3-642-24965-5_80.