Comparison of Visual Features for Image-Based Visibility Detection

Rong Tang, Qian Li, and Shaoen Tang

College of Meteorology and Oceanography, National University of Defense Technology, Changsha, China

Rong Tang: https://orcid.org/0000-0002-9530-4925


Abstract

Image-based visibility detection has become an active research topic in surface meteorological observation. Visual feature extraction is the basis of these methods, and its effectiveness is a key factor in estimating visibility accurately. In this study, we compare and analyze the effectiveness of various visual features for visibility detection from three aspects: visibility sensitivity, robustness to environmental variables, and object depth sensitivity across multiple scenes. The candidates comprise three traditional visual features, namely local binary patterns (LBP), histograms of oriented gradients (HOG), and contrast, as well as three deep learned features extracted from the Neural Image Assessment (NIMA) and VGG-16 networks. Support vector regression (SVR) models that map the visual features to visibility are then trained separately on the region of interest (ROI) and on the whole image of each scene. The experimental results show that, compared with the traditional visual features, the deep learned features perform better in both feature analysis and model training. In particular, NIMA, with its lower feature dimensionality, achieves the best fitting effect and therefore shows good application prospects for visibility detection.
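To make the feature-to-SVR pipeline concrete, the following is a minimal sketch of the traditional-feature branch, assuming scikit-image and scikit-learn. The HOG and LBP parameters, the RMS-contrast proxy, and the synthetic images and visibility labels are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def extract_features(gray):
    """Concatenate HOG, an LBP histogram, and a contrast value for one image."""
    # HOG: histograms of gradient orientations over the frame.
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(32, 32),
                  cells_per_block=(2, 2), feature_vector=True)
    # LBP: histogram of "uniform" local binary patterns (texture); for P=8
    # the uniform codes take 10 distinct values, hence 10 bins.
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # Contrast: a simple global RMS-contrast proxy (not the paper's definition).
    contrast = np.array([gray.std() / max(gray.mean(), 1e-6)])
    return np.concatenate([hog_vec, lbp_hist, contrast])

# Placeholder data: random frames stand in for camera images of one scene
# (or its ROI crops), and random labels stand in for measured visibility.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(20, 128, 160), dtype=np.uint8)
visibility_km = rng.uniform(0.5, 20.0, size=20)

X = np.stack([extract_features(img) for img in images])

# One SVR per scene, mapping features to visibility; kernel and C are defaults.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, visibility_km)
print(model.predict(X[:3]))
```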

Significance Statement

Visual feature extraction is a basic step in image-based visibility detection and significantly affects detection performance. In this paper, we compare six candidate visual features, including traditional and deep learned features, in terms of visibility sensitivity, robustness to environmental variables, and object depth sensitivity across multiple scenes. SVR models are then trained to construct the mapping between each kind of feature and the visibility of each scene. The experimental results show that the deep learned features perform better in both feature analysis and model training; in particular, NIMA achieves the best fitting performance with fewer feature dimensions.
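For the deep learned features, a pretrained network can serve as a fixed feature extractor. Below is a minimal sketch using VGG-16 in Keras, where global average pooling of the last convolutional block yields a 512-dimensional vector per image; this pooled layer is a common choice and an assumption here, not necessarily the exact layer used in the paper. The resulting vectors would feed the same SVR training step sketched above.

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

# VGG-16 without its classifier head; pooling="avg" global-average-pools
# the final conv block into one 512-dim feature vector per image.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def deep_features(images_rgb):
    """images_rgb: float32 array of shape (N, 224, 224, 3), RGB in [0, 255]."""
    batch = preprocess_input(images_rgb.copy())  # VGG mean subtraction (in place)
    return extractor.predict(batch, verbose=0)   # shape (N, 512)

# Demo with random data; real inputs would be camera frames resized to 224x224.
demo = np.random.uniform(0, 255, size=(4, 224, 224, 3)).astype("float32")
print(deep_features(demo).shape)  # (4, 512)
```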

© 2022 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Qian Li, public_liqian@163.com


Supplementary Materials

    • Supplemental Materials (PDF 941 KB)