

Radar Super Resolution Using a Deep Convolutional Neural Network

  • 1 Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington
  • 2 Department of Atmospheric Sciences, University of Washington, Seattle, Washington

Abstract

Super resolution involves synthetically increasing the resolution of gridded data beyond their native resolution. Typically, this is done either with interpolation schemes, which estimate sub-grid-scale values from neighboring data and perform the same operation everywhere regardless of the large-scale context, or with a network of radars that have overlapping fields of view. Recently, significant progress has been made in single-image super resolution using convolutional neural networks. Conceptually, a neural network may be able to learn relations between large-scale precipitation features and the associated sub-pixel-scale variability and outperform interpolation schemes. Here, we use a deep convolutional neural network to artificially enhance the resolution of NEXRAD PPI scans. The model is trained on 6 months of reflectivity observations from the Langley Hill, Washington, radar (KLGX), and we find that it substantially outperforms common interpolation schemes for 4× and 8× resolution increases based on several objective error and perceptual quality metrics.
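
The sketch below illustrates the general idea of CNN-based super resolution of a single-channel reflectivity field; it is not the architecture from this paper. It builds an SRCNN-style network (bilinear upsampling followed by a few convolutions), trains it on synthetic low-/high-resolution patch pairs made by block-averaging, and compares it against plain bilinear interpolation. The layer widths, patch size, synthetic data, and 4× factor are all assumptions chosen for illustration.

    # Illustrative sketch only: a minimal SRCNN-style network for 4x super resolution
    # of single-channel reflectivity patches. Layer sizes, patch shape, and the
    # synthetic training data are assumptions, not the configuration used in the paper.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_sr_cnn(upscale=4):
        """Upsample a low-resolution reflectivity patch, then refine it with convolutions."""
        inp = layers.Input(shape=(None, None, 1))                      # low-res field (e.g., dBZ)
        x = layers.UpSampling2D(size=upscale, interpolation="bilinear")(inp)
        x = layers.Conv2D(64, 9, padding="same", activation="relu")(x)
        x = layers.Conv2D(32, 5, padding="same", activation="relu")(x)
        out = layers.Conv2D(1, 5, padding="same")(x)                   # high-res estimate
        return models.Model(inp, out)

    # Hypothetical training pairs: degrade fake "high-res" patches by block-averaging
    # (a stand-in for pairing coarsened and native-resolution radar scans).
    hi_res = np.random.rand(32, 64, 64, 1).astype("float32")
    lo_res = tf.nn.avg_pool2d(hi_res, ksize=4, strides=4, padding="VALID").numpy()

    model = build_sr_cnn(upscale=4)
    model.compile(optimizer="adam", loss="mse")
    model.fit(lo_res, hi_res, epochs=1, batch_size=8, verbose=0)

    # Baseline: plain bilinear interpolation of the same low-res input.
    bilinear = tf.image.resize(lo_res, (64, 64), method="bilinear").numpy()
    print("CNN MSE:     ", float(np.mean((model.predict(lo_res, verbose=0) - hi_res) ** 2)))
    print("Bilinear MSE:", float(np.mean((bilinear - hi_res) ** 2)))

The sketch only mirrors the general workflow of learning a mapping from low- to high-resolution fields; the study itself trains on NEXRAD reflectivity observations and evaluates against interpolation using objective error and perceptual quality metrics.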

Supplemental information related to this paper is available at the Journals Online website: https://doi.org/10.1175/JTECH-D-20-0074.s1.

© 2020 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Andrew Geiss, avgeiss@gmail.com


Supplementary Materials

    • Supplemental Materials (PDF 3.30 MB)