Automatic color correction for remote sensing optical images based on dense convolutional networks
  • 英文篇名:Automatic color correction for remote sensing optical image based on dense convolutional networks
  • Authors: ZHU Sijie; LEI Bin; WU Yirong
  • English authors: ZHU Sijie; LEI Bin; WU Yirong; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences; Key Laboratory of Spatial Information Processing and Application System Technology of Chinese Academy of Sciences, Institute of Electronics, Chinese Academy of Sciences
  • Keywords: remote sensing optical image; convolutional neural networks; color correction; automation
  • Journal code (Chinese): ZKYB
  • Journal title (English): Journal of University of Chinese Academy of Sciences
  • Affiliations: School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences; Key Laboratory of Spatial Information Processing and Application System Technology of Chinese Academy of Sciences, Institute of Electronics, Chinese Academy of Sciences
  • Publication date: 2019-01-15
  • Published in: Journal of University of Chinese Academy of Sciences
  • Year: 2019
  • Volume: 36
  • Issue: 01
  • Funding: Supported by the National Natural Science Foundation of China (61331017)
  • Language: Chinese
  • Record ID: ZKYB201901013
  • Pages: 96-103 (8 pages)
  • CN: 10-1131/N
Abstract
Many effective color correction algorithms already exist for a single remote sensing optical image, but they rely on manual experience or prior knowledge of the scene and therefore cannot meet the demand for automated processing of the rapidly growing volume of remote sensing imagery. To address this problem, we propose DCN (dense convolutional networks), an automatic color correction method for remote sensing images. The model predicts a color correction coefficient K for each of the RGB channels of a remote sensing image, and these coefficients are then used to correct the original image automatically. DCN replaces some of the convolutional layers with dense blocks, achieving more connections with fewer layers. The model was trained on 3 000 GF-2 remote sensing images in the TensorFlow framework, with the color deviation angle θ between the predicted coefficient vector and the ground-truth vector as the loss function. Tests show that the corrected images exhibit only a small color deviation angle and agree well with the true ground-object colors. Compared with traditional methods, the trained model can directly correct images that never appeared in the training set, requiring no prior knowledge of the scene, no manual experience, and no reference image, thus enabling automated color correction of massive volumes of remote sensing optical imagery. Compared with a traditional CNN (convolutional neural networks), the DCN-based model has fewer parameters and better generalization, is not restricted by the input image size, and achieves better results on the test set.
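The per-channel correction by K and the color-deviation-angle loss θ described above can be sketched as follows. This is a minimal numpy illustration, not the paper's code; the function names and the clipping of pixel values to [0, 1] are assumptions:

```python
import numpy as np

def apply_correction(img, k):
    """Scale each RGB channel of img (H, W, 3, values in [0, 1])
    by its predicted correction coefficient k (shape (3,))."""
    return np.clip(img * k, 0.0, 1.0)

def color_deviation_angle(k_pred, k_true):
    """Angle theta in degrees between the predicted and ground-truth
    coefficient vectors; 0 means a perfect color match."""
    k_pred = np.asarray(k_pred, dtype=float)
    k_true = np.asarray(k_true, dtype=float)
    cos = k_pred @ k_true / (np.linalg.norm(k_pred) * np.linalg.norm(k_true))
    # clip guards against tiny floating-point excursions outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Note that the angle is scale-invariant: K and 2K point in the same direction and give θ = 0, so the loss measures the color cast rather than overall brightness.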
        Many effective color correction algorithms have been proposed for a single remote sensing optical image. However, these methods require prior knowledge or manual experience, which makes them infeasible for automatic color correction of massive volumes of remote sensing images. In this work, a method named DCN (dense convolutional networks) is proposed for automatic color correction of remote sensing optical images. The model predicts a color correction parameter K for each RGB channel and uses it to correct the image. In our experiment, the model is trained on 3 000 crops of GF-2 remote sensing images in the TensorFlow framework, and the loss function is the angle between the predicted 3-channel K and the ground truth. Results show that the corrected images are in very good agreement with the ground truth and that DCN outperforms a color correction method based on a traditional CNN (convolutional neural networks). This method meets the demand for automatic color correction of large remote sensing datasets.
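Dense connectivity, the idea DCN adopts in place of plain stacked convolutions, can be illustrated with a toy numpy sketch. The random weights standing in for a learned 1×1 convolution, the layer count, and the growth rate are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def dense_block(x, num_layers=3, growth=4, seed=0):
    """DenseNet-style block: each layer consumes the concatenation of the
    block input and every earlier layer's output along the channel axis,
    so the channel count grows by `growth` per layer."""
    rng = np.random.default_rng(seed)
    for _ in range(num_layers):
        # random weights stand in for a learned 1x1 convolution
        w = 0.1 * rng.standard_normal((x.shape[-1], growth))
        y = np.maximum(x @ w, 0.0)           # ReLU activation
        x = np.concatenate([x, y], axis=-1)  # dense skip connection
    return x
```

With a 16-channel input, the block output has 16 + 3 × 4 = 28 channels; this reuse of all earlier feature maps is what allows "more connections with fewer layers" and fewer parameters than a comparably deep plain CNN.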
