A Spatial and Angular Resolution Reconstruction Method of Light-Field Image
  • Authors: ZHENG Xiang-xiang; ZHANG Zhi-an
  • Affiliation: School of Computer Science and Information Engineering, Hefei University of Technology
  • Keywords: Super-resolution; Light field; Convolutional neural network; Spatial resolution; Angular resolution
  • Journal: 电脑知识与技术 (Computer Knowledge and Technology), journal code DNZS
  • Publication date: 2017-04-25
  • Year: 2017; Volume: 13; Issue: 12
  • Pages: 177-179 (3 pages)
  • Article ID: DNZS201712078
  • CN: 34-1205/TP
  • Language: Chinese
Abstract
Light-field cameras capture 2D spatial and 2D angular information in a single shot. However, the spatial resolution of the images rendered from a light-field camera is relatively low, and the angular resolution cannot meet application requirements given the limited number of viewpoints. This paper presents a light-field super-resolution (SR) method that simultaneously enhances both the spatial and angular resolution of a light-field image using convolutional neural networks (CNNs). A spatial SR network first restores the high-frequency detail of each sub-aperture image; then three different angular SR networks, selected according to the position of the novel view relative to the sub-aperture images, synthesize new viewpoints between the super-resolved views. Experimental results demonstrate that, in terms of both visual quality and quantitative metrics, the proposed method outperforms state-of-the-art methods.
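The two-stage pipeline described in the abstract (spatial SR of every sub-aperture image, then synthesis of novel views between the super-resolved views) can be sketched structurally as below. This is only a data-flow sketch, not the paper's implementation: the spatial and angular CNNs are replaced by trivial placeholders (nearest-neighbour upsampling and two-view averaging), only horizontal novel views are inserted (the paper uses three networks for horizontal, vertical, and diagonal positions), and all function names and array shapes are illustrative assumptions.

```python
import numpy as np

def spatial_sr(view, scale=2):
    """Placeholder for the spatial SR network: nearest-neighbour
    upsampling stands in for the CNN, to show the tensor shapes only."""
    return np.repeat(np.repeat(view, scale, axis=0), scale, axis=1)

def angular_sr(view_a, view_b):
    """Placeholder for an angular SR network: average two neighbouring
    sub-aperture images to synthesize the in-between viewpoint."""
    return 0.5 * (view_a + view_b)

def reconstruct(light_field, scale=2):
    """light_field: 4D array (U, V, H, W) of sub-aperture images.
    Stage 1 super-resolves each view spatially; stage 2 inserts a novel
    view between every pair of horizontal neighbours."""
    U, V, H, W = light_field.shape
    # Stage 1: spatial super-resolution of every sub-aperture image.
    sr = np.stack([[spatial_sr(light_field[u, v], scale)
                    for v in range(V)] for u in range(U)])
    # Stage 2: angular super-resolution along the horizontal axis.
    out = []
    for u in range(U):
        row = [sr[u, 0]]
        for v in range(1, V):
            row.append(angular_sr(sr[u, v - 1], sr[u, v]))
            row.append(sr[u, v])
        out.append(row)
    return np.stack(out)

lf = np.random.rand(3, 3, 8, 8)          # 3x3 views of 8x8 pixels
result = reconstruct(lf)
print(result.shape)                      # (3, 5, 16, 16)
```

With a 3x3 light field of 8x8 views, each view grows to 16x16 and each row of 3 views becomes 5, illustrating how both resolutions increase in one pass.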
