Research on Depth-Image-Based View Synthesis
Abstract
Generating realistic scenes is a core problem in virtual reality research. Synthesizing the view from an arbitrary new viewpoint out of a few discrete images of an existing scene arose directly from this application need: view synthesis studies how to use two or more images of a scene to generate the image of that scene as seen from a new viewpoint.
     Depth-image-based view synthesis involves two main problems: acquiring the depth image and designing the synthesis algorithm. This thesis obtains depth information with computer-vision algorithms, covering camera calibration, fundamental-matrix estimation, stereo matching, and depth estimation. It first reviews current camera calibration methods and proposes an improved fast calibration method based on an LCD monitor. It then obtains sparse correspondences between the views by corner-based feature matching, estimates a robust fundamental matrix F with an improved M-estimator, and computes dense correspondences with a two-stage stereo matching scheme based on dual thresholds. Finally, the three-dimensional position of each matched point is computed from its correspondence information.
     Depth-image-based view synthesis is the focus of this thesis. Exploiting the continuity of surface depth, the global matching and ordering constraints along conjugate epipolar lines, and the image-boundary information implicit in the depth image through generalized disparity, the thesis proposes a view synthesis algorithm based on inverse mapping. The depth of each pixel of the new view is first obtained from a primary reference source view; inverse mapping then locates the corresponding pixel in the reference view, whose gray value is assigned to the new-view pixel, yielding the synthesized image. Because of occlusions and the expansion of projected regions, holes inevitably appear in the new view. Drawing on epipolar geometry, the thesis proposes an improved single-point inverse-mapping hole-filling algorithm that fills the holes using a small number of additional reference source views, producing a complete new view.
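The robust fundamental-matrix step above relies on M-estimation: correspondences with large residuals are down-weighted instead of being allowed to corrupt the fit. The thesis's improved M-estimator is not specified in this abstract, so the sketch below illustrates only the generic mechanism — iteratively reweighted least squares (IRLS) with Huber weights — on a toy line-fitting problem; all names and constants are illustrative.

```python
import numpy as np

def irls_fit(A, y, n_iter=20, c=1.345):
    """M-estimator via iteratively reweighted least squares (Huber weights).

    Minimizes sum_i rho(y_i - A_i @ p) for the Huber rho. The same weighting
    idea underlies robust fundamental-matrix estimation: outlier matches get
    small weights instead of dominating the least-squares fit.
    """
    p = np.linalg.lstsq(A, y, rcond=None)[0]        # ordinary LS start
    for _ in range(n_iter):
        r = y - A @ p
        s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)            # Huber weight function
        W = np.sqrt(w)
        p = np.linalg.lstsq(W[:, None] * A, W * y, rcond=None)[0]
    return p

# Toy data: line y = 2x + 1 with one gross outlier.
x = np.arange(10.0)
y = 2 * x + 1
y[7] = 40.0                                   # outlier
A = np.column_stack([x, np.ones_like(x)])
print(np.round(irls_fit(A, y), 3))            # close to [2, 1]
```

An ordinary least-squares fit of the same data would be pulled visibly toward the outlier; the Huber reweighting drives its influence toward zero over the iterations.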
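The final step of the depth-acquisition pipeline — recovering a point's 3-D position from its correspondence information — can be sketched with standard linear (DLT) triangulation, one common way to implement it. The projection matrices and the test point below are made-up values; in the thesis, the calibration and matching stages would supply the real inputs.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched point pair.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel coordinates of the match in each view.
    Returns the 3-D point in inhomogeneous coordinates.
    """
    u1, v1 = x1
    u2, v2 = x2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]               # null vector: smallest singular value
    return X[:3] / X[3]      # de-homogenize

# Toy setup: two unit-focal cameras translated 0.2 apart along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X_true = np.array([0.3, -0.1, 2.0])
x1 = X_true[:2] / X_true[2]                      # projection in view 1
x2 = (X_true - [0.2, 0, 0])[:2] / X_true[2]      # projection in view 2
X_hat = triangulate(P1, P2, x1, x2)
print(np.round(X_hat, 6))
```

With noise-free matches the linear solution is exact; with real matches the residual of this system is what the robust estimation and dense-matching stages try to keep small.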
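The inverse-mapping idea can be illustrated in the simplest possible setting — rectified views with purely horizontal disparity, and a disparity map already expressed in the new view's coordinates (in the thesis this depth is itself obtained from the primary reference view). The nearest-neighbor scanline fill at the end is a crude stand-in for the thesis's multi-view hole filling; every name here is illustrative, not the actual algorithm.

```python
import numpy as np

def backward_warp(ref, disp_new, fill_holes=True):
    """Synthesize a new view by inverse mapping on rectified images.

    ref      : 2-D grayscale reference image.
    disp_new : per-pixel horizontal disparity of the NEW view, i.e. the
               signed offset from a new-view pixel back to its source
               column in `ref`.
    Pixels whose source falls outside the reference image become holes
    (marked -1); optionally each hole is filled with the nearest valid
    pixel on the same scanline.
    """
    h, w = ref.shape
    out = np.full((h, w), -1.0)
    for y in range(h):
        for x in range(w):
            xs = int(round(x - disp_new[y, x]))   # inverse mapping
            if 0 <= xs < w:
                out[y, x] = ref[y, xs]            # copy gray value
        if fill_holes:
            holes = np.where(out[y] < 0)[0]
            valid = np.where(out[y] >= 0)[0]
            if valid.size:
                for x in holes:
                    out[y, x] = out[y, valid[np.abs(valid - x).argmin()]]
    return out

ref = np.tile(np.arange(8.0), (2, 1))   # columns 0..7 as intensities
disp = np.full((2, 8), 2.0)             # constant shift of 2 columns
new = backward_warp(ref, disp)
print(new[0])                           # → [0. 0. 0. 1. 2. 3. 4. 5.]
```

The two leftmost pixels have no source in the reference image — exactly the occlusion/expansion holes the abstract describes — and are filled here from the nearest valid neighbor, where the thesis instead consults additional reference views.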