Research on Stereo Image and Video Editing
Abstract
With the great success of the 3D movie "Avatar", stereo images and videos have become increasingly popular in recent years. A stereo (3D) image consists of two regular 2D images captured from the same scene at the same time but from two slightly different viewpoints. When a stereo image/video is displayed on a screen, a viewer wearing suitable viewing equipment sees only the left-view image with the left eye and only the right-view image with the right eye. After the visual system passes the simultaneously captured left and right images to the brain, the human brain fuses these two slightly different images to recover the depth information of the 3D scene. Because stereo images convey more visual information and produce a more realistic viewing experience, stereo images and videos are regarded as a main direction of the future development of images and videos.
     Although a large number of algorithms and software tools are available for processing 2D images/videos, tools for processing stereo images/videos are very limited. Editing stereo images/videos is more difficult than editing 2D images/videos, for three main reasons. First, it is difficult to obtain accurate and noise-free disparity/depth maps. Stereo matching algorithms attempt to compute pixel correspondences between different viewpoints, but despite many years of research their results are still unsatisfactory, especially for complex natural scenes. Even if we use a depth camera to capture the depth of a real scene, producing high-resolution and noise-free depth maps remains difficult, because existing depth cameras output low-resolution depth images and are bulky and expensive. Second, when editing a stereo image, it is difficult to guarantee the consistency between the left and right views. This consistency in the resulting stereo image is critical for minimizing image distortion and producing high-quality results. In practice, the left and right images often need to be processed simultaneously to ensure it, for example by placing both images in a single global optimization. As a result, stereo image/video processing algorithms are usually more complex than their 2D counterparts and require higher computational and memory costs. Third, we need to guarantee the consistency of motion and depth across neighboring frames of a stereo video, in order to eliminate possible jitter between neighboring frames of the output video. In this thesis, we discuss the fundamental problems of stereo image editing and attempt to resolve these technical difficulties in order to provide efficient stereo image/video editing algorithms. We mainly present the following three methods for stereo image and video editing.
     First, we propose a novel depth adjustment method for stereo videos. Almost all 3D movies are shot primarily for playback on large cinema screens, with the audience sitting at some distance from the screen, and the depth range of the target video is computed accordingly. If such a stereo video is played on a 3D TV, a computer monitor or a mobile phone, its original depth range is greatly compressed, which seriously degrades the stereoscopic effect and hinders the distribution and enjoyment of stereo content on small digital mobile devices. We therefore propose a linear depth mapping method to adjust the depth range of a stereo video. Our method computes the actual depth range during playback from the viewing parameters, such as the screen size and resolution and the distance from the viewer to the screen. It also takes characteristics of human stereo vision into account, such as the importance of the relative depth among objects to human depth perception, and the sensitivity of the human eye to the bending of straight lines and planes. The proposed method minimizes the distortion of image content mainly by preserving the relative depth between neighboring feature points and by preventing straight lines and planes in the image from bending. It preserves the spatial structure of the 3D scene contained in the stereo video so that the structure is not damaged by the change of depth range. Our method also preserves depth and motion coherence across neighboring frames: depth coherence ensures that the depth of objects changes smoothly between neighboring frames, and motion coherence ensures that the motion of objects in the left and right video sequences is smooth. Experimental results show that our method improves the stereoscopic effect of stereo videos and outputs high-quality results with minimal image distortion.
     Second, to obtain high-quality stereo depth mapping and other stereo image editing effects, we extend the shift-map algorithm to stereo image editing. We use a global optimization method that processes the left and right images simultaneously at the pixel level. Our method enforces the consistency between the left and right views and preserves the 3D scene structure conveyed by the image. In addition, it handles occlusion and disocclusion, which enables many stereo image editing operations, such as stereo depth mapping, adjusting the depth of objects in a stereo image, and non-homogeneous image resizing. Experimental results show that all of these editing functions produce high-quality results.
     Third, we propose a method for generating infinite stereo panoramas. An infinite stereo panorama is a panoramic image generated by stitching images together, whose width can be extended indefinitely by continuously stitching in further stereo images. The stereo images used for stitching depict similar scenes, but may be captured at different geographic locations. Infinite stereo panoramas can be used to create interesting walkthrough scenes in virtual reality. The most important problem in generating an infinite stereo panorama is how to seamlessly stitch two stereo images together. Although there are many 2D image stitching methods, they may not handle stereo images well, because ensuring disparity consistency can be difficult. In this thesis, we propose a method for stitching stereo images. We first use a graph cut algorithm to find a pair of seams, along which the left and right images are stitched separately. When computing this pair of seams, we make the content on both sides of each seam as smooth as possible after stitching, suppressing possible visual artifacts. We then apply a warping-based disparity adjustment algorithm to further suppress depth jumps across the seams. Our method can generate high-quality infinite stereo panoramas, and experimental results demonstrate its effectiveness.
With the success of the 3D movie "Avatar", stereo videos have become very popular in recent years. In general, each stereo image contains two regular 2D images captured from the same scene at the same time but from slightly different viewing locations. When a stereo image/video is displayed on the screen, with appropriate devices, viewers see one regular 2D image/frame with the left eye and the other with the right eye. The human brain will then fuse the two images/frames together to produce 3D scene depth information. As stereo images can convey more visual information, stereo media are considered as one of the main research directions for future development.
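     The depth cue in a stereo pair comes from the horizontal disparity between corresponding pixels. As a brief reminder of the standard geometry (a textbook relation, not a formulation specific to this thesis), for a rectified camera pair with focal length $f$ and baseline $b$, a scene point whose two projections differ by disparity $d = x_l - x_r$ lies at depth

         Z = \frac{f \, b}{d},

     so nearer points have larger disparities, and the disparity range of a stereo pair determines the depth range that the viewer perceives.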
     Although there are a lot of tools available for editing traditional 2D images/videos, tools for editing 3D media are very limited. In general, editing and processing stereo images/videos is more difficult than editing 2D images/videos, due to three major reasons. First, it is difficult to obtain noise-free and accurate disparity/depth maps for stereo images/videos. Stereo matching methods, which aim at finding correspondences between pixels in the left and right images, generally do not perform very well, especially for stereo images of natural scenes. Even if we use a depth camera, it is still difficult to obtain high-resolution and noise-free depth maps from its low-resolution and noisy output. Second, it is difficult to ensure the spatial coherence between the left and right images of a stereo pair, which is very important for minimizing distortion and producing high-quality results. In practice, the left and right images usually need to be processed simultaneously in order to enforce this coherence, for example within a single global optimization. Thus, algorithms for processing stereo media are usually more complex than those for 2D media, with high computational and memory costs. Third, we need to ensure both motion and depth coherence across neighboring frames. In this thesis, we discuss the fundamental problems in stereo image and video editing, attempt to address these technical difficulties, and provide users with a number of methods for editing stereo images/videos. We mainly introduce the following three editing methods.
     First, we propose a novel depth mapping method for stereo videos. Most stereo videos are produced primarily for viewing on large screens located at some distance from the viewer. If we watch these videos on a small screen located near to us, the depth range of the videos will be seriously reduced, which can significantly degrade their 3D effects. To address this problem, we propose a linear depth mapping method to adjust the depth range of a stereo video according to the viewing configuration, including pixel density and the distance to the screen. We also consider characteristics of human binocular vision, such as the importance of the relative depth among objects to depth perception and the sensitivity of human eyes to the bending of straight lines and planes. Our method minimizes the distortion of stereo image content by preserving the relative depth of neighboring features and preventing lines and planes from bending. It also enforces motion and depth coherence across neighboring frames: depth coherence ensures smooth changes of the depth field across frames, while motion coherence ensures smooth content changes across frames. Our experimental results show that the proposed method improves the stereoscopic effects while maintaining the quality of the output videos.
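     To make the linear mapping step concrete, the following is a minimal Python sketch of the basic idea: derive a target disparity range from the viewing configuration and remap the source disparities linearly into it. The function names, the angular comfort rule and the symmetric range are illustrative assumptions; the thesis method additionally preserves relative depth between neighboring features, protects lines and planes, and enforces temporal coherence, none of which is modeled here.

         import numpy as np

         def comfort_disparity_range(viewing_distance_m, screen_width_m, screen_width_px,
                                     comfort_angle_deg=1.0):
             # Convert an angular-disparity comfort limit (about 1 degree is a common
             # rule of thumb; an assumption here, not the thesis's criterion) into an
             # on-screen disparity budget in pixels.
             pixels_per_meter = screen_width_px / screen_width_m
             max_disp_m = viewing_distance_m * np.tan(np.radians(comfort_angle_deg))
             max_disp_px = max_disp_m * pixels_per_meter
             return -max_disp_px, max_disp_px  # symmetric pop-out/recede budget (assumption)

         def linear_disparity_mapping(disparity, target_range):
             # Linearly remap all disparities into the target range. This baseline
             # ignores the feature-, line/plane- and temporal-coherence terms of the thesis.
             d_min, d_max = float(disparity.min()), float(disparity.max())
             t_min, t_max = target_range
             scale = (t_max - t_min) / max(d_max - d_min, 1e-8)
             return t_min + (disparity - d_min) * scale

         # Example: retarget a cinema-oriented disparity map to a laptop screen
         # viewed from 0.6 m (all numbers are only illustrative).
         disparity = np.random.uniform(-40.0, 80.0, size=(480, 854))
         target = comfort_disparity_range(0.6, 0.345, 1920)
         remapped = linear_disparity_mapping(disparity, target)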
     Second, in order to obtain high-quality depth mapping and other stereo editing effects, we extend the shift-map method to stereo image editing. Our method simultaneously processes the left and right images at the pixel level using a global optimization algorithm. It enforces photo consistency between the two images and preserves 3D scene structures. It also addresses the occlusion and disocclusion problems, which enables many stereo image editing functions, such as depth mapping, object depth adjustment and non-homogeneous image resizing. Our experimental results show that the proposed method produces high-quality results across a number of editing functions.
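     For reference, in the original 2D shift-map formulation each output pixel $p$ is assigned a shift label $M(p)$ pointing into the input image, and the labeling is chosen by graph cuts to minimize an energy of the general form

         E(M) = \sum_{p} E_d\bigl(M(p)\bigr) + \lambda \sum_{(p,q)\in\mathcal{N}} E_s\bigl(M(p), M(q)\bigr),

     where the data term $E_d$ encodes the editing constraints (for example, target size or user-specified changes) and the smoothness term $E_s$ penalizes visible discontinuities between neighboring shifts. The stereo extension described above solves such a labeling over the left and right views jointly, with additional terms coupling corresponding pixels; the exact terms used are those of the thesis and are not reproduced here.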
     Third, we propose a method for creating infinite stereo panoramas. An infinite stereo panorama is a panoramic image that can be infinitely extended by continuously stitching together stereo images that depict similar scenes but may be taken from different geographic locations. It can be used to create interesting walkthrough environments. An important issue underlying this application is how to seamlessly stitch two stereo images together. Although many methods have been proposed for stitching 2D images, they may not work well on stereo images, due to the difficulty of ensuring disparity consistency. In this thesis, we propose a novel method to stitch two stereo images seamlessly. We first apply the graph cut algorithm to compute a seam for stitching, with a novel disparity-aware energy function that both ensures disparity continuity and suppresses visual artifacts around the seam. We then apply a modified warping-based disparity scaling algorithm to suppress the seam in the depth domain. Our experimental results show that the proposed stitching method is capable of producing high-quality infinite stereo panoramas.
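     As a rough illustration of the disparity-aware seam idea, the sketch below scores each pixel in the overlap of two stereo views by both color and disparity mismatch and then extracts a minimal-cost vertical seam. Dynamic programming is used here only to keep the example short; the thesis computes the seam pair with graph cuts over both views jointly, and the cost weight and function names below are assumptions.

         import numpy as np

         def seam_cost(overlap_a, overlap_b, disp_a, disp_b, alpha=0.5):
             # Per-pixel cost of cutting through the overlap region: penalize both
             # color mismatch and disparity mismatch (alpha is an illustrative weight).
             color_diff = np.linalg.norm(overlap_a.astype(float) - overlap_b.astype(float), axis=2)
             disp_diff = np.abs(disp_a.astype(float) - disp_b.astype(float))
             return (1.0 - alpha) * color_diff + alpha * disp_diff

         def min_vertical_seam(cost):
             # Minimal-cost vertical seam by dynamic programming (a simplification of
             # the graph-cut seam used in the thesis).
             h, w = cost.shape
             acc = cost.astype(float).copy()
             for y in range(1, h):
                 left = np.concatenate(([np.inf], acc[y - 1, :-1]))
                 right = np.concatenate((acc[y - 1, 1:], [np.inf]))
                 acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
             seam = np.zeros(h, dtype=int)
             seam[-1] = int(np.argmin(acc[-1]))
             for y in range(h - 2, -1, -1):
                 x = seam[y + 1]
                 lo, hi = max(x - 1, 0), min(x + 2, w)
                 seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
             return seam  # seam[y] = column where the cut passes in row y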