Research on Depth Map Optimization and Depth Coding Methods for 3D Video
Abstract
The essential difference between 3D video and 2D video is that 3D video adds depth information on top of 2D, producing a stereoscopic viewing experience and representing natural scenes more realistically. Depth is a key geometric quantity in 3D scene capture: it describes the distance from objects in the scene to the imaging plane. Compared with texture images, depth maps not only require fewer bits but also allow virtual views at different viewpoints to be rendered flexibly with depth-image-based rendering (DIBR). Moreover, with the rapid development of depth camera technology in recent years, affordable depth cameras provide a direct and fast way to acquire depth maps. Depth-based 3D video is therefore an effective and practical solution, both in theory and in application. This dissertation focuses on the depth map in 3D video and studies depth map optimization and efficient depth coding. The first part concentrates on acquiring high-quality depth maps, a prerequisite for the subsequent work; the remaining three parts exploit the characteristics of depth maps to develop new depth coding methods that improve depth coding efficiency while preserving the quality of the synthesized views.
     Although depth cameras can capture scene depth conveniently and quickly, current devices are limited by the underlying technology: the resulting depth maps have low resolution and contain large holes, so they cannot be used directly in practical systems. This dissertation first analyzes the strengths and weaknesses of the two classes of statistical modeling methods used in image processing problems such as interpolation and denoising. Considering the characteristics of depth images, it combines the advantages of parametric and non-parametric models, exploits both the similarity across image resolutions and the self-similarity within an image, and proposes a hierarchical depth map optimization method based on a hybrid parametric model. The method fills depth holes scale by scale while preserving depth edges.
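As a rough, single-scale illustration of the weighted-kernel filling step, the sketch below fills holes in a depth map guided by a registered texture image; the window size, kernel widths, and the use of a grayscale guide are illustrative assumptions and do not reproduce the full multi-scale hybrid model.

```python
import numpy as np

def fill_depth_holes(depth, gray, win=7, sigma_s=3.0, sigma_c=10.0):
    """Fill zero-valued holes in a depth map with a texture-guided weighted kernel.

    Weights combine spatial closeness and grayscale similarity in the registered
    texture image, so depth is propagated from neighbours that likely belong to
    the same surface. Window size and sigmas are illustrative assumptions.
    """
    h, w = depth.shape
    filled = depth.astype(np.float64).copy()
    r = win // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))

    for y, x in np.argwhere(depth == 0):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        d_patch = depth[y0:y1, x0:x1].astype(np.float64)
        g_patch = gray[y0:y1, x0:x1].astype(np.float64)
        sp = spatial[(y0 - y + r):(y1 - y + r), (x0 - x + r):(x1 - x + r)]
        color_w = np.exp(-((g_patch - float(gray[y, x]))**2) / (2.0 * sigma_c**2))
        valid = (d_patch > 0).astype(np.float64)   # only trust known depth samples
        wgt = sp * color_w * valid
        if wgt.sum() > 1e-8:
            filled[y, x] = (wgt * d_patch).sum() / wgt.sum()
    return filled
```

With a registered grayscale texture image, `fill_depth_holes(depth, gray)` replaces each zero-valued hole with a texture-guided weighted average of its valid neighbours; pixels whose neighbourhoods contain no valid samples are left for coarser scales.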
     A depth map is not displayed; it is used to synthesize a new view. Quantization errors introduced by depth coding therefore distort the synthesized view, and depth coding distortion should be measured by the distortion of the synthesized view. From this perspective, this dissertation explores a depth coding method that minimizes the synthesized view distortion. By deriving the relationship between depth distortion and geometry distortion, and between geometry distortion and synthesized view distortion, a model linking depth distortion to synthesized view distortion is established and applied to depth coding and joint rate allocation. Experimental results show that, compared with existing methods, the proposed method allocates texture and depth bit rates more reasonably and achieves higher view synthesis quality.
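To make the chain from depth error to geometry error concrete, a commonly used form of the relation is sketched below, written under the assumption of rectified, 1D-parallel cameras and 8-bit depth quantization between Z_near and Z_far; the exact model derived in this dissertation may differ in its details.

```latex
% A depth-value error \Delta v at a pixel shifts its warped position by
\Delta p \;=\; \frac{f\,l}{255}\left(\frac{1}{Z_{\mathrm{near}}}-\frac{1}{Z_{\mathrm{far}}}\right)\Delta v ,
% where f is the focal length and l the camera baseline. The resulting
% synthesized-view distortion can then be approximated from the local
% texture variation over that shift:
D_{\mathrm{syn}} \;\approx\; \mathbb{E}\!\left[\bigl(I_{t}(x+\Delta p)-I_{t}(x)\bigr)^{2}\right].
```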
     Depth distortion causes distortion in the synthesized view, and this distortion usually appears along object edges. Traditional MSE-based distortion measures treat every pixel equally and do not faithfully reflect synthesized view quality. This dissertation introduces the structural similarity index (SSIM), which better matches human visual perception, into depth coding and investigates the view synthesis optimization (VSO) problem in depth coding. It first establishes a model relating depth coding distortion to SSIM-based synthesized view distortion; this model is applied to the rate-distortion optimization of depth coding, yielding a rate-distortion model between the depth coding bit rate and the synthesized view distortion; an SSIM-based perceptual Lagrange multiplier is then estimated to guide optimal mode selection in depth coding. Experimental results show that the proposed SSIM-based view synthesis optimization outperforms the mean squared error (MSE)-based and JM view synthesis optimization methods in both rate-distortion performance and subjective quality.
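A minimal sketch of SSIM-driven mode selection follows: each candidate depth coding mode is scored by J = (1 − SSIM) + λ·R against the view synthesized from the uncompressed depth. The window size, stability constants, and the way candidate views, rates, and the perceptual Lagrange multiplier are obtained are assumptions for illustration and do not reproduce the estimation procedure of this dissertation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mean_ssim(x, y, win=8, c1=(0.01 * 255)**2, c2=(0.03 * 255)**2):
    """Mean SSIM between two 8-bit images, computed over local windows."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    mx, my = uniform_filter(x, win), uniform_filter(y, win)
    vx = uniform_filter(x * x, win) - mx**2
    vy = uniform_filter(y * y, win) - my**2
    cxy = uniform_filter(x * y, win) - mx * my
    s = ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
    return s.mean()

def choose_mode(candidates, reference_view, lam_ssim):
    """Pick the coding mode minimising J = (1 - SSIM) + lambda * R.

    `candidates` is a hypothetical list of (synthesized_view, rate_bits) pairs,
    one per candidate depth coding mode; `reference_view` is the view
    synthesized from the uncompressed depth map.
    """
    costs = [(1.0 - mean_ssim(v, reference_view)) + lam_ssim * r
             for v, r in candidates]
    return int(np.argmin(costs))
```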
     A depth map is smooth over most regions and contains discontinuities only at object boundaries, so it has stronger spatial correlation than ordinary natural images. Exploiting this property, this dissertation proposes a spatial down/up-sampling depth coding method. Down-sampling greatly reduces the amount of input data at the encoder and lowers the coding bit rate, but it discards edge detail in the depth map and degrades the quality of the synthesized view. Using the invariance of statistical characteristics between a high-resolution image and its low-resolution counterpart, a covariance-estimation-based depth up-sampling model is designed; using the edge similarity between the depth map and its corresponding texture image, an adaptive weight model is designed so that the up-sampling coefficients adapt to preserve depth edges in all directions.
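The proposed up-sampling filter estimates its coefficients from local covariances with texture-adaptive weights; as a simplified, texture-guided stand-in rather than that filter itself, the sketch below performs joint bilateral up-sampling of a decoded low-resolution depth map using the high-resolution texture as guide. The neighbourhood radius and kernel widths are illustrative assumptions.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, gray_hr, scale, sigma_s=1.0, sigma_c=12.0):
    """Upsample a low-resolution depth map guided by the high-resolution texture.

    For each high-resolution pixel, low-resolution depth samples are weighted by
    spatial closeness and by grayscale similarity in the texture image, so that
    interpolation does not blur across object boundaries.
    """
    H, W = gray_hr.shape
    h, w = depth_lr.shape
    out = np.zeros((H, W), dtype=np.float64)
    r = 2  # neighbourhood radius, in low-resolution samples
    for Y in range(H):
        for X in range(W):
            y, x = Y / scale, X / scale          # position in the LR grid
            yc, xc = int(round(y)), int(round(x))
            acc, norm = 0.0, 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = yc + dy, xc + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        ws = np.exp(-((yy - y)**2 + (xx - x)**2) / (2 * sigma_s**2))
                        Yg = min(H - 1, int(round(yy * scale)))
                        Xg = min(W - 1, int(round(xx * scale)))
                        wc = np.exp(-((float(gray_hr[Yg, Xg]) - float(gray_hr[Y, X]))**2)
                                    / (2 * sigma_c**2))
                        acc += ws * wc * depth_lr[yy, xx]
                        norm += ws * wc
            out[Y, X] = acc / max(norm, 1e-8)
    return out
```

As in the covariance-based filter, the texture-similarity term suppresses contributions from across object boundaries, which is what preserves depth edges in the synthesized views.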
     This work is an exploration of depth-based video coding and provides new ideas and solutions for the development of depth information and the application of 3D video.
3D video is an emerging new medium for rendering dynamic real-world scenes. Compared with traditional 2D video, 3D video is a natural extension in the spatial-temporal domain, as it provides a depth impression of the observed scenery. Besides the 3D sensation, 3D video also allows interactive selection of the viewpoint and view direction within the captured range. An attractive 3D video representation is the multi-view video plus depth (MVD) format. With the help of depth maps, many interesting applications such as glasses-free 3D video, free-viewpoint television (FTV), and gesture/motion-based human-computer interaction become possible. However, MVD produces a vast amount of data to be stored or transmitted, so efficient compression techniques for MVD are vital for achieving a high-quality 3D visual experience under constrained bandwidth. Consequently, efficient depth coding is one of the key issues in 3D video systems.
     Depth maps are used to synthesize virtual views at the receiver side, so accurate depth maps should be estimated efficiently to ensure seamless view synthesis. Although depth cameras provide depth data conveniently, depth maps from structured-light cameras contain holes owing to their inherent limitations. In this dissertation, we propose a hybrid multi-scale hole-filling method that combines the modeling strengths of parametric and non-parametric filters. We progressively recover the missing areas in scale space from coarse to fine, so that the sharp edges and structure information at the finest scale are eventually recovered. For the inter-scale step, we present a novel linear autoregressive depth up-sampling algorithm that considers the edge similarity between depth maps and their corresponding texture images as well as the structural similarity within depth maps. For the intra-scale step, we propose a weighted kernel filter for hole filling based on a weighted cost function determined by a joint multilateral kernel. This method removes artifacts, smooths depth maps in homogeneous regions, and improves accuracy near object boundaries.
     A key observation is that the depth map is encoded but not displayed; it is only used to synthesize intermediate views. The distortion in the depth map indirectly affects the synthesized view quality, so depth map coding aims to reduce the depth bit rate as much as possible while ensuring the quality of the synthesized view. In this dissertation, we propose a depth map coding method based on a new distortion measurement, obtained by deriving the relationship between the distortion in the coded depth map and that in the synthesized view. We first analyze the relationship between depth map distortion and geometry error by mathematical derivation, and then build a model describing the relationship between geometry error and synthesized view distortion. Based on these two relationships, the synthesized view distortion caused by depth map coding is estimated.
     More specifically, the distortion that depth coding induces in the synthesized view lies mostly along object boundaries, where it is significant to the human eye. However, mean squared error (MSE) and similar quality metrics are poorly correlated with human perception. In this dissertation, we adopt structural similarity as the quality metric in depth map coding. We develop a structural similarity-based synthesized view distortion (SS-SVD) model to capture the effect of depth map distortion on the final quality of the synthesized views. The model is applied to rate-distortion optimization, which describes the relationship between the depth coding bit rate and the synthesized view distortion, for depth map coding mode selection. Experimental results show that the proposed SS-SVD method achieves both better rate-distortion performance and better perceptual quality of synthesized views than the JM reference software.
     Depth maps generally have more spatial redundancy than natural images. This property can be exploited by compressing a down-sampled depth map at the encoder. In this dissertation, we present an efficient down/up-sampling method to compress the depth map. A novel edge-preserving depth up-sampling method is proposed that uses both texture and depth information. We take into account the edge similarity between depth maps and their corresponding texture images as well as the structural similarity within depth maps to build a weight model. Based on the weight model, the optimal minimum mean square error (MMSE) up-sampling coefficients are estimated from the local covariance coefficients of the down-sampled depth map. The up-sampling filter is combined with HEVC to increase coding efficiency. Objective results and subjective evaluation show that the proposed method achieves better quality in synthesized views than existing methods.
