Research on 3-D Display Information Reconstruction and Evaluation Methods
Abstract
The world we live in is a real, solid, three-dimensional world. From the days when people recorded the images around them, or in their minds, with paintings, through the birth of the camera and then of the video camera, the pursuit of recording has never stopped. When the barrier of flat, two-dimensional recording was thoroughly shaken by the impact of "Avatar", 3-D display truly became a household name and a new popular pursuit in the display field.
     The basic principle of 3-D display is that the different images seen by the left and right eyes form stereoscopic vision in the brain. The core problems of 3-D display are therefore how to generate the left- and right-eye images and how to ensure that each of the viewer's eyes sees only its own image. When both eyes view an object, binocular parallax causes them to receive images from slightly different angles; after the brain fuses these images, the viewer perceives the object in depth.
     After 3-D display delivered its visual shock at the start of the 21st century, people were eager to enjoy more of it. Large-scale application, however, is still beyond current 3-D display technology, and the biggest reason is the severe shortage of 3-D content. High-precision 3-D cameras can capture any desired 3-D scene and can be used for film and television production, but obtaining 3-D video this way suffers from high cost and long production cycles and is hard to popularize. At the same time, faced with abundant and easily produced traditional 2-D images and videos, people also wish to revisit the same content with a 3-D effect. The idea of 3-D display information reconstruction was born from this.
     3-D display information reconstruction, i.e. 2D-to-3D conversion, uses software algorithms to convert 2-D images or videos into images and videos with an acceptable 3-D effect. The conversion relies on a depth image sequence (the depth map), which carries the information that distinguishes 3-D display from 2-D display. Once a 2-D image is paired with depth information, it can be converted into a 3-D image through depth-image-based rendering (DIBR). In theory, this approach offers a feasible way to break the content bottleneck that currently constrains the 3-D display industry.
     Researchers have worked on this problem since the start of the 21st century and have made encouraging progress. Building on their work, this thesis first reviews the history of 3-D display, then introduces the background knowledge required for our research: the basic mechanisms of human stereo vision and the principles and classification of 3-D display technology. Stereo vision is introduced in two parts, monocular depth cues and binocular-parallax depth perception; display technology is surveyed in the order of glasses-type 3-D display, naked-eye 3-D display and generalized 3-D display, summarizing the current effective techniques in each category.
     Next, the thesis comprehensively surveys the current state and open problems of 3-D display information reconstruction in industry and points out the practical significance of the technique. Through comparative classification, it establishes the position and significance of 2D-to-3D conversion among all methods of obtaining 3-D video. It introduces the concept of the depth map, describes depth-image-based rendering (DIBR) in detail, and, from the standpoint of the practical needs of 2D-to-3D conversion, analyses the required video-processing tools and their research status, including image edge detection, image segmentation, pattern recognition and video tracking.
     Following the logic of rendering 3-D images from depth maps, the third part of the thesis presents an exploratory video-processing system for 2D-to-3D image conversion. The image is first pre-processed, including 2-D image acquisition and bilateral filtering; experiments show that this bilateral-filter pre-processing effectively reduces the error rate of the subsequent conversion steps. We then segment the pre-processed image with the K-means clustering algorithm, separating foreground from background in preparation for depth assignment. To assign depth to the clustered pixels more accurately, we propose an empirical image-layout model that gives each object region in the image its own depth value. Experiments show that the proposed approach is a feasible 2D-to-3D method and lays a theoretical foundation for later research.
     We further propose a semi-automatic, highly interactive system for extracting the key information of 3-D video from 2-D video, within which we propose an object-tracking-based algorithm that automatically generates the depth maps of non-key frames. The system can generate the depth information needed to produce 3-D video. Key frames are first extracted from the video and segmented with the lazy-snapping algorithm; the main objects in each key frame are labelled interactively and assigned depth values; the depth maps of the non-key frames are then generated automatically from the key-frame depth maps by the proposed tracking-based algorithm, until the depth image sequence for the whole 2-D video is complete. At the end of the chapter we propose a 2D-to-3D system framework. Experiments show that the tracking-based algorithm produces correct results and generates good depth maps for the corresponding 2-D images.
     Alongside the 2D-to-3D algorithms and systems, and considering that the 3-D display industry is still in its promotion stage, we also study and propose a comprehensive test method for stereoscopic television image quality. In view of the industry's present needs, the measurement conditions and methods target the stereoscopic image quality of glasses-type flat-panel stereo TV sets and flat-panel stereo monitors. We give comprehensive requirements covering working conditions, presentation of results, environmental conditions, power supply, stabilization time, test site, measurement position, signal format, test signals, measurement points and items, test instruments and the standard test state, and propose standard measurement methods for reference. The results have been formed into an industry-standard text.
     In addition, based on the concept of measurement uncertainty, we study the measurement uncertainty of set-top-box level tests performed with the SFU broadcast test signal generator, widely recognized in the field as producing near-standard video signals, and propose Type A and Type B uncertainty evaluation methods for it. The proposed methods and experimental results can serve as a basis for evaluating the measurement uncertainty of stereoscopic television image-quality assessment.
     Finally, to meet the needs of industrial development, we propose a home-network system architecture and reference model applicable to homes, or similar indoor venues, that require 3-D display and high-definition interaction, together with a network-convergence solution based on indoor power-line carrier. The related results have been submitted for approval as a national standard.
The world we live in is a real, three-dimensional (3-D) space. From ancient paintings recording the images around us or in our minds, to the birth of the camera and then of the video recorder, people have never stopped pursuing more vivid and lively records of the world. When "Avatar" shook up the notion of traditional 2-D images, 3-D display became a household name and a new popular pursuit in the display area.
     After enjoying the stunning visual impact of 3-D images at the beginning of the 21st century, people are eager for more. However, large-scale use of 3-D display is still a challenge, and the scarcity of 3-D film sources is the most important reason. High-precision 3-D cameras can shoot any scene and be used for 3-D film and television production, but they are hard to spread widely due to high cost and long production cycles. At the same time, faced with abundant and easily produced 2-D images and videos, people also want to relive the same content with a 3-D effect. Hence the technology of 3-D display information reconstruction emerged.
     3-D display information reconstruction, i.e. 2D-to-3D conversion, converts 2-D images or videos into acceptable 3-D images and videos through software algorithms. The conversion uses a depth image sequence (the "depth map"), which carries the information that separates 3-D display from 2-D display. Given a depth map, a 2-D image can be converted into a 3-D image by depth-image-based rendering (DIBR). Theoretically, this is a practical approach to breaking the bottleneck of 3-D industry development, the lack of sources.
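The DIBR idea can be sketched in a few lines: each pixel of the 2-D image is shifted horizontally by a disparity derived from its depth to synthesize a second view. This is a minimal illustration, not the thesis's implementation; the function name and the linear depth-to-disparity mapping are our own assumptions, and a real pipeline would also inpaint the disocclusion holes.

```python
import numpy as np

def dibr_render(image, depth, max_disparity=16):
    """Synthesize a second (right-eye) view from a 2-D image and its depth map.

    image: (H, W, 3) array; depth: (H, W) array in [0, 1], 1.0 = nearest.
    Pixels disoccluded by the shift are left at 0 here; a real DIBR
    pipeline would fill such holes by inpainting.
    """
    h, w = depth.shape
    right = np.zeros_like(image)
    # Nearer pixels receive larger horizontal parallax shifts.
    disparity = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                right[y, nx] = image[y, x]
    return right
```

With a constant-zero depth map the synthesized view equals the input; depth gradients translate directly into parallax between the two views.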
     Related research started at the very beginning of the 21st century, and there has been much encouraging progress in this field. Chapter 1 of this thesis reviews previous research on 3-D technology, then introduces the background knowledge for our work, including the basic principles of human stereo vision and the principles and classification of 3-D display technology. Depth perception is covered in two parts, monocular depth perception and binocular-parallax depth perception, and display technology is analysed in three categories: glasses-type 3-D display, naked-eye 3-D display and generalized 3-D display.
     In Chapter 2, this thesis summarizes the current research status of 3-D display information reconstruction and the industry's problems at this stage, and points out the important position and practical significance of 2D-to-3D conversion within the whole 3-D video capture process. The chapter introduces the concept of the depth map, describes depth-image-based rendering (DIBR) in detail, and analyses the video-processing tools needed for 2D-to-3D conversion and their research status, including image edge detection, image segmentation, pattern recognition and video tracking.
     Following the logic of depth-map-based 3-D rendering, Chapter 3 describes our exploration of a new 2D-to-3D conversion system. The first step is pre-processing, including 2-D image acquisition and bilateral filtering; experiments show that the bilateral filtering used here effectively lowers the error rate of the subsequent conversion steps. Next, we classify the pre-processed image with the k-means clustering algorithm, obtaining an image whose foreground and background are separated and ready for depth assignment. To assign depth values more accurately, we propose an empirical layout model for the clustered image. Experimental results show that the proposed method is a feasible route from 2-D to 3-D and lays the groundwork for future research.
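The Chapter 3 pipeline (clustering, then depth assignment from a layout model) can be sketched roughly as follows. This is our own simplified stand-in, not the thesis's implementation: it assumes two colour clusters, uses a "cluster dominating the bottom row is foreground" heuristic in place of the empirical layout model, and omits the bilateral-filter pre-processing.

```python
import numpy as np

def kmeans_labels(pixels, k=2, iters=10):
    """Tiny k-means over an (N, C) pixel array; returns one label per pixel.
    Centres start evenly spaced between the feature extremes, which keeps
    this toy example deterministic."""
    centers = np.linspace(pixels.min(axis=0), pixels.max(axis=0), k).astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :].astype(float) - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

def assign_depth(image):
    """Cluster into foreground/background, then apply a toy layout model:
    background depth ramps from far (top) to mid, foreground is near."""
    h, w, c = image.shape
    labels = kmeans_labels(image.reshape(-1, c)).reshape(h, w)
    # Heuristic: the cluster dominating the bottom row is the foreground.
    fg = labels == np.bincount(labels[-1]).argmax()
    depth = np.tile(np.linspace(0.0, 0.5, h)[:, None], (1, w))
    depth[fg] = 0.9
    return depth
```

The resulting depth map could then feed a DIBR step; the constants 0.5 and 0.9 are illustrative placeholders for the model's depth levels.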
     We also propose a semi-automatic, interactive system that captures the key information needed to build 3-D video from 2-D video. Within it, we propose an algorithm based on object tracking that automatically generates the depth maps of non-key frames; the system can thus generate the depth information that 3-D video generation requires. First, key frames are extracted from the video and segmented with the lazy-snapping algorithm; the main objects in each key frame are then labelled interactively and given depth values. For the non-key frames, depth maps are generated automatically from the key-frame depth maps by the proposed tracking-based algorithm, until the depth image sequence corresponding to the whole 2-D video has been generated. We propose a 2D-to-3D system framework at the end of this chapter. The experimental results show that the proposed tracking-based depth-generation algorithm gives correct results and produces good depth maps for the corresponding 2-D images.
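The tracking-based depth propagation can be illustrated with a deliberately naive sketch: exhaustive SSD block matching stands in for the thesis's tracker, and a translation-only motion model shifts the key frame's depth map onto a non-key frame. All function names and the motion model here are our own assumptions, not the thesis's algorithm.

```python
import numpy as np

def track_shift(key, frame, search=4):
    """Return the integer motion (dy, dx) of the content from `key` to
    `frame`, found by exhaustive SSD matching over a +/-`search` window
    (assumes small, purely translational motion)."""
    h, w = key.shape
    key = key.astype(float)
    pad = np.pad(frame.astype(float), search, mode='edge')
    best_ssd, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            crop = pad[search + dy:search + dy + h, search + dx:search + dx + w]
            ssd = float(((crop - key) ** 2).sum())
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_shift = ssd, (dy, dx)
    return best_shift

def propagate_depth(key_depth, dy, dx):
    """Shift the key frame's depth map by the tracked motion to obtain a
    non-key frame's depth map (translation-only model)."""
    return np.roll(np.roll(key_depth, dy, axis=0), dx, axis=1)
```

In practice a robust tracker and a richer motion model would replace the block matching, but the division of labour is the same: track the labelled object, then move its key-frame depth with it.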
     While researching 2D-to-3D algorithms and systems, and considering that the 3-D display industry is still in its promotion stage, we also propose a comprehensive method for measuring 3DTV image quality. In view of the actual needs of this stage of industrial development, we propose measurement conditions and methods for the stereoscopic image quality of glasses-type flat-panel stereo TV sets and flat-panel stereo monitors. The measurement requirements cover working conditions, presentation of results, environmental conditions, power supply, stabilization time, test site, measurement position, signal format, test signals, measurement points and items, measurement instruments and the standard test state, together with reference measurement methods. The research results have been formed into an industry-standard text.
     In addition, based on the concept of measurement uncertainty, this thesis studies the measurement uncertainty of set-top-box level tests performed with the SFU broadcast test system, an instrument widely regarded as able to generate near-standard video signals. We propose evaluation methods for its Type A and Type B measurement uncertainty. The evaluation methods and the experimental results can be applied to assessing the measurement uncertainty of 3-D television image-quality evaluation.
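The Type A and Type B evaluations mentioned here follow the standard GUM formulas (JCGM 100:2008): Type A from the scatter of repeated readings, Type B from assumed distribution limits. The sketch below shows only these generic formulas, not the thesis's specific evaluation of the signal generator; the distribution divisors are the standard ones.

```python
import math

def type_a_uncertainty(readings):
    """Type A: experimental standard deviation of the mean of n repeats."""
    n = len(readings)
    mean = sum(readings) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
    return s / math.sqrt(n)

def type_b_uncertainty(half_width, distribution="rectangular"):
    """Type B: standard uncertainty from limits +/-half_width, assuming a
    distribution for the error within those limits."""
    divisor = {"rectangular": math.sqrt(3),
               "triangular": math.sqrt(6),
               "normal_95": 1.96}[distribution]
    return half_width / divisor

def combined_uncertainty(*components):
    """Root-sum-square combination of uncorrelated standard uncertainties."""
    return math.sqrt(sum(u ** 2 for u in components))
```

For a level test, the Type A component from repeated level readings and the Type B component from the generator's specified level accuracy would be combined this way into the reported standard uncertainty.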
     Finally, drawing on actual industry application experience and to meet the needs of industrial development, we design a home-network system architecture and reference model suited to real home-network construction scenarios demanding 3-D display and high-definition interaction. We also propose a network-convergence solution based on an indoor Power-Line Carrier (PLC) network, which is already in the approval process for national standards.
