视频运动对象分割及码率分配与控制技术研究 (Research on Moving-Object Segmentation in Video and on Bit-Rate Allocation and Control)
Abstract
In today's society, people's demand for information has become the main driving force behind the development of information technology, and video, the most important form of information, together with its processing techniques, has made remarkable progress. The enormous volume of video data makes storage and real-time transmission extremely difficult and has become the principal bottleneck hindering the application of digital video technology, so efficient representation of video data and the associated rate-control techniques must be studied. A great deal of research has been devoted to efficient video representation, and two generations of coding technology have been introduced. The first generation, represented by MPEG-1 and MPEG-2, removes intra-frame and inter-frame redundancy and encodes the picture block by block; its greatest shortcoming is that it does not take the content structure of the video scene into account. Applications in multimedia communication and integrated network services require content-level manipulation and interactive control, which led to the second generation of compression coding, represented by MPEG-4. The video scene is first segmented into regions, each corresponding to a semantically meaningful video object, and the different video objects are then coded with different methods according to their characteristics. Such object-based video coding not only improves coding efficiency considerably but also allows users to operate on video data by content. Second-generation coding requires segmenting the video image into video objects, which in turn requires studying the motion, texture, shape and information content of the various video objects in the image. Content-based description of video scenes and rate control are the key to, and the foundation of, object-based coding and interactive manipulation and are of great theoretical and practical value; because the existing standards give no concrete specifications for automatic video object generation or for rate control, these problems are hot topics of frontier research.
    The inter-frame motion of a video image consists of global motion, local motion, or a combination of the two; in global motion estimation the foreground objects are referred to as outliers. If the local motion vectors at outlier positions take part in estimating the global motion vector, both the complexity and the accuracy of the estimation suffer, and this happens easily when the outlier regions occupy a large part of the scene; removing outliers is therefore essential for accurate global motion estimation. Existing outlier removal is usually performed with statistical methods; methods based on the ratio of temporal to spatial gradients from the optical-flow equation have also been proposed, but their errors are large and the results are poor. Exploiting the property that outliers in a video image tend to cluster into blocks, this thesis removes them with a pre-analysis based on sub-sampling and block matching of edge-feature images. The method can remove fairly large outlier regions, and the pre-analysis result also allows different global motion models to be chosen for different images, which improves the accuracy of the estimated global motion vector.
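    To make the pre-analysis concrete, the following Python/NumPy sketch illustrates the idea under stated assumptions: the gradient-magnitude edge operator, the block size, the search radius and the deviation threshold are choices made only for this example and are not the thesis's actual parameters. Blocks whose matched displacement deviates strongly from the dominant (median) displacement are marked as outlier blocks and would be excluded from the global motion estimation.

import numpy as np

def edge_map(img):
    # Simple gradient-magnitude edge feature (a stand-in for the edge-feature image).
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    gx[:, 1:-1] = img[:, 2:].astype(float) - img[:, :-2]
    gy[1:-1, :] = img[2:, :].astype(float) - img[:-2, :]
    return np.hypot(gx, gy)

def block_match(ref, cur, top, left, size, radius):
    # Exhaustive search for the displacement minimising the SAD of edge features.
    best, best_dv = np.inf, (0, 0)
    block = cur[top:top + size, left:left + size]
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + size, x:x + size] - block).sum()
            if sad < best:
                best, best_dv = sad, (dy, dx)
    return best_dv

def outlier_blocks(ref_frame, cur_frame, sub=2, size=16, radius=4, tol=2.0):
    # Mark blocks (in sub-sampled coordinates) whose motion deviates from the
    # dominant motion; these are treated as outlier (foreground) blocks.
    ref = edge_map(ref_frame[::sub, ::sub])
    cur = edge_map(cur_frame[::sub, ::sub])
    vecs, coords = [], []
    for top in range(0, cur.shape[0] - size + 1, size):
        for left in range(0, cur.shape[1] - size + 1, size):
            vecs.append(block_match(ref, cur, top, left, size, radius))
            coords.append((top, left))
    vecs = np.array(vecs, dtype=float)
    dominant = np.median(vecs, axis=0)                  # robust background motion
    dev = np.linalg.norm(vecs - dominant, axis=1)
    return [c for c, d in zip(coords, dev) if d > tol]  # outlier block positions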
    Methods for estimating the global motion parameters can generally be divided into three categories: methods based on pixel intensities in the spatial domain, methods based on spatial-domain visual features, and methods based on the transform domain. Among them, the methods based on spatial-domain visual features offer better generality, noise robustness, estimation accuracy and simplicity of feature description. This thesis proposes a global motion estimation method that uses multiple straight-line-segment features: after the outlier regions have been removed from the image sequence, the global motion parameters are estimated by extracting and comparing several line-segment features in the reference and current images. The method can estimate the translation and rotation parameters of the global motion with relatively low computational complexity and high estimation accuracy.
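    As an illustration of how translation and rotation can be recovered from matched line segments, the sketch below assumes that segment extraction and matching have already been performed (those steps are not shown); the use of the median orientation change and a least-squares fit of the segment midpoints is an assumption made for this example rather than the estimator derived in the thesis.

import numpy as np

def estimate_rotation_translation(ref_segments, cur_segments):
    # ref_segments, cur_segments: arrays of shape (N, 4) holding matched line
    # segments as (x1, y1, x2, y2). Returns (theta, tx, ty) such that the
    # current segments are approximately R(theta) applied to the reference ones plus t.
    ref = np.asarray(ref_segments, dtype=float)
    cur = np.asarray(cur_segments, dtype=float)

    # Rotation: change of segment orientation (a line's direction is ambiguous by pi).
    ang_ref = np.arctan2(ref[:, 3] - ref[:, 1], ref[:, 2] - ref[:, 0])
    ang_cur = np.arctan2(cur[:, 3] - cur[:, 1], cur[:, 2] - cur[:, 0])
    d = (ang_cur - ang_ref + np.pi / 2) % np.pi - np.pi / 2
    theta = np.median(d)                  # median for robustness to mismatches

    # Translation: least-squares fit on the segment midpoints after rotating them.
    mid_ref = (ref[:, :2] + ref[:, 2:]) / 2.0
    mid_cur = (cur[:, :2] + cur[:, 2:]) / 2.0
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = np.mean(mid_cur - mid_ref @ R.T, axis=0)
    return theta, t[0], t[1]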
    Motion detection is currently performed mostly with adjacent-frame differencing or optical-flow methods; the former has difficulty determining the contours of moving objects accurately, while the latter is computationally complex and highly sensitive to noise, and neither performs well for complex backgrounds or scenes with several moving objects. This thesis therefore proposes an improved three-frame double-difference algorithm, which uses several difference images to separate the moving-object information belonging to different frames and selects the binarisation threshold adaptively from the grey-level statistics of the difference images, thereby detecting the regions of motion change. The method is highly adaptive, general and robust to noise, and can detect and segment moving object regions effectively.
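    A minimal sketch of a three-frame double-difference detector is given below; the mean-plus-k-sigma rule is only an assumed stand-in for the adaptive threshold that the thesis derives from the grey-level statistics of the difference images.

import numpy as np

def adaptive_threshold(diff, k=2.0):
    # Binarise a difference image at mean + k*std of its grey levels (assumed rule).
    return diff > diff.mean() + k * diff.std()

def double_difference(prev_frame, cur_frame, next_frame, k=2.0):
    # Moving pixels of the middle frame are those that change both with respect
    # to the previous frame and with respect to the next frame.
    d1 = np.abs(cur_frame.astype(float) - prev_frame.astype(float))
    d2 = np.abs(next_frame.astype(float) - cur_frame.astype(float))
    return adaptive_threshold(d1, k) & adaptive_threshold(d2, k)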
    After global motion compensation, the difference image consists of residual-noise regions and motion-change regions, and detecting motion change amounts to separating the two. Starting from the bit structure of digital image data, the image can be decomposed into several bit planes, each of which carries different visual information and noise. On this basis, the thesis proposes a technique that first pre-classifies the individual bit planes and then combines them with AND/merging operations; it clearly suppresses noise, texture and other interference and detects the changed regions of the moving image. The bit-plane classification technique can also be applied to video data compression, encryption and similar tasks.
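    The sketch below shows bit-plane decomposition and a simple plane-wise comparison; which planes are treated as informative, and the rule that a pixel must differ in all selected planes, are assumptions made for this example and not the pre-classification and merging rules developed in the thesis.

import numpy as np

def bit_planes(img):
    # Decompose an 8-bit grey-level image into its 8 bit planes (plane 7 = MSB).
    img = np.asarray(img, dtype=np.uint8)
    return [(img >> k) & 1 for k in range(8)]

def change_mask(ref, cur, planes=(5, 6, 7)):
    # Compare only the higher (more informative) bit planes and keep pixels that
    # differ in all of them, so noise confined to the low-order planes is ignored.
    ref_planes = bit_planes(ref)
    cur_planes = bit_planes(cur)
    mask = np.ones(np.asarray(ref).shape, dtype=bool)
    for k in planes:
        mask &= ref_planes[k] != cur_planes[k]
    return mask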
    Because second-generation video coding introduces the concept of a video object, the problem of rate control for several video objects coded simultaneously arises. Building on a study of conventional rate-control methods and on rate-distortion theory, this thesis establishes a principle for allocating bits among video objects and proposes a corresponding rate-control algorithm, so that the limited total bit rate (channel bandwidth) is distributed efficiently among the video objects while the source QoS (rate-distortion performance) is guaranteed.
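    As a simplified illustration of bit allocation among several video objects, the sketch below distributes a frame's bit budget in proportion to an assumed complexity weight (MAD of the prediction error multiplied by object size); the weighting and the field names are illustrative only and do not reproduce the rate-distortion model used in the thesis.

def allocate_bits(total_bits, video_objects):
    # video_objects: list of dicts with keys 'mad' (mean absolute difference of
    # the prediction error) and 'size' (number of pixels in the object).
    # Returns a per-object bit budget proportional to the assumed complexity.
    weights = [vo['mad'] * vo['size'] for vo in video_objects]
    total_w = sum(weights) or 1.0
    return [total_bits * w / total_w for w in weights]

# Example: a small but highly active foreground object still receives a large share.
vos = [{'mad': 8.0, 'size': 20000},   # moving foreground object
       {'mad': 1.5, 'size': 80000}]   # almost static background object
print(allocate_bits(24000, vos))      # -> roughly [13714.3, 10285.7]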
    Each of the above research topics has been simulated on a PC, with good results. The theory and techniques studied in this dissertation are of important theoretical and practical reference value for object detection, recognition and segmentation in video sequences and for content-based compression and rate control in video coding.
In modern society the demand for information has become the main driving force behind the development of information technology, and video information and its processing techniques have made great progress. Because of its enormous data volume, video is difficult to store and to transmit in real time, which hinders the application of digital video technology, so research on the efficient representation of video data and on its rate control is urgently needed. The first generation of video coding, represented by MPEG-1 and MPEG-2, is based on blocks within a frame and on prediction between frames; although it greatly reduces the redundancy, it does not make use of content segmentation into video objects (VOs). With the development of interactive multimedia applications in multimedia communication and integrated network services, the second generation of video coding came into being, with MPEG-4 as its representative. The video scene is divided into regions, each corresponding to a semantically meaningful video object (VO), and different coding techniques are then applied to the different video objects according to their features. Such coding methods can greatly improve efficiency, and the user can operate on the video data according to its content. For second-generation coding it is necessary to analyse the motion, texture, shape and information content of the different video objects in the images. Automatic generation of VOs and their rate control are the key to object-based coding and interactive operation, yet the existing standards contain no concrete specifications for them; research on VO generation and on rate control for multiple VOs has therefore become a very active subject.
    In global motion estimation the moving-object regions are referred to as outliers. If the local motion vectors of the outliers take part in forming the global motion vector, the accuracy and complexity of the estimation are affected, especially when the outlier region covers a large part of the image, so outlier elimination is essential for accurate global motion estimation. Outliers are usually eliminated by statistical methods; a pre-analysis based on the ratio of temporal to spatial gradients has also been used in some papers, but its performance is poor. In this thesis a pre-analysis based on block matching of edge-feature images is adopted, exploiting the property that outliers tend to cluster into blocks in the image. In this way fairly large outlier regions can be eliminated and different global motion models can be used for different images, so the accuracy of the estimated global motion vector is greatly improved.
    Techniques for estimating the global motion fall into three categories: those based on pixel intensities, those based on visual features in the spatial domain, and those based on features in the transform domain. In terms of noise robustness, adaptability and estimation accuracy, the techniques based on spatial visual features are the best of the three. This thesis discusses a global motion estimation technique that uses multiple straight-line features; with it the global translation and rotation parameters can be estimated with good accuracy and a relatively simple algorithm.
    To extract moving objects from video sequences, adjacent-frame differencing and optical-flow methods are adopted extensively; the main drawbacks are that the outlines of moving objects are hard to detect precisely and that optical flow is computationally expensive and sensitive to noise. In this thesis an improved algorithm of double differences over three adjacent frames is proposed, with which the motion regions can be extracted effectively, with better adaptability and strong noise robustness.
    After global motion compensation, the difference image is composed of residual-noise regions and motion-change regions. Based on the data structure of the image, the image is divided into bit planes, each of which carries different visual information and noise; the bit planes are first pre-classified and then merged, which filters out noise and texture interference and detects the changed regions of the moving image. Finally, on the basis of rate-distortion theory, a principle for allocating bits among video objects and a corresponding rate-control algorithm are established, so that the limited total bit rate is distributed efficiently among multiple video objects. All the techniques above have been simulated on a PC with good results.
