Research on 2D Digital Character Generation Based on Visual Media Fusion
Abstract
Generating digital characters from visual media such as images, video, 3D models, and motion capture data has become an active research topic. A digital character is composed of three elements, texture, shape, and behavior, all of which originate from these visual media. Fusing the different media so that the strengths of each can be fully exploited, and thereby raising the efficiency and quality of digital character generation, has therefore emerged as a research direction that combines computer graphics, computer vision, and image/video signal processing.
     This thesis proposes a visual media fusion framework. It defines visual media as the four kinds of media that can be perceived by human vision, acquired with corresponding hardware devices, and edited, compressed/decompressed, and stored: images, video, motion capture data, and 3D models. Using this framework, the various strands of research in computer character animation are classified systematically from the perspective of visual media fusion. To improve the efficiency and quality of 2D digital character production, visual media fusion theory is introduced and a new approach to 2D digital character generation based on it is proposed.
     A robust, noise-resistant extraction algorithm for multi-style 2D digital characters is proposed. Based on the gray-level information of each frame, the image gradient is computed to detect the character's contour, and a pixel-filling algorithm is applied to the contour to obtain and extract the silhouette. Contour extraction evaluates the second derivative of the gray level at every pixel, approximated with the Laplacian of Gaussian (LoG) operator; a pixel at which the response crosses zero is marked as part of the boundary. Noise in the boundary image is removed by a convolution step.
     An adaptive 2D digital character fusion algorithm in a low-dimensional manifold space is proposed. The global manifold learning technique Isomap is adopted: based on the distance matrix, each node is connected to its nearest neighbors to build an undirected graph; shortest paths between all pairs of nodes are computed on this graph to obtain the shortest-path (geodesic) distance matrix; multidimensional scaling (MDS) is then applied to construct the low-dimensional embedding space. In this embedding the user specifies any two points as start and end, and a 2D digital character animation is synthesized from the cartoon frames contained in the shortest path between them.
     A 2D digital character fusion algorithm based on the constrained spreading activation network (CSAN) is proposed. The influence of different features, including Euclidean edge distance, Hausdorff edge distance, motion distance, color-histogram distance, and sub-region color distance, on the smoothness of 2D character animation is analyzed, and fuzzy relation analysis is introduced to compute the feature weights; the similarity between any two frames is then computed precisely and the CSAN is constructed; based on this model, a 2D digital character animation is generated from the trajectory input by the user.
     An interactive framework for automatic 2D digital character generation based on background computation is proposed. A CSAN is built over a library of 2D digital characters; the user draws an arbitrary trajectory in a 2D scene image and a new 2D character animation is generated automatically; the perspective scaling of the character within the scene is derived by computing the vanishing line, so that the animation can be composited with the scene. Because the user sketches the trajectory directly on the scene and thereby controls the character's motion in it, the method is flexible and easy to use.
In recent years, digital characters have been widely used in applications such as virtual reality, video games, animated films, and sports simulation. Their generation draws on several kinds of visual media, including images, video, 3D models, and motion capture data, and is therefore a cross-disciplinary problem of visual media fusion. In computer animation, a character is composed of texture, shape, and behavior, all of which are generally derived from these visual media. Handling visual media fusion is becoming an emerging research topic in computer graphics, computer vision, and image/video signal processing.
     This thesis defines images, video, motion capture data, and 3D models as visual media: media that can be perceived by human vision and that can be edited and stored. Based on this definition, we propose a visual media fusion framework that classifies the varied kinds of computer animation research from the point of view of visual media fusion.
     This thesis proposes a robust, noise-resistant segmentation algorithm based on Laplacian of Gaussian (LoG) edge detection to extract the character from the image precisely. The standard deviation σ of the Gaussian is tuned to extract the character's edge, and a convolution step removes noise so that a complete edge is obtained; a pixel-filling algorithm then extracts the whole character.
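     As a rough illustration of this extraction pipeline, the following Python sketch (assuming SciPy is available; the function names, the 3x3 denoising kernel, and the thresholds are illustrative, not the thesis implementation) filters a frame with a Laplacian of Gaussian, marks zero crossings as boundary pixels, removes isolated edge responses by convolution, and fills the closed contour to obtain the silhouette.

```python
import numpy as np
from scipy import ndimage

def extract_silhouette(gray, sigma=2.0):
    """Illustrative LoG-based character extraction (not the thesis code).

    gray  : 2D float array in [0, 1], one frame containing the character.
    sigma : standard deviation of the Gaussian inside the LoG filter.
    """
    # Laplacian of Gaussian: second derivative of the smoothed gray image.
    log = ndimage.gaussian_laplace(gray, sigma=sigma)

    # Zero crossings of the LoG response mark boundary pixels.
    signs = np.sign(log)
    edges = np.zeros_like(gray, dtype=bool)
    edges[:, :-1] |= (signs[:, :-1] * signs[:, 1:]) < 0   # horizontal sign change
    edges[:-1, :] |= (signs[:-1, :] * signs[1:, :]) < 0   # vertical sign change

    # Convolution-style denoising: drop edge pixels with too few edge neighbors.
    support = ndimage.convolve(edges.astype(np.uint8), np.ones((3, 3)))
    edges &= support > 2

    # Pixel filling: fill the closed contour to obtain the silhouette mask.
    silhouette = ndimage.binary_fill_holes(edges)
    return edges, silhouette
```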
     This thesis proposes an adaptive 2D digital character fusion algorithm in a low-dimensional space. Isomap is adopted for dimensionality reduction: an undirected neighborhood graph is built from the distance matrix, a shortest-path graph is constructed on it, and multidimensional scaling then yields the low-dimensional embedding space. In this space the animator specifies two points, the shortest path between them is computed, and the frames along that path generate the new 2D digital character animation.
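     A minimal sketch of the Isomap-plus-MDS embedding summarized above, assuming a precomputed pairwise frame-distance matrix D and a connected neighborhood graph; the function and parameter names are illustrative rather than the thesis implementation.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def isomap_embed(D, k=6, d=2):
    """Illustrative Isomap: D is an (n, n) matrix of pairwise frame distances."""
    n = D.shape[0]

    # k-nearest-neighbor graph: keep edges only to each frame's k closest frames.
    W = np.full((n, n), np.inf)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]       # skip index 0, the frame itself
        W[i, nbrs] = D[i, nbrs]
    W = np.minimum(W, W.T)                     # symmetrize: undirected graph (inf = no edge)

    # Geodesic distances = all-pairs shortest paths on the neighborhood graph.
    G = shortest_path(W, method="D", directed=False)   # assumes the graph is connected

    # Classical MDS on the geodesic distances gives the low-dimensional embedding.
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (G ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:d]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

     A new animation between two user-chosen frames then amounts to a single shortest-path query between their nodes on the same neighborhood graph, with the frames played back in path order.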
     This thesis proposes a controllable 2D digital character fusion algorithm based on a constrained spreading activation network (CSAN). The effect of each feature on frame similarity, including Euclidean edge distance, Hausdorff edge distance, motion distance, color-histogram distance, and sub-region color distance, is analyzed, and a fuzzy approach assigns the weight of each feature. After the pairwise similarities are computed precisely, the constrained spreading activation network is constructed and, driven by the trajectory sketched by the user, generates the new 2D digital character animation.
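     The per-frame similarity used to build the network combines the five distances with per-feature weights; a hedged sketch of that combination and of a much simplified, unconstrained spreading step follows, with the individual feature-distance matrices, the link threshold, and the decay factor as illustrative placeholders rather than the thesis formulation.

```python
import numpy as np

def combine_similarity(feature_distances, weights):
    """Combine per-feature distance matrices (each (n, n), scaled to [0, 1]) into one
    frame-similarity matrix using per-feature weights, e.g. from the fuzzy analysis."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize so the weights sum to 1
    combined = sum(wi * Di for wi, Di in zip(w, feature_distances))
    return 1.0 - combined                        # similarity = 1 - weighted distance

def build_activation_graph(similarity, threshold=0.7):
    """Keep only strong frame-to-frame links; activation spreads along these edges."""
    A = np.where(similarity >= threshold, similarity, 0.0)
    np.fill_diagonal(A, 0.0)                     # no self links
    return A

def spread(A, source, steps=3, decay=0.8):
    """A much simplified, unconstrained spreading pass from a source frame."""
    act = np.zeros(A.shape[0])
    act[source] = 1.0
    for _ in range(steps):
        incoming = A.T @ act / (A.sum(axis=0) + 1e-9)   # weighted average of neighbors
        act = np.maximum(act, decay * incoming)
    return act
```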
     This thesis proposes an automatic 2D digital character generation framework based on the constrained spreading activation network. A new animation is generated according to the path the user draws in the background image; after the vanishing line of the background is computed, the generated animation is composited into the background with the appropriate perspective scaling. The framework is convenient and flexible for users.
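     As a sketch of the perspective-scaling idea, in the spirit of single-view metrology: assuming the ground plane's vanishing line is roughly horizontal at image row y_horizon, the image height of a character of fixed real height is, under a pinhole model, proportional to how far its feet lie below the horizon, so a relative scale factor can be computed as below. The function name and the horizontal-horizon assumption are illustrative, not the thesis formulation.

```python
def perspective_scale(y_foot, y_ref_foot, y_horizon):
    """Relative scale for a character whose feet are placed at image row y_foot.

    Assumes a roughly horizontal horizon (the ground plane's vanishing line) at
    row y_horizon, and a reference placement at y_ref_foot drawn at scale 1.
    Under a pinhole model, the image height of a fixed-height object standing on
    the ground plane is proportional to its foot row's distance below the horizon.
    """
    return (y_foot - y_horizon) / (y_ref_foot - y_horizon)

# Example: horizon at row 120, reference feet at row 400; placing the feet at
# row 260 (half as far below the horizon) halves the character's size.
scale = perspective_scale(260, 400, 120)   # -> 0.5
```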
