Research on Several Key Techniques of Intelligent Human Animation
Abstract
With the adoption of optical motion capture equipment, acquiring realistic 3D human motion data efficiently and rapidly has become a reality, and such data have been successfully applied in computer animation, video games, film special effects, sports simulation, virtual reality and many other fields; in recent years fairly large commercial and research-oriented human motion capture databases have appeared. As the means of acquiring motion data mature, computer animation researchers have gradually shifted their focus to how existing 3D human motion capture data can be analyzed, managed and reused, and have begun to explore new techniques for automatic, intelligent creation of 3D human animation.
     This thesis explores automatic, intelligent methods for analyzing, processing and reusing 3D human motion capture data, and on this basis studies intelligent techniques for 3D human animation creation. It presents techniques for human motion data segmentation, motion data abstraction, keyframe-based 3D motion data retrieval, automatic editing and generation of stylized human motion data, a human motion synthesis engine driven by captured data sets and motion scripts, and automatic human animation generation based on motion planning.
     To separate the different types of motion contained in a long motion capture sequence, the human motion data, whose structure is complex in the original space (in the original high-dimensional space the motion samples are distributed over a twisted, even folded, high-dimensional manifold), are first projected onto a structurally simpler low-dimensional manifold by a nonlinear manifold dimensionality reduction technique, and a clustering algorithm is then applied to segment them automatically.
     To produce an abstract representation of the original motion sequence, a bone-angle octuple feature representation is proposed, together with an improved layered curve simplification algorithm for extracting keyframes. Experimental results show that the algorithm not only compresses and summarizes the original motion sequence, but also keeps the keyframe collections of similar motion sequences consistent with one another, and lends itself to further applications in motion data compression, keyframe-based motion retrieval and motion synthesis.
     For managing 3D human motion data, a keyframe-based 3D motion retrieval technique is proposed: a distance matrix is constructed between the keyframe sets of two motion sequences and their similarity is compared on that basis, enabling content-based retrieval of 3D human motion data. The algorithm is computationally simple and efficient, requires no precomputed index structure, and is therefore well suited to incrementally growing 3D human motion databases.
     For reusing 3D human motion data, an automatic framework is proposed for real-time, quantitative generation and editing of stylized human motion. Principal Component Analysis (PCA) is used to map human motion data into a subspace, reducing computational complexity while preserving the essential characteristics of the original data. Style generation and editing algorithms are applied in the PCA subspace to produce new stylized human motion. Since real human motion often blends several styles, a novel method is also proposed for generating and editing multi-style human motion data.
     To synthesize 3D human animation sequences efficiently from user commands, a realistic 3D human motion synthesis engine built on a motion capture data set is proposed. Motion scripts are defined in a standard XML format, and the engine retrieves the corresponding motion clips from the motion capture database to synthesize the final 3D human motion sequence. Within this flexible framework users can supply their own motion data sets and define the associated motion element tables; experiments show that the engine architecture can serve applications such as computer games, animation production systems, sports simulation and virtual reality.
     To generate character animation in a given virtual scene, an automatic human animation generation framework based on motion planning is proposed. Given a virtual scene, the animator specifies the start and end points of the character's movement; the system then plans the motion path and selects behaviors for the character, either automatically or interactively, saves the result as a motion script, and finally extracts the corresponding behavior clips from the existing motion capture database and synthesizes the final animation sequence.
     Finally, Chapter 9 summarizes the work of this thesis and discusses directions for future research.
Due to the popularity of optical motion capture systems, more and more realistic human motion data can be acquired easily. In recent years, large and highly detailed human motion databases have become commercially available and are widely used in various applications such as video games, animation films, sports simulation and virtual reality. Therefore, many researchers have focused on how to edit, manipulate and reuse the existing motion data, and on developing new techniques for producing human animation automatically and intelligently.
     The work of this thesis is to explore automatic and intelligent methods for analyzing, managing and reusing motion capture data, and to develop automatic and intelligent approaches for producing 3D human animation efficiently. The thesis presents the following algorithms: automatic motion segmentation, keyframe extraction from human motion sequences, motion retrieval based on keyframes, automatic synthesis and editing of motion styles, a script engine for realistic human movement generation based on MoCap data, and automatic human movement generation based on motion planning.
     A novel method is proposed to extract primitive actions from long MoCap sequences efficiently. Original motion sequences lie on a high-dimensional manifold that is highly folded and twisted, so it is difficult to cluster similar poses together into distinct primitive actions. We therefore use a non-linear dimensionality reduction technique to map the original motion sequences onto a low-dimensional manifold, and then apply clustering techniques to separate the primitive actions.
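     The abstract does not fix a particular manifold-learning or clustering algorithm, so the following Python sketch uses Isomap and k-means purely as illustrative stand-ins; `frames` is assumed to be an array of per-frame joint-angle vectors.

```python
# Illustrative sketch: segment a long MoCap sequence into primitive actions.
# Isomap and k-means stand in for the unspecified manifold-learning and
# clustering steps; frames is an (n_frames, n_dofs) array of joint angles.
from sklearn.manifold import Isomap
from sklearn.cluster import KMeans

def segment_motion(frames, n_components=3, n_actions=4, n_neighbors=10):
    # Project the high-dimensional poses onto a low-dimensional manifold.
    embedding = Isomap(n_neighbors=n_neighbors,
                       n_components=n_components).fit_transform(frames)
    # Cluster the embedded poses; each cluster label marks a candidate action type.
    labels = KMeans(n_clusters=n_actions, n_init=10).fit_predict(embedding)
    # Convert per-frame labels into contiguous (start, end, label) segments.
    segments, start = [], 0
    for i in range(1, len(labels)):
        if labels[i] != labels[i - 1]:
            segments.append((start, i, int(labels[i - 1])))
            start = i
    segments.append((start, len(labels), int(labels[-1])))
    return segments
```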
     We propose a keyframe extraction method based on a novel layered curve simplification algorithm for motion capture data. Bone angles are employed as motion features, and keyframe candidates are selected based on them. The layered curve simplification algorithm then refines those candidates to obtain the final keyframe collection. Experiments demonstrate that our method can not only compress and summarize motion capture data efficiently, but also keep the keyframe collections of similar human motion sequences consistent, which is of great benefit to further motion data retrieval and editing.
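     As a rough illustration of the idea (not the thesis's layered algorithm itself), the sketch below runs a plain recursive curve simplification over a precomputed bone-angle feature curve and keeps the frames with the largest deviation as keyframes; the tolerance value and feature layout are assumptions.

```python
# Illustrative sketch: keyframe extraction by curve simplification over a
# bone-angle feature curve. A single-level recursive simplification stands
# in for the layered algorithm; features is an (n_frames, n_angles) array.
import numpy as np

def _simplify(features, lo, hi, tol, keep):
    if hi - lo < 2:
        return
    # Deviation of each frame from linear interpolation between the endpoints.
    t = np.linspace(0.0, 1.0, hi - lo + 1)[:, None]
    interp = (1.0 - t) * features[lo] + t * features[hi]
    err = np.linalg.norm(features[lo:hi + 1] - interp, axis=1)
    worst = int(np.argmax(err))
    if err[worst] > tol and 0 < worst < hi - lo:
        mid = lo + worst
        keep.add(mid)          # the frame with the largest deviation becomes a keyframe
        _simplify(features, lo, mid, tol, keep)
        _simplify(features, mid, hi, tol, keep)

def extract_keyframes(features, tol=0.1):
    keep = {0, len(features) - 1}          # endpoints are always kept
    _simplify(features, 0, len(features) - 1, tol, keep)
    return sorted(keep)
```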
     Chapter 5 introduces a novel motion retrieval approach based on keyframes. When a retrieval query is issued, a distance matrix is constructed between the keyframe set of the query example and that of each motion in the database, and the similarity between them is computed from this matrix. Compared with most existing content-based motion retrieval approaches, our method is more time-efficient and does not depend on a precomputed index structure or preset parameters, which makes it well suited to incremental motion databases.
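     The abstract states only that similarity is derived from a distance matrix between the two keyframe sets; the sketch below is one plausible reading of that step, using Euclidean pose distances and a DTW-style accumulated cost. The function names and the normalization are mine, not the thesis's exact measure.

```python
# Illustrative sketch: keyframe-based motion retrieval via a pairwise
# distance matrix between two keyframe sets (each keyframe a pose vector)
# and a DTW-style accumulated cost as the dissimilarity score.
import numpy as np

def keyframe_distance_matrix(query_keys, cand_keys):
    # query_keys: (m, d), cand_keys: (n, d) pose feature vectors.
    diff = query_keys[:, None, :] - cand_keys[None, :, :]
    return np.linalg.norm(diff, axis=2)            # (m, n) matrix

def motion_dissimilarity(query_keys, cand_keys):
    d = keyframe_distance_matrix(query_keys, cand_keys)
    m, n = d.shape
    acc = np.full((m, n), np.inf)
    acc[0, 0] = d[0, 0]
    for i in range(m):
        for j in range(n):
            if i == 0 and j == 0:
                continue
            best_prev = min(acc[i - 1, j] if i else np.inf,
                            acc[i, j - 1] if j else np.inf,
                            acc[i - 1, j - 1] if i and j else np.inf)
            acc[i, j] = d[i, j] + best_prev
    return acc[-1, -1] / (m + n)                   # length-normalized cost

def retrieve(query_keys, database):
    # database: list of (motion_id, keyframe_array); smaller score = more similar.
    scores = [(motion_id, motion_dissimilarity(query_keys, keys))
              for motion_id, keys in database]
    return sorted(scores, key=lambda s: s[1])
```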
     We propose a framework for automatic, real-time and quantitative synthesis and editing of human motion styles. In this framework, Principal Component Analysis (PCA) is used to map the original styled human motions into subspaces, which reduces computational complexity while preserving the intrinsic properties of the original data. Synthesis and editing methods are applied in these subspaces, and motions with new styles are then reconstructed. Since realistic human motions may exhibit multiple styles, we also present a novel method to synthesize and edit motions with multiple styles.
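     A minimal sketch of the subspace idea, under stated assumptions: motions of equal length and skeleton are flattened into vectors, projected by PCA, and a new style is obtained by blending coefficients in the subspace before reconstructing. The weighting scheme and the example style names are illustrative, not the thesis's exact editing operators.

```python
# Illustrative sketch: style blending in a PCA subspace.
# motions: (n_examples, n_frames * n_dofs) flattened, time-aligned motions.
import numpy as np
from sklearn.decomposition import PCA

def build_subspace(motions, n_components=None):
    pca = PCA(n_components=n_components)     # keep all components by default
    coeffs = pca.fit_transform(motions)      # one coefficient vector per example
    return pca, coeffs

def blend_styles(pca, coeff_a, coeff_b, w):
    # w = 0 reproduces style A, w = 1 style B; w outside [0, 1] extrapolates.
    blended = (1.0 - w) * coeff_a + w * coeff_b
    return pca.inverse_transform(blended)    # back to a full motion vector

# Hypothetical usage: a walk halfway between a "normal" and a "tired" style.
# pca, coeffs = build_subspace(np.stack([normal_walk, tired_walk]))
# half_tired = blend_styles(pca, coeffs[0], coeffs[1], 0.5)
```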
     Chapter 7 proposes a script engine framework for realistic human movement generation based on a well-organized MoCap database. Users can create or edit motion scripts that describe the type, order and details of the human movements. The script sequence is decomposed into sequential commands, which are used to retrieve the appropriate motion clips from the MoCap database and generate the final movement sequence. Furthermore, users can define their own motion element tables and scripts in this flexible script engine framework according to different MoCap data sets. Experimental results show that the framework achieves good performance and can serve as a human motion engine in various applications such as computer games, animation production, sports simulation and virtual reality.
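     To make the engine idea concrete, here is a small sketch of how an XML motion script could be decomposed into sequential commands and matched against a clip database. The element and attribute names (script, action, type, repeat) are hypothetical, since the thesis defines its own schema and motion element table, and clips are simply concatenated here without transition blending.

```python
# Illustrative sketch: parse an XML motion script into sequential commands
# and assemble a motion from a clip database (hypothetical schema).
import xml.etree.ElementTree as ET

SCRIPT = """
<script character="actor01">
  <action type="walk" repeat="2"/>
  <action type="turn_left"/>
  <action type="run" repeat="3"/>
</script>
"""

def synthesize(script_xml, clip_database):
    # clip_database maps an action type to a list of motion frames.
    root = ET.fromstring(script_xml)
    frames = []
    for action in root.findall("action"):
        clip = clip_database[action.get("type")]
        repeat = int(action.get("repeat", "1"))
        frames.extend(clip * repeat)        # naive concatenation, no blending
    return frames

# frames = synthesize(SCRIPT, {"walk": walk_clip, "turn_left": turn_clip, "run": run_clip})
```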
     Chapter 8 proposes a framework for programming character movements and generating navigation animations in a virtual environment. Given a virtual environment, a visual user interface allows animators to interactively generate motion scripts that describe the characters' movements in the scene; these scripts are then used to retrieve motion clips from the MoCap database and generate navigation animations automatically. The framework also provides a flexible mechanism for animators to obtain varied resulting animations through a configurable table of motion bias coefficients and the interactive visual user interface.
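     As a rough sketch of the planning step, the code below runs a breadth-first search on a coarse occupancy grid between animator-specified start and goal cells and emits a walk-only motion script; the thesis's framework additionally performs behavior selection and supports bias coefficients, which are omitted here, and the script format reuses the hypothetical schema from the previous example.

```python
# Illustrative sketch: plan a character path on an occupancy grid and turn
# it into a simple motion script of "walk" steps (BFS stands in for the planner).
from collections import deque

def plan_path(grid, start, goal):
    # grid[y][x] == 1 marks an obstacle; start and goal are (x, y) cells.
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            break
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and nxt not in came_from):
                came_from[nxt] = cur
                frontier.append(nxt)
    if goal not in came_from:
        return None                        # no path exists
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return list(reversed(path))

def path_to_script(path):
    # Emit one hypothetical <action type="walk"> element per grid step.
    steps = ['<action type="walk" to="{},{}"/>'.format(x, y) for x, y in path[1:]]
    return "<script>\n  " + "\n  ".join(steps) + "\n</script>"
```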
     In Chapter 9, we conclude the thesis and discuss future work.
