Research and Implementation of Motion-Library-Based 3D Character Animation Generation
Abstract
How to generate realistic 3D character animation has long been a research focus, and a hard problem, in computer graphics, virtual reality, and related fields. Motion capture, a mainstream technique for 3D character animation, produces data of high fidelity and rich detail, and has become the standard method in many areas such as computer games, film special effects, training and simulation systems, and medical-analysis aids. In principle, however, the technique merely records and replays 3D motion and offers no further interactive control over the data. This makes the captured data hard to reuse and ill-suited to highly interactive applications.
    This thesis studies how to generate new motion, under user control, from an existing motion capture database, i.e., motion generation based on a motion library. The main contributions are as follows:
    1. A motion control method based on a 3D motion library and a first-order probability transition model is proposed and implemented.
    How to describe the spatio-temporal structure inherent in motion data is the key to library-based motion generation; it determines whether new motion can be generated concisely and effectively from external constraints.
    This thesis proposes and implements a motion control method based on a 3D motion library and a first-order probability transition model. The method treats motion data as a first-order Markov process: the current motion state depends only on the previous one. During preprocessing of the library, the pose and velocity similarity between motion clips is measured to estimate clip-to-clip transition probabilities, yielding a first-order probability transition model of the library. When retrieving from the library, the clip with the highest transition probability from the previous clip is chosen as the current clip; repeating this step produces the most probable motion sequence. We validated the method on broadcast calisthenics, using the corresponding video as the interactive interface. Experiments show that the method effectively captures the spatio-temporal relations in the motion data and improves the accuracy of library retrieval.
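The preprocessing step above, estimating clip-to-clip transition probabilities from pose and velocity similarity, can be sketched as follows. This is a minimal illustration under simplifying assumptions (clips as joint-angle frame arrays, a Gaussian kernel to map dissimilarity to probability); all function and parameter names are our own, not the thesis's.

```python
import numpy as np

def transition_probabilities(clips, sigma=1.0):
    """Estimate clip-to-clip transition probabilities for a first-order
    Markov model of a motion library.

    clips: list of (frames, dof) arrays of joint-angle frames.
    Returns an (n, n) row-stochastic matrix P, where P[i, j] is the
    probability of clip j following clip i.
    """
    n = len(clips)
    cost = np.zeros((n, n))
    for i in range(n):
        # Pose and velocity at the end of clip i ...
        end_pose = clips[i][-1]
        end_vel = clips[i][-1] - clips[i][-2]
        for j in range(n):
            # ... compared with the pose and velocity at the start of clip j.
            start_pose = clips[j][0]
            start_vel = clips[j][1] - clips[j][0]
            cost[i, j] = (np.linalg.norm(end_pose - start_pose)
                          + np.linalg.norm(end_vel - start_vel))
    # Map dissimilarity to a positive score with a Gaussian kernel,
    # then normalise each row so it sums to one.
    P = np.exp(-cost**2 / (2 * sigma**2))
    return P / P.sum(axis=1, keepdims=True)
```

The kernel width `sigma` controls how sharply the model prefers well-matched transitions; a smaller value concentrates probability mass on the most similar clip pairs.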
    2. A smooth motion-clip stitching algorithm based on a multiresolution framework is studied and implemented, ensuring that the generated motion looks realistic and natural.
    Motion clips retrieved from the library are usually not continuous with each other, so they must be smoothly stitched to guarantee the quality of the final generated motion. Many concatenation methods exist, and ordinary interpolation would normally suffice. However, motion capture data differ from data generated by keyframing or physical simulation: their rich motion detail (i.e., high-frequency components) makes it hard for simple interpolation to estimate velocities stably at the joins, so a smooth transition is difficult to obtain. Achieving smooth clip-to-clip transitions while leaving the original motion detail intact is the central difficulty of motion capture data concatenation.
    This thesis adopts a multiresolution analysis of the motion data. The data are first decomposed into a low-frequency base signal and high-frequency detail components at several levels; the base signal and each detail level are then stitched separately; finally, the stitched detail components are added back, level by level, onto the new base signal to recover the stitched motion. Extensive experiments show that, by treating the base signal and each detail level separately, the stitched result achieves C1 continuity at the join while faithfully preserving the original motion detail.
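The decompose-stitch-recompose procedure above can be sketched as follows. This is a simplified stand-in, not the thesis's actual filter bank: it uses a moving-average filter for the decomposition and blends the summed detail layers, whereas the thesis handles each detail level separately; all names are illustrative.

```python
import numpy as np

def decompose(signal, levels=3):
    """Split a 1D motion curve into a low-frequency base and a list of
    per-level high-frequency details, via repeated smoothing."""
    details = []
    base = np.asarray(signal, dtype=float)
    for _ in range(levels):
        kernel = np.ones(5) / 5.0
        smooth = np.convolve(base, kernel, mode="same")
        details.append(base - smooth)   # high-frequency residual
        base = smooth
    return base, details

def stitch(a, b, overlap=10, levels=3):
    """Concatenate two curves: cross-fade their base signals over an
    overlap window with an ease-in/ease-out weight, then add the
    detail content back on top."""
    base_a, det_a = decompose(a, levels)
    base_b, det_b = decompose(b, levels)
    t = np.linspace(0.0, 1.0, overlap)
    w = 3 * t**2 - 2 * t**3          # smoothstep blend weights
    base = np.concatenate([
        base_a[:-overlap],
        (1 - w) * base_a[-overlap:] + w * base_b[:overlap],
        base_b[overlap:],
    ])
    da, db = sum(det_a), sum(det_b)  # summed detail layers
    detail = np.concatenate([
        da[:-overlap],
        (1 - w) * da[-overlap:] + w * db[:overlap],
        db[overlap:],
    ])
    return base + detail
```

Blending only the slowly varying base signal is what keeps the join smooth; the high-frequency detail is carried across largely unchanged, which is the intuition behind the thesis's detail-preservation claim.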
The generation of realistic 3D character animation is a popular and difficult issue in the areas of computer graphics and virtual reality. Motion capture, an important technology in 3D motion generation, can obtain high-fidelity 3D recordings of the motion of a live performer, and has thus become the standard motion generation solution in applications such as computer games, movie special effects, sports simulation systems, and visualization aids for medical analysis. However, motion capture does not offer an animator free interactive control; it only allows one to play back what has been recorded. This drawback makes existing motion data difficult to reuse, especially in interactive applications.
    This thesis focuses on the problem of generating new motion from an existing motion capture database under the user's guidance. The contributions of this thesis are as follows:
    1. A motion control method based on a 3D motion database and a first-order probability transition model is proposed and implemented.
    How to describe the spatio-temporal structure underlying the motion database is crucial to the problem of motion generation based on a motion database. In this thesis, we model motion as a first-order Markov process; that is, the current motion state depends only on the previous state. In the preprocessing phase, we first measure the similarity between motion clips, from which we estimate the transition probabilities between them, and finally build the first-order probability transition model of the motion database. In the online motion control phase, we search the preprocessed database and choose the clip with the largest transition probability from the previous one as the current clip. By repeating this search-and-choose process, the most probable motion sequence is found. We verify the effectiveness of our method using sports-exercise video and a video interface. The results show that our method describes the spatio-temporal structure underlying the motion data effectively and improves the accuracy of motion database searching.
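Given a transition matrix such as the one built in the preprocessing phase, the online search-and-choose loop reduces to repeated argmax lookups. A minimal sketch, with the function name our own:

```python
import numpy as np

def most_probable_sequence(P, start, length):
    """Greedy retrieval: starting from clip `start`, repeatedly pick
    the clip with the highest transition probability from the previous
    one. P is an (n, n) row-stochastic transition matrix. Note that a
    greedy walk may revisit clips, which is acceptable for cyclic
    motions such as calisthenics."""
    seq = [start]
    for _ in range(length - 1):
        seq.append(int(np.argmax(P[seq[-1]])))
    return seq
```

In the full system the choice would also be constrained by the user's input (here, the joint-angle curves reconstructed from video), not by transition probability alone.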
    2. To guarantee the realism of the generated motion, a motion stitching method based on a multiresolution analysis framework is studied and implemented.
    Generally speaking, motion clips retrieved from a motion database are not continuous with each other, so a stitching method must be adopted to guarantee the smoothness and realism of the generated motion sequences. Many interpolation methods could be used to concatenate motion clips. However, obtaining a robust estimate of velocity from motion capture data is difficult, because such data oscillate with the fine details that distinguish them from data generated by keyframing or physical simulation. The difficulty of motion capture data concatenation therefore lies in two issues: guaranteeing seamless concatenation between clips, and maintaining the fine details of the original motion. A seamless motion stitching method based on a multiresolution analysis framework is adopted in this thesis. We first decompose the motion data into a base signal and several levels of detail signals, then interpolate the base and detail signals separately, and finally compose the stitched base and detail signals into a highly detailed motion signal, which is the stitched motion sequence. Experiments show that the stitched sequence is C1 continuous and faithfully preserves the original motion details.
    3. A motion generation platform based on a motion database with a video interface is implemented.
    Thanks to the success of motion capture, high-fidelity motion data have rapidly become popular and commercially available. If we can exploit an existing motion capture database to generate new motion under the user's guidance, we can greatly reduce the cost of animation production and provide a new technical solution for interactive video games. In this thesis, we implement a motion generation platform based on a motion database with a video interface. The platform first reconstructs 3D poses from video key frames under the user's guidance and computes joint-angle curves, then searches the database for the most probable motion sequence matching the reconstructed curves, and finally stitches the retrieved clips seamlessly with the multiresolution method. We tested the platform with sports-exercise video and a corresponding motion database. Experiments show that the generated motion is visually consistent with the video content.
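The database-search step of the platform matches the reconstructed joint-angle curves against library clips. A toy sketch of that matching, assuming equal-length clips and a plain mean-squared-error distance (the real system would also honour the transition model); all names are illustrative:

```python
import numpy as np

def best_matching_clip(query, clips):
    """Return the index of the database clip whose joint-angle curves
    are closest, in mean squared error, to the query curves
    reconstructed from video key frames.

    query: (frames, dof) array of joint-angle curves.
    clips: list of same-shaped arrays from the motion library.
    """
    errors = [np.mean((query - c) ** 2) for c in clips]
    return int(np.argmin(errors))
```

In the full pipeline this per-clip matching would be combined with the transition probabilities so that the retrieved sequence is both close to the video and internally coherent.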