Research on Facial Motion Capture Data Processing and Expression Animation Reconstruction
Abstract
Facial motion capture is an emerging branch of the motion capture (MoCap) field: it captures facial expression movement with MoCap acquisition equipment, processes and analyzes the captured data, and reconstructs facial animation from the processed data. The field arises from the intersection of ergonomics, computer graphics, image processing, data processing, and related disciplines, and is a current research focus in computer science. Facial MoCap has both theoretical significance and broad practical value; it is already widely used in film and television animation, game production, medical analysis, virtual reality, and other fields. This thesis studies several key problems in facial MoCap, presents the corresponding theoretical results, and describes a prototype system built on them. The main work includes the following aspects:
     1. Automatic facial template matching based on spatial geometric flexibility: Given two facial templates consisting of 3D point sets of identical cardinality and layout but with local non-rigid deformations and distribution errors, the proposed method normalizes the template space heuristically and builds a facial topology; during matching, it uses the motion of local markers to correct the local match and a temporary feedback method (TFM) to improve marker-matching reliability, achieving automatic local-to-global template matching. Experiments show that the method is robust and can quickly and accurately match templates with large expression differences, removing the need for manual intervention when generating the first-frame template of a sequence.
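As a minimal illustration of the local-to-global matching idea above, the sketch below normalizes two marker sets (removing translation and scale) and then establishes correspondences by greedy nearest neighbour. The normalization and the simple assignment are illustrative stand-ins only; the thesis's heuristic normalization, motion-based correction, and TFM are not reproduced here.

```python
import numpy as np

def normalize(points):
    """Center a marker set at its centroid and scale to unit RMS radius."""
    centered = points - points.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / scale

def match_templates(src, dst):
    """Greedy nearest-neighbour correspondence between two normalized
    marker sets of equal size; returns the dst index for each src marker."""
    a, b = normalize(src), normalize(dst)
    used, matches = set(), []
    for p in a:
        d = np.linalg.norm(b - p, axis=1)
        d[list(used)] = np.inf          # each dst marker is matched once
        j = int(d.argmin())
        used.add(j)
        matches.append(j)
    return matches
```

Because both sets are normalized first, the matching is unaffected by a uniform scaling or translation of one template relative to the other; a real facial template additionally needs the local non-rigid corrections described above.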
     2. Online data processing and trajectory smoothing for facial MoCap data: To handle the missing data and noise in raw MoCap data, an online processing method based on dynamic spatial-temporal coupling is proposed. The noise propagation problem is analyzed, and a noise propagation solution module (NPS) based on an adaptive Kalman filter suppresses it. A sophisticated non-rigid motion interpreter (SNRMI) built on a 3D facial topology, combined with a dynamic tracking method, not only tracks non-missing markers effectively but also recovers several adjacent markers that are missing for long periods. A semantic detection algorithm then detects and corrects wrong tracks around open structures (such as the mouth and eyes). Finally, an online curve-construction algorithm smooths marker trajectories. Experiments show that the method automatically processes MoCap data with noise and long-term missing of multiple adjacent markers while smoothing trajectories online.
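The Kalman-filtering step can be illustrated with a minimal online constant-velocity filter for a single marker coordinate. This is a deliberate simplification of the method above: the filter here is non-adaptive, and gap recovery (handled by SNRMI in the thesis) is reduced to pure prediction across missing samples.

```python
import numpy as np

def kalman_smooth(measurements, q=1e-3, r=1e-2):
    """Online Kalman filter (constant-velocity model) for one marker
    coordinate. Missing samples (None) are bridged by pure prediction.
    Assumes the first sample is observed."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]]) # initial state estimate
    P = np.eye(2)                            # initial state covariance
    out = []
    for z in measurements:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        if z is not None:                    # update only when data exists
            y = np.array([[z]]) - H @ x      # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out
```

An adaptive variant, as used in the NPS module, would additionally re-estimate `Q` and `R` online from the innovation sequence rather than keeping them fixed.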
     3. Facial animation simulation from MoCap data: Based on the idea of driving different functional regions of the face separately, a cross-mapping built on the radial basis function (RBF) method generates motion data for different facial models. During model driving, virtual markers are added and their motions solved to serve as driving data, and the RBF method produces realistic animation on different personalized face models. A pre-computation algorithm improves real-time simulation efficiency and reduces computational cost. Experiments show that the method can map one performer's MoCap data onto different personalized face models and generate realistic facial expression animation.
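A minimal sketch of RBF-based cross-mapping, assuming a Gaussian kernel (the abstract does not specify the basis function): fit RBF weights so the interpolant reproduces known values at the marker positions, then evaluate it at arbitrary points such as the vertices of a target face model. The names `markers`, `displacements`, and `vertices` in the usage comment are hypothetical.

```python
import numpy as np

def rbf_weights(centers, values, eps=1.0):
    """Solve for RBF weights so that f(centers) = values, using a
    Gaussian kernel exp(-(eps*d)^2)."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    phi = np.exp(-(eps * d) ** 2)            # kernel matrix (SPD for
    return np.linalg.solve(phi, values)      # distinct centers)

def rbf_eval(centers, weights, points, eps=1.0):
    """Evaluate the fitted RBF interpolant at arbitrary points."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    phi = np.exp(-(eps * d) ** 2)
    return phi @ weights

# Illustrative usage: per-frame marker displacements (N,3) measured at
# marker positions (N,3) are spread to the M vertices of a personalized
# face model:
#   w = rbf_weights(markers, displacements)
#   vertex_motion = rbf_eval(markers, w, vertices)
```

Since the kernel matrix depends only on the (fixed) marker layout, its factorization can be computed once and reused every frame, which is the essence of the pre-computation strategy described above.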
     4. Development of a prototype system for facial MoCap data processing and animation reconstruction: The system is designed as an interactive software system with independent intellectual property rights; it integrates the theoretical results of the thesis on a unified underlying data structure and provides a powerful interactive interface with a friendly UI. To make MoCap data processing and expression simulation intuitive and efficient, the system seamlessly integrates the three main modules: template construction, data processing and smoothing, and model driving. Practice shows that the system effectively validates the proposed methods and supports fully interactive MoCap data processing and facial expression reconstruction, with high processing efficiency and realistic simulated animation.
     To sum up, this thesis studies several major problems in facial MoCap data processing and animation reconstruction and, supported by the proposed algorithms, designs and develops a prototype system for facial MoCap data processing and expression animation reconstruction. Extensive experiments verify the effectiveness, robustness, realism, and efficiency of each algorithm.
