Research on Realistic Speech-Synchronized Three-Dimensional Facial Animation for Mandarin Chinese (汉语语音同步的真实感三维人脸动画研究)
Abstract
Realistic speech-synchronized facial animation is an active research topic in computer graphics, with applications in human-computer interaction, entertainment, film and television production, virtual reality, and more. Although the field has made great progress over the past three decades, many problems remain open. In particular, obtaining speech-synchronized facial animation with a high degree of realism is a challenging problem: it involves the kinematic and dynamic modeling and representation of individualized faces, the modeling and representation of the co-articulation mechanism, and the subjective and objective evaluation of speech-driven facial animation.
     This thesis focuses on the following aspects of this challenging problem.
     First, a new lip muscle model is proposed on the basis of Waters' muscle model. Because Waters' model is too simple to model complex facial motion effectively, the proposed model draws on results from facial anatomy to describe lip motion accurately. It decomposes the global motion of the lips into several sub-movements, which serve as basic units, and represents the overall motion as a linear combination of them. To synthesize a talking face, feature points are first marked on the lips and used to extract a set of parameters describing lip motion, from which the lip motion model is built; various mouth shapes are then synthesized with this model and the associated linear muscle models. Experimental results show that the model is effective and practical, with low computational cost, and can synthesize fairly realistic mouth animation.
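     To make the decomposition concrete, the following is a minimal Python sketch of the linear-combination idea. The array layout, the function name synthesize_lip_pose, and the example weights are illustrative assumptions, not the thesis' actual implementation:

    import numpy as np

    def synthesize_lip_pose(rest_vertices, sub_movements, weights):
        # rest_vertices: (V, 3) vertices of the neutral lip mesh.
        # sub_movements: (K, V, 3) one per-vertex displacement field
        #                per sub-movement (the decomposition basis).
        # weights:       (K,) activation level of each sub-movement.
        # The full lip pose is the rest shape plus the weighted sum
        # of the sub-movement displacement fields.
        displacement = np.tensordot(weights, sub_movements, axes=1)  # (V, 3)
        return rest_vertices + displacement

    # Toy example: three sub-movements acting on a four-vertex lip patch.
    rest = np.zeros((4, 3))
    subs = 0.01 * np.random.randn(3, 4, 3)   # stand-in basis fields
    pose = synthesize_lip_pose(rest, subs, np.array([0.8, 0.1, 0.3]))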
     Second, a context-dependent visual-speech co-articulation model is proposed, building on the Mandarin triphone model and on previous co-articulation research. The approach combines a rule-based method with a learning-based method, exploiting the advantages of both to obtain realistic speech animation, and focuses on the visual effect of co-articulation in Mandarin. To obtain the key synthesized mouth shapes in continuous speech, a rule set for visual-speech co-articulation is constructed, and the viseme weights of the phones are computed from the quantized rules; the sequence of mouth shapes corresponding to the phones is then synthesized with the muscle-based facial model. Finally, a learning-based method selects the optimal transition mouth shapes between two phones from all possible candidates, yielding realistic speech animation.
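     The two-stage pipeline can be sketched as follows. The rule table, the viseme parameterization, and the transition cost below are placeholders standing in for the thesis' quantized rule set and trained selection model:

    import numpy as np

    # Stage 1: a quantized rule table maps each phone to a viseme and a
    # context-dependent weight (values here are made up for illustration).
    RULES = {"b": (0, 1.0),   # bilabial stop: closed-lips viseme, dominant
             "a": (1, 0.9),   # open vowel
             "u": (2, 0.8)}   # rounded vowel

    def key_shape(phone, visemes):
        # Weighted key mouth shape for one phone.
        idx, w = RULES[phone]
        return w * visemes[idx]

    # Stage 2: choose the transition mouth shape between two phones from a
    # set of candidates; distance to the midpoint stands in for the
    # learned selection criterion.
    def pick_transition(shape_a, shape_b, candidates):
        target = 0.5 * (shape_a + shape_b)
        costs = [np.linalg.norm(c - target) for c in candidates]
        return candidates[int(np.argmin(costs))]

    visemes = np.random.randn(3, 10)            # toy viseme parameters
    s1, s2 = key_shape("b", visemes), key_shape("a", visemes)
    trans = pick_transition(s1, s2, [np.random.randn(10) for _ in range(5)])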
     Third, a new lip movement model related to speech rate is proposed. In continuous speech, speech rate strongly affects both the velocity and the amplitude of lip movement. Research shows that, to increase speech rate, some speakers decrease the amplitude of lip movement while keeping its velocity roughly constant, others increase the velocity while keeping the amplitude constant, and still others adjust both parameters at once; different speakers thus adopt different lip movement strategies at different speech rates. Against this background, a new speech-rate-dependent lip movement model with a high degree of naturalness and individuality is proposed. The lip muscle region is treated as an independent viscoelastic system, and a quantitative relationship among the skin-muscle tissue, speech rate, and muscle contraction force is derived from observed relations between EMG signals, speech rate, and muscle force. A Mandarin facial animation system is built on this lip movement model.
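     A minimal sketch of the viscoelastic view: the lip region is modeled as a mass-spring-damper system driven by a contraction force, where the force's dependence on speech rate is a hypothetical linear stand-in for the EMG-derived relation in the thesis:

    import numpy as np

    def simulate_lip(mass, damping, stiffness, rate, t_end=0.5, dt=1e-3):
        # Hypothetical mapping from speech rate to contraction force;
        # the thesis fits this relation from EMG observations.
        force = 1.0 + 0.5 * rate
        x, v = 0.0, 0.0                       # displacement, velocity
        trajectory = []
        for _ in range(int(t_end / dt)):      # semi-implicit Euler steps
            a = (force - damping * v - stiffness * x) / mass
            v += a * dt
            x += v * dt
            trajectory.append(x)
        return np.array(trajectory)

    # Comparing a slow and a fast rate shows how force scaling changes the
    # amplitude and speed of the simulated lip excursion.
    slow = simulate_lip(mass=0.01, damping=0.8, stiffness=120.0, rate=1.0)
    fast = simulate_lip(mass=0.01, damping=0.8, stiffness=120.0, rate=2.0)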
     Finally, to evaluate the quality of the resulting speech-synchronized facial animation system, a systematic evaluation method for visual Mandarin speech animation is proposed. It consists of two main tests: an acceptability test and an intelligibility test. The acceptability test uses the diagnostic acceptability measure, combined with an added objective evaluation component. For the intelligibility test, a new Visual Chinese Modified Rhyme Test is proposed, which adapts the Chinese Modified Rhyme Test from synthesized-speech evaluation to speech animation and introduces "punishment" and "forgiveness" factors to model human perception of talking faces. Combining the two tests yields an overall evaluation of the proposed 3D speech animation system.
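     The role of the two factors can be illustrated with a toy scoring function; the factor values and the set of visually confusable phone pairs below are invented for illustration, not the thesis' perceptually derived ones:

    # Phone pairs that share a viseme class and are easy to confuse visually.
    CONFUSABLE = {("b", "p"), ("p", "b"), ("d", "t"), ("t", "d")}

    def intelligibility_score(trials, punish=0.25, forgive=0.5):
        # trials: list of (presented, perceived) phone pairs from the
        # modified rhyme test.
        score = 0.0
        for shown, heard in trials:
            if shown == heard:
                score += 1.0            # correct identification
            elif (shown, heard) in CONFUSABLE:
                score += forgive        # "forgive" within-viseme confusions
            else:
                score -= punish         # "punish" cross-viseme confusions
        return max(score, 0.0) / len(trials)

    print(intelligibility_score([("b", "b"), ("b", "p"), ("d", "g")]))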
     On the basis of the above work, a Mandarin three-dimensional speech animation demonstration system has been designed and implemented. Given input speech and a specific person's 3D face model, it generates realistic, personalized talking-face animation.
References
[Albrecht 2002] Irene Albrecht, Jörg Haber, Kolja Kähler, Marc Schröder, and Hans-Peter Seidel. "May I talk to you? :-)" - Facial Animation from Text [C]. Proc. Pacific Graphics, Beijing, China, 2002: 77-86.
[Basmajian 1985] J.V. Basmajian and C.J. De Luca. Muscles Alive: Their Functions Revealed by Electromyography (5th ed.) [M]. Baltimore: Williams & Wilkins, 1985.
[Bergeron 1985] P. Bergeron and P. Lachapelle. Controlling facial expressions and body movements. In Advanced Computer Animation [C]. SIGGRAPH'85 Tutorials, Volume 2, ACM, New York, 1985: 61-79.
[Bernstein 2000] L.E. Bernstein, M.E. Demorest, and P.E. Tucker. Speech perception without hearing [J]. Perception & Psychophysics, 2000, 62(2): 233-252.
[Bernstein 2003] L.E. Bernstein. Visual speech perception. In Audiovisual Speech Processing [M]. E. Vatikiotis-Bateson, G. Bailly, and P. Perrier (Eds.), 2003.
[Blanz 1999] V. Blanz and T. Vetter. A Morphable Model for the Synthesis of 3D Faces [C]. SIGGRAPH'99 Conf. Proc., Los Angeles, USA, 1999: 187-194.
[Blanz 2003] Volker Blanz, Curzio Basso, Tomaso Poggio, and Thomas Vetter. Reanimating Faces in Images and Video [J]. Computer Graphics Forum, 2003, 22(3): 641-650.
[Bourne 1973] G.H. Bourne. The Structure and Function of Muscle, Volume III: Physiology and Biochemistry [M]. Second edition, Academic Press, New York, 1973.
[Brand 1999] M. Brand. Voice puppetry [C]. In Proceedings of ACM SIGGRAPH 1999, ACM Press/Addison-Wesley, 1999: 21-28.
[Bregler 1997] C. Bregler, M. Covell, and M. Slaney. Video Rewrite: Driving Visual Speech with Audio [C]. Proc. SIGGRAPH 97, Los Angeles, CA, 1997: 353-360.
[Cao 1996] Cao Jianfen. Contextual sound variation and the diphone and triphone structure of Standard Chinese [J]. 语言文字应用 (Applied Linguistics), 1996(2): 58-63.
[Cao 1997] Cao Jianfen. A representative corpus for the diphone and triphone structure system of Standard Chinese [J]. 语言文字应用 (Applied Linguistics), 1997(1): 60-68.
[Cassell 1994] J. Cassell, C. Pelachaud, N. Badler, M. Steedman, B. Achorn, W. Becket, B. Douville, S. Prevost, and M. Stone. Animated conversation: Rule-based generation of facial expression, gesture and spoken intonation for multiple conversational agents [C]. In Proceedings of ACM SIGGRAPH, 1994: 413-420.
[Chai 2003] J.X. Chai, J. Xiao, and J. Hodgins. Vision-based Control of 3D Facial Animation [C]. Eurographics/SIGGRAPH Symposium on Computer Animation, 2003: 193-206.
[Chen 2003] Chen Yiqiang, Gao Wen, Wang Zhaoqi, et al. Speech-driven facial animation based on machine learning [J]. 软件学报 (Journal of Software), 2003, 14(2): 215-221.
[Chernoff 1971] H. Chernoff. The use of faces to represent points in n-dimensional space graphically [R]. Technical Report Project NR-042-993, Office of Naval Research, Washington DC, December 1971.
[Cohen 1990] M. Cohen and D. Massaro. Synthesis of Visible Speech [J]. Behavioral Research Methods and Instrumentation, 1990, 22(2): 260-263.
[Cohen 1993] M. Cohen and D. Massaro. Modeling Coarticulation in Synthetic Visual Speech [M]. In N.M. Thalmann and D. Thalmann (Eds.), Models and Techniques in Computer Animation. Springer-Verlag, 1993.
[Cohen 1994] M. Cohen and D. Massaro. Development and experimentation with synthetic visual speech [J]. Behavioral Research Methods, Instrumentation, and Computers, 1994, 26: 260-265.
[Cohen 1996] M.M. Cohen, R.L. Walker, and D.W. Massaro. Perception of synthetic visual speech [M]. In: Speechreading by Humans and Machines, D.G. Stork and M.E. Hennecke (Eds.), New York: Springer, 1996: 153-168.
[De Luca 1997] C.J. De Luca. The use of surface electromyography in biomechanics [J]. Journal of Applied Biomechanics, 1997, 13: 135-163.
[Ekman 1978] P. Ekman and W.V. Friesen. Manual for the Facial Action Coding System [M]. Consulting Psychologists Press, Palo Alto, CA, 1978.
[Epstein 2002] M. Epstein, N. Hacopian, and P. Ladefoged. Dissection of the Speech Production Mechanism [M]. Los Angeles: The UCLA Phonetics Laboratory, 2002: 12-15.
[Ezzat 1998] T. Ezzat and T. Poggio. MikeTalk: A talking facial display based on morphing visemes [C]. In Proc. Computer Animation Conference, Philadelphia, USA, 1998: 456-459.
[Ezzat 2000] T. Ezzat and T. Poggio. Visual speech synthesis by morphing visemes [J]. International Journal of Computer Vision, 2000, 38: 45-57.
[Ezzat 2002] T. Ezzat, G. Geiger, and T. Poggio. Trainable videorealistic speech animation [J]. ACM Transactions on Graphics, 2002, 21(3): 388-398.
[Fung 1993] Y. Fung. Biomechanics: Mechanical Properties of Living Tissues [M]. Springer-Verlag, 1993.
[Gillenson 1974] M.L. Gillenson. The Interactive Generation of Facial Images on a CRT Using a Heuristic Strategy [D]. PhD thesis, Ohio State University, Computer Graphics Research Group, Columbus, OH, March 1974.
[Guiard-Marigny 1994] T. Guiard-Marigny, A. Adjoudani, and C. Benoît. A 3D model of the lips for visual speech synthesis [C]. In Proc. 2nd ETRW on Speech Synthesis, New Paltz, New York, 1994: 49-52.
[Hill 1988] D.R. Hill, A. Pearce, and B. Wyvill. Animating speech: an automated approach using speech synthesized by rules [J]. The Visual Computer, 1988, 3: 277-289.
[Horn 1981] B.K.P. Horn and B.G. Schunck. Determining optical flow [J]. Artificial Intelligence, 1981, 17: 185-203.
[Jia 2000] Jia Yunde. Machine Vision (机器视觉) [M]. Beijing: Science Press, 2000: 235-239.
[Kalberer 2002] G.A. Kalberer, P. Mueller, and L.V. Gool. Speech animation using viseme space [C]. In Vision, Modeling, and Visualization (VMV 2002), Akademische Verlagsgesellschaft Aka GmbH, Berlin, Germany, 2002: 463-470.
[Kalra 1991] P. Kalra, A. Mangili, N. Magnenat-Thalmann, and D. Thalmann. SMILE: a multilayered facial animation system [C]. In IFIP WG, Tokyo, 1991: 189-198.
[Kang 2003] Kang Heng and Liu Wenju. Automatic corpus selection for a continuous Chinese speech database based on combined factors [J]. 中文信息学报 (Journal of Chinese Information Processing), 2003, 17(4): 27-32.
[Kent 1977] R.D. Kent and F.D. Minifie. Coarticulation in recent speech production models [J]. Journal of Phonetics, 1977, 5: 115-135.
[Kirby 1990] M. Kirby and L. Sirovich. Application of the Karhunen-Loeve procedure for the characterization of human faces [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990, 12(1): 103-108.
[Kshirsagar 2000] S. Kshirsagar and N. Magnenat-Thalmann. Lip Synchronization Using Linear Predictive Analysis [C]. Proceedings of IEEE International Conference on Multimedia and Expo, New York, USA, 2000: 1077-1080.
[Kuehn 1976] D.P. Kuehn and K.L. Moll. A cineradiographic study of VC and CV articulatory velocities [J]. Journal of Phonetics, 1976, 4: 303-320.
[Lee 1995] Y.C. Lee, D. Terzopoulos, and K. Waters. Realistic face modeling for animation [C]. In Proceedings of SIGGRAPH'95, 1995: 55-62.
[Le Goff 1994] B. Le Goff, T. Guiard-Marigny, M. Cohen, and C. Benoît. Real-time analysis-synthesis and intelligibility of talking faces [C]. In Proc. 2nd ETRW on Speech Synthesis, New Paltz, New York, 1994: 53-56.
[Lewis 1987] J.P. Lewis and F.I. Parke. Automatic lip-synch and speech synthesis for character animation [C]. In Proc. Graphics Interface'87 (CHI+GI'87), Canadian Information Processing Society, Calgary, 1987: 143-147.
[Li 2000] Z. Li, E.C. Tan, I. McLoughlin, and T.T. Teo. Proposal of standards for intelligibility test of Chinese speech [J]. IEE Proceedings - Vision, Image and Signal Processing, 2000, 147(3): 254-260.
[Lin 1999] Lin Tao and Wang Lijia. A Course in Phonetics (语音学教程) [M]. Beijing: Peking University Press, 1999.
[Liu 2002] Liu Guansong, Lu Zongqi, Xu Jianguo, et al. Stability analysis of several color models under different illumination conditions [J]. 小型微型计算机系统 (Mini-Micro Systems), 2002, 23(7): 882-885.
[Liu 2003] Liu Wen-tao, Yin Bao-cai, Jia Xi-bin, and Kong De-hui. A Realistic Chinese Talking Face [C]. 1st Indian International Conference on Artificial Intelligence (IICAI-03), 2003: 1244-1254.
[Lofqvist 1990] A. Löfqvist. Speech as audible gestures [M]. In W.J. Hardcastle and A. Marchal (Eds.), Speech Production and Speech Modeling, Kluwer Academic Publishers, Dordrecht, 1990: 289-322.
[Magnenat-Thalmann 1988] N. Magnenat-Thalmann, N.E. Primeau, and D. Thalmann. Abstract Muscle Action Procedures for Human Face Animation [J]. The Visual Computer, 1988, 3(5): 290-297.
[McGurk 1976] Harry McGurk and John MacDonald. Hearing lips and seeing voices [J]. Nature, 1976, 264: 746-748.
[Mei 2000] Mei Li, Bao Hujun, Zheng Wenting, and Peng Qunsheng. Realistic face reconstruction from photographs [J]. 计算机学报 (Chinese Journal of Computers), 2000, 23(9): 996-1002.
[Mei 2001] Mei Li, Bao Hujun, and Peng Qunsheng. Rapid customization of specific faces and muscle-driven expression animation [J]. 计算机辅助设计与图形学学报 (Journal of Computer-Aided Design & Computer Graphics), 2001, 13(12): 1077-1082.
[Morishima 1993] S. Morishima and H. Harashima. Facial animation synthesis for human-machine communication system [C]. In Proc. 5th International Conf. on Human-Computer Interaction, Volume II, ACM, New York, 1993: 1085-1090.
[Moubaraki 1996] L. Moubaraki and J. Ohya. Realistic 3D Mouth Animation Using a Minimal Number of Parameters [C]. IEEE International Workshop on Robot and Human Communication, Tsukuba, Japan, 1996: 201-206.
[Ohta 1990] Naoya Ohta. Optical flow detection by color images [J]. NEC Research and Development, 1990, 97: 78-84.
[Ohta 1996] Naoya Ohta. Uncertainty models of the gradient constraint for optical flow computation [J]. IEICE Transactions on Information and Systems, 1996, E79-D(7): 958-964.
[Ostry 1985] D.J. Ostry and K.G. Munhall. Control of rate and duration of speech movements [J]. Journal of the Acoustical Society of America, 1985, 77: 640-648.
[Pandzic 1999] I.S. Pandzic, J. Ostermann, and D. Millen. User evaluation: Synthetic talking faces for interactive services [J]. The Visual Computer, 1999, 15: 330-340.
[Papamichalis 1987] P.E. Papamichalis. Practical Approaches to Speech Coding [M]. Prentice Hall, Englewood Cliffs, NJ, 1987.
[Parke 1972] F.I. Parke. Computer generated animation of faces [D]. Master's thesis, University of Utah, Salt Lake City, UT, June 1972. UTEC-CSc-72-120.
[Parke 1974] F.I. Parke. A Parametric Model for Human Faces [D]. PhD thesis, University of Utah, Salt Lake City, UT, December 1974. UTEC-CSc-75-047.
[Parke 1975] F.I. Parke. A model for human faces that allows speech synchronized animation [J]. Journal of Computers and Graphics, 1975, 1(1): 1-4.
[Parke 1982] F.I. Parke. Parameterized models for facial animation [J]. IEEE Computer Graphics, 1982, 2(9): 61-68.
[Parke 1990] F.I. Parke (Ed.). State of the Art in Facial Animation [C]. SIGGRAPH'90 Course Notes #26, ACM, New York, August 1990.
[Parke 1991] F.I. Parke. Control Parameterization for facial animation [M]. Computer Animation, Tokyo: Springer-Verlag, 1991: 3-13.
[Parke 1996] F.I. Parke and K. Waters. Computer Facial Animation [M]. Wellesley, MA: A.K. Peters, 1996: 1-365.
[Pearce 1986] A. Pearce, B. Wyvill, G. Wyvill, and D. Hill. Speech and expression: A computer solution to face animation [C]. In Proc. Graphics Interface'86, Canadian Information Processing Society, Calgary, 1986: 136-140.
[Pelachaud 1991] C. Pelachaud. Communication and Coarticulation in Facial Animation [D]. PhD thesis, University of Pennsylvania, Philadelphia, October 1991. Technical Report MS-CIS-91-77.
[Pelachaud 1996] C. Pelachaud, N. Badler, and M. Steedman. Generating Facial Expressions for Speech [J]. Cognitive Science, 1996, 20(1): 1-46.
[Pi 1987] Pi Xin, et al. Oral Anatomy and Physiology (口腔解剖生理学) [M]. Beijing: People's Medical Publishing House, 1987: 111-115.
[Platt 1980] S.M. Platt. A system for computer simulation of the human face [D]. Master's thesis, The Moore School, University of Pennsylvania, Philadelphia, 1980.
[Platt 1981] S.M. Platt and N.I. Badler. Animating facial expressions [J]. Computer Graphics, 1981, 15(3): 245-252.
[Quackenbush 1988] S.R. Quackenbush, T.P. Barnwell III, and M.A. Clements. Objective Measures of Speech Quality [M]. Prentice Hall, Englewood Cliffs, 1988.
[Shan 2002] Shan Wei, Yao Hongxun, and Gao Wen. Classification of sequential mouth shapes in lipreading [J]. 中文信息学报 (Journal of Chinese Information Processing), 2002, 16(1): 31-36.
[Sirovich 1987] L. Sirovich and M. Kirby. Low-dimensional procedure for the characterization of human faces [J]. Journal of the Optical Society of America A, 1987, 4: 519-524.
[Song 2003] Mingli Song, Chun Chen, Jiajun Bu, and Ronghua Liang. 3D Realistic Talking Face Co-driven by Text and Speech [C]. IEEE International Conference on Systems, Man and Cybernetics, Washington, D.C., USA, 2003: 2175-2186.
[Steeneken 1992] H.J.M. Steeneken. Quality evaluation of speech processing systems [M]. In A.N. Ince (Ed.), Digital Speech Processing: Speech Coding, Synthesis and Recognition. Kluwer Academic Publishers, 1992.
[Terzopoulos 1990] D. Terzopoulos and K. Waters. Physically-based Facial Modeling, Analysis, and Animation [J]. Journal of Visualization and Computer Animation, 1990, 1(4): 73-80.
[Terzopoulos 1991] D. Terzopoulos and K. Waters. Techniques for Realistic Facial Modeling and Animation [C]. In Proceedings of Computer Animation, Geneva, Switzerland, Springer-Verlag, Tokyo, 1991: 59-74.
[Terzopoulos 1993] D. Terzopoulos and K. Waters. Analysis and Synthesis of Facial Image Sequences Using Physical and Anatomical Models [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993, 15(6): 569-579.
[Vatikiotis-Bateson 1996] E. Vatikiotis-Bateson, K.G. Munhall, M. Hirayama, Y.C. Lee, and D. Terzopoulos. The dynamics of audiovisual behavior in speech [M]. In: Speechreading by Humans and Machines (NATO-ASI Series F), D. Stork and M. Hennecke (Eds.), Berlin: Springer-Verlag, 1996: 221-232.
[Voiers 1983] W.D. Voiers. Evaluating processed speech using the diagnostic rhyme test [J]. Speech Technology, 1983: 30-39.
[Wang 2000] Wang Zhiming and Cai Lianhong. A study of the relationship between Chinese syllables and mouth shapes [C]. 9th National Conference on Multimedia Technology (NCMT'2000), Beijing, Dec. 2000.
[Wang 2001] Wang Kuiwu, Wang Xun, Dong Lanfang, et al. An MPEG-4 compliant facial animation system [J]. 计算机研究与发展 (Journal of Computer Research and Development), 2001, 38(5): 529-535.
[Wang 2003] Wang Zhiming, Cai Lianhong, and Ai Haizhou. Prediction of lip motion parameters based on support vector regression [J]. 计算机研究与发展 (Journal of Computer Research and Development), 2003, 40(11): 1561-1565.
[Wang 2004] Wang Xun, Zhang Daoyi, Dong Lanfang, et al. Design and implementation of a 3D speech animation chat room [J]. 计算机工程与应用 (Computer Engineering and Applications), 2004, 40(1): 106-108.
[Wang 2005] Wang Zhiming, Cai Lianhong, and Ai Haizhou. Text-To-Visual Speech in Chinese Based on Data-Driven Approach [J]. 软件学报 (Journal of Software), 2005, 16(6): 1054-1063.
[Waters 1987] K. Waters. A Muscle Model for Animating Three-Dimensional Facial Expression [J]. Computer Graphics (SIGGRAPH'87), 1987, 21(4): 17-24.
[Waters 1993] K. Waters and T.M. Levergood. DECface: An automatic lip-synchronization algorithm for synthetic faces [R]. DEC Cambridge Research Laboratory, 1993.
[Web 2007] http://www.dynastat.com/
[Web 2008] http://hwr.nici.kun.nl/~miami/taxonomy/node120.html
[Williams 1990] L. Williams. Performance-Driven Facial Animation [J]. Computer Graphics (ACM SIGGRAPH'90), 1990, 24(4): 235-242.
[Wohlert 2000] A.B. Wohlert and V.L. Hammen. Lip muscle activity related to speech rate and loudness [J]. Journal of Speech, Language, and Hearing Research, 2000, 43: 1229-1239.
[Wu 1994] Y. Wu, N. Magnenat-Thalmann, and D. Thalmann. A Plastic-Visco-Elastic Model for Wrinkles in Facial Animation and Skin Aging [C]. In Proc. Pacific Graphics'94, 1994: 201-214.
[Wyvill 1988] B. Wyvill, D.R. Hill, and A. Pearce. Animating speech: An automated approach using speech synthesized by rules [J]. The Visual Computer, 1988, 3(5): 277-289.
[Xu 1980] Xu Shirong. Knowledge of Putonghua Phonetics (普通话语音知识) [M]. Beijing: 文字改革出版社 (Script Reform Press), 1980.
[Xu 2004] Xu Chenghua, Wang Yunhong, and Tan Tieniu. 3D face modeling and its applications [J]. 中国图形图像学报 (Journal of Image and Graphics), 2004, 9(8): 893-903.
[Yan 1998] Yan Jie. A text-driven lip motion synthesis system [J]. 计算机工程与设计 (Computer Engineering and Design), 1998, 19(1): 31-34.
[Yan 1999a] Yan Jie. Research and practice on realistic 3D face synthesis [D]. Harbin: Harbin Institute of Technology, 1999.
[Yan 1999b] Yan Jie and Gao Wen. Model-based head motion estimation and facial image synthesis [J]. 计算机辅助设计与图形学学报 (Journal of Computer-Aided Design & Computer Graphics), 1999, 11(5): 389-394.
[Ye 2004] Ye Jing, Dong Lanfang, and Wang Xun. Speech feature extraction and clustering techniques for speech animation synthesis [J]. 微型机与应用 (Microcomputer & Its Applications), 2004, 23(8): 47-49.
[Yin 1997] B.C. Yin and W. Gao. Radial Basis Function Interpolation on Space Mesh [C]. Visual Proceedings of ACM SIGGRAPH 97, 1997: 150.
[Yin 1998] Yin Baocai and Gao Wen. Geometric modeling of facial expressions and mouth shapes using Bézier surfaces [J]. 计算机学报 (Chinese Journal of Computers), 1998, 21(Suppl.): 347-350.
[Yin 1999] Yin Baocai and Gao Wen. Model-based head motion estimation and facial image synthesis [J]. 计算机研究与发展 (Journal of Computer Research and Development), 1999, 36(1): 67-71.
[Zhang 2004] Y. Zhang, E.C. Prakash, and E. Sung. A new physical model with multilayer architecture for facial expression animation using dynamic adaptive mesh [J]. IEEE Transactions on Visualization and Computer Graphics, 2004, 10(3): 339-352.
[Zheng 2002] Zheng Huirao, Chen Shaolin, Mo Zhongxi, et al. Numerical Computation Methods (数值计算方法) [M]. Wuhan: Wuhan University Press, 2002: 310-313.
[Zu 1999] Zu Yiqing. Corpus design for a Chinese continuous speech database [J]. 声学学报 (Acta Acustica), 1999, 24(3): 236-247.
