Research on Modeling and Realization of Artificial Emotion for a Humanoid Head Robot
Abstract
As the applications of humanoid robots continue to expand, the demand for humanoid robots with "emotion" is growing. The humanoid head robot is an important direction in humanoid robotics for realizing human-robot emotional interaction. Emotion can improve a robot's usability and credibility, and provides the user with feedback on the robot's internal state, goals, and intentions. Abroad, research in this direction is still being explored and refined; in China, it has concentrated mainly on the theory of emotion models, and few results on applying emotion models to robots have been reported. This work uses techniques from humanoid head robotics to build a hardware and software research platform, and studies the modeling and realization of artificial emotion for a humanoid head robot through theory, simulation, and experiment. The research lays a foundation for the future coexistence of humans and robots and for communication between them.
     Starting from the cognitive perspective of emotional psychology, an artificial emotion model is established for the humanoid head robot, in which artificial emotion is realized through a mathematical model of emotional expression. A coefficient matrix in the model reflects the degree to which personality influences emotion, and artificial emotion is described in terms of emotional states and emotional transitions. Emotional interaction is modeled with a hierarchical extended finite state machine (EFSM) built on finite state machine theory. The behavior of the interaction model is carried out mainly by the bottom-level state machine, and a defined set of state parameter variables provides the basis for controlling the robot's behavior. In the model, the emotional transitions of the emotion FSM are studied through its state transition table and state transition matrix; using the relationship between finite state machines and Markov chains, the stationary probability distribution of the emotion FSM is computed; and the emotional behavior of the state machine is expressed through transition paths in the state transition graph. Simulations based on the artificial emotion model verify its correctness and provide a basis for the experimental study.
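The FSM-to-Markov-chain link described above can be sketched as follows: treat the emotion FSM's state transition matrix as a Markov transition matrix and iterate it to a stationary probability distribution. The four emotion states and all transition probabilities below are hypothetical placeholders for illustration, not values from the thesis.

```python
# Emotion states of a toy emotion FSM (hypothetical, for illustration only).
STATES = ["neutral", "happy", "sad", "angry"]

# P[i][j] = probability of moving from state i to state j; each row sums to 1.
P = [
    [0.70, 0.15, 0.10, 0.05],
    [0.30, 0.60, 0.05, 0.05],
    [0.25, 0.05, 0.60, 0.10],
    [0.30, 0.05, 0.15, 0.50],
]

def stationary_distribution(P, iterations=1000):
    """Power iteration: repeatedly apply pi <- pi * P until it converges."""
    n = len(P)
    pi = [1.0 / n] * n                      # start from the uniform distribution
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary_distribution(P)
assert abs(sum(pi) - 1.0) < 1e-9            # a probability distribution sums to 1
for state, p in zip(STATES, pi):
    print(f"{state}: {p:.3f}")
```

Because this toy chain is irreducible and aperiodic, the iteration converges to a unique stationary distribution, which characterizes how often the state machine dwells in each emotion over the long run.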
     For external emotional stimuli, the extraction and recognition of visual and auditory signals are studied. In the visual channel, feature extraction and recognition target human facial expressions: expression features are extracted by integral projection and knowledge-based methods, and expressions are recognized with a fuzzy neural network. In building the network structure, fuzzy clustering adjusts the center parameters and the least squares method adjusts the connection weights. Speech is recognized with a continuous Gaussian-mixture hidden Markov model. Endpoint detection of the speech signal is performed with short-time average energy and the short-time zero-crossing rate; 12th-order Mel-frequency cepstral coefficients (MFCC) and short-time energy serve as the static feature parameters, with their first-order differences as the dynamic feature parameters.
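The endpoint-detection step described above can be sketched in a few lines: frame the speech signal, then mark a frame as speech when its short-time energy or short-time zero-crossing rate exceeds a threshold. The frame sizes and thresholds below are illustrative assumptions, not the parameters used in the thesis.

```python
import math

def frame_signal(x, frame_len=256, hop=128):
    """Split a sample list into overlapping frames."""
    return [x[i:i + frame_len] for i in range(0, len(x) - frame_len + 1, hop)]

def short_time_energy(frame):
    return sum(s * s for s in frame)

def zero_crossing_rate(frame):
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)

def detect_speech_frames(x, energy_thresh=1.0, zcr_thresh=0.1):
    """A frame counts as speech if its energy OR its ZCR exceeds its threshold
    (energy catches voiced sounds; ZCR helps catch weak fricatives)."""
    return [short_time_energy(f) > energy_thresh or
            zero_crossing_rate(f) > zcr_thresh
            for f in frame_signal(x)]

# Toy signal: silence, then a loud "voiced" sine burst, then silence again.
silence = [0.0] * 512
burst = [math.sin(2 * math.pi * 200 * n / 8000) for n in range(1024)]
flags = detect_speech_frames(silence + burst + silence)
print(flags)  # speech frames cluster in the middle of the signal
```

In practice a real detector also applies hysteresis (separate onset/offset thresholds and minimum durations), but the two features shown are the ones the abstract names.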
     Based on the humanoid head robot "H&Frobot-II", a humanoid head robot "H&Frobot-III" with visual and auditory functions is constructed. The robot comprises three parts: the robot body, a motion control system, and a perception sensor system. The body realizes the motion of the eyeballs, eyelids, jaw, and other components; based on the Facial Action Coding System (FACS), the robot's facial expressions and dynamic mouth shapes are produced by controlling the motion of feature points on the flexible skin. A CCD sensor and a speech-recognition microcontroller are integrated on the robot to realize its visual and auditory functions.
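One way to picture the FACS-based control described above is a lookup from a basic expression to displacement targets for feature points on the flexible skin, which a controller then converts to actuator commands. The point names, AU annotations, displacement values, and servo pulse range below are all hypothetical illustrations, not the robot's real calibration data.

```python
# Expression -> {feature point: normalized displacement in [-1, 1]}.
# All names and values are hypothetical, loosely inspired by FACS action units.
EXPRESSION_TARGETS = {
    "neutral":  {"brow_inner": 0.0, "lip_corner": 0.0, "jaw": 0.0},
    "smile":    {"brow_inner": 0.0, "lip_corner": 0.8, "jaw": 0.1},   # ~AU12
    "surprise": {"brow_inner": 0.9, "lip_corner": 0.0, "jaw": 0.7},   # ~AU1+AU26
    "sadness":  {"brow_inner": 0.6, "lip_corner": -0.7, "jaw": 0.0},  # ~AU1+AU15
}

def actuator_commands(expression, pulse_range=(1000, 2000)):
    """Map normalized feature-point targets to servo pulse widths (microseconds)."""
    lo, hi = pulse_range
    mid, half = (lo + hi) / 2, (hi - lo) / 2
    return {point: round(mid + half * d)
            for point, d in EXPRESSION_TARGETS[expression].items()}

print(actuator_commands("smile"))
# -> {'brow_inner': 1500, 'lip_corner': 1900, 'jaw': 1550}
```

Dynamic mouth shapes for speech would follow the same pattern, with per-viseme targets interpolated over time instead of static expression targets.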
     Human-robot emotional interaction experiments were then conducted with the humanoid head robot. In the emotion reproduction experiments, the robot produced six basic facial expressions (neutral, smile, surprise, disgust, sadness, and anger) and dynamic mouth shapes, verifying its capacity for emotional expression. Building on these, human-robot emotional interaction experiments were performed: communicating through human speech and facial expressions, the robot made corresponding responses. The results demonstrate the robot's capability for human-robot emotional interaction and verify the correctness and effectiveness of the artificial emotion model.
     The results of this research on modeling and realizing artificial emotion for a humanoid head robot show that the artificial emotion model satisfies the needs of human-robot interaction well. They provide a reference for further research on, and practical application of, the "human-like" characteristics of robots.
     This work was supported by the National High-Tech R&D Program of China ("863 Program") project "Integrated Design and Basic Technology Verification of a Full-Body Humanoid Robot System with Expression Intelligence" (2006AA04Z201) and by the Harbin Institute of Technology Interdisciplinary Research Fund project "Research on a Humanoid Head Robot with Six Facial Expressions and Vision and Its Behavior" (HIT.DM.2002.0.6).