Research on Methods for Facial Expression Recognition and Affective Experience Modeling Based on Cooperative Interaction
Abstract
With the development of intelligent computing, people have grown accustomed to solving problems in social life through human-computer interaction. If computational agents could understand and grasp humans' internal affective experience, the rationality and collaboration of human-computer interaction would improve, enabling vivid and seamless interaction. Given the complexity of affective computing and of research in real settings, this dissertation builds on real-time analysis of facial expressions to investigate the linking mechanism between an observed subject's external facial expression and internal affective experience, and obtains the following main results:
A tensor-based coordinate space transformation method is proposed that maps facial feature vectors from pixel space to parameter space, unifies vector dimensions, reduces data-processing complexity, and improves the efficiency and accuracy of real-time analysis. The soundness of the tensor space transformation is proved theoretically, showing that the geometric and physical characteristics of the facial feature vectors remain invariant under the transformation. Experiments verify that the method effectively improves the precision of facial expression recognition and the efficiency of real-time analysis.
From the perspective of cognitive intelligence, an organizational environment and person-independent models for facial expression analysis are defined. Participants in the organizational environment are bound by organizational norms, which enhances the operability and credibility of the analysis; each person-independent model performs facial expression recognition for a specific subject in the environment, and the models refine their own analytical ability by iteratively transmitting the subjects' facial expression cluster structures through cooperative interaction. The facial expression recognition algorithm improved by cooperative interaction converges faster and yields a globally optimized clustering distribution of facial expressions.
Building on the person-independent models, Bayesian networks of facial expressions are constructed to analyze the motivating factors behind participants' facial expressions and to perform predictive inference. Cooperative interaction is introduced to improve the structure-learning algorithm of the facial expression networks, raising the quality of the learned network topology. Within the constructed networks, the environmental factors influencing a subject's facial expression can be analyzed in real time, and the facial expression state the subject is most likely to display at the next moment can be predicted. This lays the groundwork for analyzing affective experience through facial expressions.
Based on the above work, an analysis model is designed to study the coherence distribution of affective experience through facial expressions. The model detects whether a subject is hiding emotional characteristics that should be expressed, that is, whether the facial expression is consistent with the internal affective experience. Facial expression features and facial expression states are defined as affective evidence, and a collaborative-reliance evaluation algorithm is combined with them to analyze the coherence of the affective experience, producing a distribution diagram of the subject's affective experience. This distribution is the most "personalized" cognitive result of facial expression analysis and affective computing.
In summary, this work explores in depth the relationship between humans' external facial expressions and their internal affective experience, and applies facial-expression-based affective computing to human-computer interaction, improving the "emotional" cognitive ability of intelligent computers and laying a foundation for harmonious human-computer interaction.
With the rapid development of intelligent computing, people have become accustomed to solving problems through Human-Computer Interaction (HCI). Understanding and capturing humans' internal affective experience is an important issue, because it can improve the rationality and collaboration of HCI and enable vivid, seamless interaction. Considering the complexity of affective computing, we study the underlying mechanism linking external facial expressions and internal affective experience on the basis of real-time facial expression analysis, and obtain the following main results:
In this paper, a novel approach to facial expression analysis in a parameter space defined by a metric tensor is presented, together with a summary of the related methodologies. Based on the metric tensor and its differential operators, facial features are transformed from the image pixel space into the parameter space under a unified formalization, with their geometric and physical characteristics preserved. Three groups of experiments, conducted on standard facial expression databases and in real-time analysis, evaluate the effect of the parameter space. The results suggest that the parameter space lowers the data formalization requirements, improves the precision of facial expression recognition, and supports real-time facial expression analysis.
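To make the geometry-preservation idea concrete, the following minimal NumPy sketch (our illustration, not the thesis implementation) shows how a metric tensor induced by a pixel-to-parameter map can keep distances between facial feature points unchanged after the transformation; the linear map `A` and the landmark coordinates are hypothetical placeholders.

```python
import numpy as np

def induced_metric(A: np.ndarray) -> np.ndarray:
    """Metric tensor G on the parameter space such that distances measured with G
    equal Euclidean distances in the original pixel space (pullback metric)."""
    A_inv = np.linalg.inv(A)
    return A_inv.T @ A_inv

def metric_distance(u: np.ndarray, v: np.ndarray, G: np.ndarray) -> float:
    d = u - v
    return float(np.sqrt(d @ G @ d))

# Hypothetical pixel-to-parameter transform: rotation plus anisotropic scaling.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([0.01, 0.02])                 # e.g. pixels -> normalized parameters
A = R @ S

# Two facial landmarks in pixel coordinates and their parameter-space images.
x1, x2 = np.array([120.0, 80.0]), np.array([140.0, 95.0])
u1, u2 = A @ x1, A @ x2

G = induced_metric(A)
print(np.linalg.norm(x1 - x2))            # distance in pixel space
print(metric_distance(u1, u2, G))         # same distance recovered in parameter space
```

The two printed values coincide, which is the sense in which geometric characteristics survive the change of coordinates under this simplified linear assumption.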
From the perspective of cognitive intelligence, an organizational environment and person-independent models for facial expression analysis are proposed. Because the participants are bound by organizational norms, the operability and reliability of the analysis are enhanced. Each person-independent model is responsible for one participant in the organizational environment and improves its analytical ability through the models' cooperative interactions and the iterative transmission of the facial expression cluster structure. With cooperative interaction, the recognition algorithm converges faster and yields a globally optimal clustering distribution of facial expressions.
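One way to picture the cooperative interaction is the sketch below, an assumption-laden stand-in rather than the thesis algorithm: each person-independent model runs local k-means steps on its own participant's (synthetic) expression features, and the models periodically fuse their cluster centroids so the cluster structure is shared across the organization.

```python
import numpy as np

rng = np.random.default_rng(0)

def lloyd_step(X, centroids):
    """One local k-means update on one participant's feature matrix X."""
    labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    new_c = centroids.copy()
    for k in range(len(centroids)):
        if np.any(labels == k):
            new_c[k] = X[labels == k].mean(axis=0)
    return new_c

# Synthetic stand-ins: three participants, each expressing the same three
# expression classes with individual noise.
prototypes = np.array([[0.0, 0.0], [3.0, 3.0], [0.0, 3.0]])
participants = [np.vstack([p + rng.normal(scale=0.4, size=(40, 2)) for p in prototypes])
                for _ in range(3)]

K = len(prototypes)
init = participants[0][rng.choice(len(participants[0]), size=K, replace=False)]
centroids = [init.copy() for _ in participants]      # every model starts from the same structure

for _ in range(20):
    centroids = [lloyd_step(X, c) for X, c in zip(participants, centroids)]
    fused = np.stack(centroids).mean(axis=0)          # cooperative interaction: exchange and fuse
    centroids = [fused.copy() for _ in participants]

print(np.round(fused, 2))                             # consensus facial-expression cluster centres
```

Under these assumptions the fusion step plays the role of the iterative transmission of cluster structure: each model's local clustering is pulled toward a shared, group-level solution.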
Building on the person-independent models and Bayesian networks, facial expression networks are proposed for analyzing environmental factors and performing predictive inference. Cooperative interaction is also applied to the structure learning of the facial expression networks, and the improved algorithm raises the quality of the network topology. Within the facial expression networks, the main environmental causes of a facial expression can be analyzed, and the model can predict which facial expression is most likely to be displayed next. This framework allows affective experience to be modeled on the basis of facial expressions.
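The two kinds of query mentioned here can be illustrated with a toy discrete Bayesian network. The sketch below uses hypothetical nodes and conditional probability tables (not the networks learned in the thesis): a diagnostic query asks which environmental factor best explains an observed expression, and a predictive query asks which expression is most likely next.

```python
ENV = ["task_pressure", "social_praise"]          # hypothetical environmental factors
EXPR = ["neutral", "smile", "frown"]              # hypothetical expression states

P_env = {"task_pressure": 0.5, "social_praise": 0.5}
P_expr_given_env = {
    "task_pressure": {"neutral": 0.4, "smile": 0.1, "frown": 0.5},
    "social_praise": {"neutral": 0.3, "smile": 0.6, "frown": 0.1},
}
P_next_given_expr = {
    "neutral": {"neutral": 0.6, "smile": 0.2, "frown": 0.2},
    "smile":   {"neutral": 0.3, "smile": 0.6, "frown": 0.1},
    "frown":   {"neutral": 0.3, "smile": 0.1, "frown": 0.6},
}

def joint(env, expr, nxt):
    """Joint probability under the chain Environment -> Expression -> NextExpression."""
    return P_env[env] * P_expr_given_env[env][expr] * P_next_given_expr[expr][nxt]

observed = "frown"

# (a) Diagnostic query: which environmental factor most plausibly caused the observation?
post = {e: sum(joint(e, observed, n) for n in EXPR) for e in ENV}
z = sum(post.values())
print({e: round(p / z, 3) for e, p in post.items()})

# (b) Predictive query: which expression is most likely at the next moment?
pred = {n: sum(joint(e, observed, n) for e in ENV) for n in EXPR}
print(max(pred, key=pred.get))
```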
Based on the above studies, we design an affective experience model to analyze the coherence distribution. The model determines whether a participant hides an emotion, that is, whether the facial expression and the affective experience are internally consistent. Since the person-independent model readily collects facial features and effectively performs facial expression recognition and prediction, facial features and facial expression states are defined as affective evidence. Combined with a collaborative-reliance evaluation algorithm, the model analyzes the affective experience and produces a coherence distribution diagram. This distribution is regarded as the most personalized result of facial expression analysis and affective computing.
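As a rough illustration of fusing two sources of affective evidence into a coherence measure, the sketch below is a hypothetical stand-in for the collaborative-reliance evaluation (with made-up numbers, not the thesis algorithm): it combines feature-level and state-level emotion distributions with Dempster's rule over singleton hypotheses and reads the residual conflict as incoherence between what the face shows and what is presumed to be felt.

```python
EMOTIONS = ["happy", "neutral", "sad"]

feature_evidence = {"happy": 0.70, "neutral": 0.20, "sad": 0.10}  # from facial features (illustrative)
state_evidence   = {"happy": 0.15, "neutral": 0.25, "sad": 0.60}  # from predicted expression states (illustrative)

def fuse(m1, m2):
    """Dempster combination restricted to singleton hypotheses:
    returns the fused masses and the conflict mass K."""
    conflict = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    agreement = 1.0 - conflict
    fused = {e: m1[e] * m2[e] / agreement for e in m1}
    return fused, conflict

fused, conflict = fuse(feature_evidence, state_evidence)
print("fused affective distribution:", {e: round(p, 3) for e, p in fused.items()})
print("coherence (1 - conflict):", round(1.0 - conflict, 3))  # low value: expression may hide the felt emotion
```

In this toy setting a low coherence score flags exactly the situation the model is meant to detect: the two evidence sources disagree about the subject's affective state.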
In summary, this dissertation studies how to analyze internal affective experience through external facial expressions, on the basis of which affective computing can be improved and refined. The work also lays a foundation for harmonious HCI.