Research on Audience-Oriented Methods for Film Affective Content Representation and Recognition
Abstract
With the explosive growth of digital audio-visual data, automatically extracting semantically rich content from such unstructured data has become a pressing challenge, and the resulting research effort has given rise to content-based video retrieval (CBVR). Affective content is an important yet frequently overlooked factor in how people understand video. As an emerging direction within CBVR, video affective computing draws on the theories of both CBVR and affective computing to understand the affective content of video. However, because of the wide "affective gap" between human emotions and low-level features, a unified theoretical framework for video affective content understanding is still lacking. Against this background, this work takes film as its study object and proposes an audience-oriented approach to representing and recognizing film affective content.
     To represent film affective content effectively while faithfully reflecting the individual emotional characteristics of the audience, an audience-oriented film emotion space is proposed. By introducing the concept of typical fuzzy emotion subspaces, the model unifies the two major schools of psychological emotion models, the discrete and the dimensional. The fuzzy C-means clustering algorithm partitions the emotion space, and Gaussian mixture models determine the emotional membership functions of the resulting typical fuzzy emotion subspaces. The film emotion space reflects each viewer's personalized emotional experience, allows typical affective states to be defined as regions of the space, and makes it convenient to compute the intensity of any affective state. Experimental results validate the modeling method and show that the emotion space can represent film affective content in an audience-oriented manner.
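     As an illustration of this modeling pipeline, the sketch below partitions a viewer's self-reported valence-arousal ratings with fuzzy C-means and fits one Gaussian mixture per partition to serve as that subspace's membership function. It is a minimal sketch, not the dissertation's implementation: the cluster count, fuzzifier, GMM size, and the synthetic ratings are all assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Partition X (n_samples, n_dims) into c fuzzy clusters."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # membership rows sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Self-reported (valence, arousal) ratings from one viewer; random points
# stand in here for the audience annotations used to build the space.
ratings = np.random.default_rng(1).uniform(-1, 1, size=(600, 2))
centers, U = fuzzy_c_means(ratings, c=6)         # six typical subspaces (assumed)

# One GMM per subspace: its density, normalized across subspaces, serves as
# the emotional membership function over the valence-arousal plane.
hard = U.argmax(axis=1)
gmms = [GaussianMixture(n_components=2, random_state=0).fit(ratings[hard == j])
        for j in range(6)]

def membership(point):
    """Fuzzy membership of a 2-D emotion coordinate in each subspace."""
    dens = np.array([np.exp(g.score_samples(point[None, :])[0]) for g in gmms])
    return dens / dens.sum()

print(membership(np.array([0.7, 0.6])).round(3))
```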
     To bridge the "affective gap" between low-level features and film affective content, a set of film affective features is designed, extracted, and selected according to theories of emotional psychology and filmmaking. The Whitney feature selection algorithm is used to form two film affective feature vectors, one describing affective valence and the other affective arousal. Comparative experiments show that the proposed feature vectors outperform existing results in classifying the positive and negative polarity of valence and arousal.
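     Whitney's method is a greedy forward selection scored by leave-one-out nearest-neighbor accuracy; the sketch below illustrates the procedure under assumptions, with placeholder feature and label data standing in for the film affective features.

```python
import numpy as np

def loo_1nn_accuracy(X, y):
    """Leave-one-out 1-nearest-neighbor accuracy (Whitney's criterion)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)                  # a sample never votes for itself
    return float(np.mean(y[D.argmin(axis=1)] == y))

def whitney_select(X, y, k):
    """Greedily add the feature that most improves the LOO 1-NN criterion."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        score, best = max((loo_1nn_accuracy(X[:, selected + [f]], y), f)
                          for f in remaining)
        selected.append(best)
        remaining.remove(best)
    return selected

# 200 film segments x 20 candidate audiovisual features, labeled with the
# polarity (0 = negative, 1 = positive) of, e.g., valence; all placeholder data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)
print(whitney_select(X, y, k=5))                 # indices of the chosen features
```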
     To detect film affective content effectively, a multilevel film summarization algorithm based on the arousal curve and the film affective tree (FAT) is proposed, which extracts the affectively salient parts of the original film. The arousal curve measures how the audience's expected arousal rises and falls with the film's affective content. Affective units (AUs) of different granularities are first located on the arousal curve and then organized level by level according to granularity to build the FAT; the nodes at each level of the tree constitute one video summary of the original film. To recognize the affective content of the AUs in a summary, two methods are proposed: one based on a genetic-algorithm-combined hidden Markov model (GA-HMM) and one based on the emotion space. The GA-HMM recognizer identifies the audience's basic emotional events; experiments show that, compared with the classic HMM, it achieves a higher recognition rate with less computation. The emotion-space method uses a multilayer perceptron and multiple linear regression to compute the emotion coordinates of each AU and, from these coordinates and the membership functions of the film emotion space, introduces a maximum membership principle and a threshold principle to represent and recognize the viewer's personalized emotional experience during the film. Experimental results demonstrate that the method effectively represents and recognizes personalized film affective content.
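     The sketch below shows one plausible reading of the unit-location step: peaks of a smoothed arousal curve mark affectively salient segments, and relaxing the peak-prominence threshold yields progressively finer granularities that can be stacked level by level into a FAT. The smoothing width, prominence levels, and synthetic curve are assumptions, not the dissertation's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def affective_units(arousal, prominences=(0.5, 0.25, 0.1), sigma=25):
    """Return, per FAT level, a list of (peak, start, end) frame index triples."""
    curve = gaussian_filter1d(np.asarray(arousal, float), sigma)
    span = np.ptp(curve)
    levels = []
    for p in prominences:                        # coarse -> fine granularity
        peaks, props = find_peaks(curve, prominence=p * span)
        levels.append(list(zip(peaks, props["left_bases"], props["right_bases"])))
    return levels

# Frame-level arousal values; a noisy sine stands in for the fitted curve.
t = np.linspace(0, 20 * np.pi, 5000)
arousal = np.sin(t) + 0.3 * np.random.default_rng(2).normal(size=t.size)
for depth, units in enumerate(affective_units(arousal)):
    print(f"FAT level {depth}: {len(units)} affective units")
```

     The two decision rules named above can also be stated compactly; the rejection threshold `theta` and the label set are hypothetical, and the membership vector is assumed to come from a model such as the emotion-space sketch earlier.

```python
import numpy as np

LABELS = ("joy", "anger", "sadness", "fear", "surprise", "neutral")  # assumed set

def recognize(mu, labels=LABELS, theta=0.4):
    """Label one affective unit from its subspace membership vector mu.

    Maximum membership principle: choose the subspace with the largest
    membership degree. Threshold principle: if even that degree is below
    theta, report the unit as undecided rather than forcing a label.
    """
    mu = np.asarray(mu)
    best = int(mu.argmax())
    return (labels[best] if mu[best] >= theta else "undecided"), float(mu[best])

print(recognize([0.05, 0.10, 0.62, 0.08, 0.10, 0.05]))   # -> ('sadness', 0.62)
```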
     Many open problems remain in film affective content representation and recognition. In emotion space modeling, the current method relies entirely on affective ratings annotated by the viewer, which places a heavy burden on the user; how to exploit the affective data of existing users to serve a new user is a key direction for future work. Because the intrinsic link between human emotion and audiovisual stimuli is still unclear, the accuracy of the current feature vectors in classifying the polarity of valence remains unsatisfactory, and more reasonable affective features must be designed with further domain knowledge. Moreover, the audience pool considered when building the audience-oriented film emotion space is still limited; to describe viewers' individual emotional characteristics more accurately, future work needs to collect affective data from a larger audience.
