Facial Expression Recognition Based on Facial Muscle Movement
Abstract
Affective computing is an important direction for the future of computer science: enabling computers to understand human feelings and emotions, and to interact with people in an emotionally aware way. The simplest and most direct entry point for this research is the analysis of human facial expressions.
     Because human facial expressions are produced by facial muscle movement, the study of expressions should start from the concrete forms of that muscle motion and track it dynamically. This thesis proposes a facial expression recognition method based on muscle movement. The work builds on Ekman's FACS; psychological studies have shown that human expressions correspond to fixed patterns of facial muscle movement, independent of age, gender, race, education, and other factors. The thesis first briefly introduces the Action Units (AUs) associated with the basic facial expressions, then, building on these AU movements, carries out a series of studies with the following results:
     1. A fast, automatic method for locating facial feature points. Some current facial expression research relies on manually annotated feature points, a process that injects the annotator's subjective judgment of the current expression, while existing automatic methods are too slow to satisfy a real-time analysis system. The proposed method first screens the target region for pixels with strong texture features, then searches among these candidates for the true facial feature points. Experiments show the method is fast and accurate enough for a real-time system.
     2. A motion-template-based method for recognizing facial AU movements. Most AU movements either lack distinct feature points or are hard to recognize reliably by tracking feature points. Motion templates model the motion itself and its history, which allows the required AU movements to be identified accurately. For each AU of interest, a dedicated classifier was trained with a Boosting algorithm; these classifiers recognize the AU movements accurately.
     3. A new method for recognizing head movement and pose. Traditional methods recognize head movement by detecting and tracking the eye region, which is computationally expensive or requires special hardware support. This thesis proposes tracking the nostrils instead, which are easier to detect and track, and presents a method that recognizes not only nodding and head shaking but also bowing the head and turning the face aside.
     4. The design of a real-time facial expression recognition system, together with an attempt to grade the intensity of the expression "happy". The muscle-movement recognition results described above are fed into a BP neural network to classify expressions, with good results. Fuzzy theory is then applied: once a specific expression is recognized, its intensity is measured by analyzing the MHI, also with satisfactory results.
Affective computing is an important field that computer science will focus on in the future. It means making computers understand human feelings and emotions, and interact with people in an emotionally expressive way. The simplest and most direct way to begin this research is with the analysis of human facial expressions.
     Human facial expression is formed by facial muscle movement. To study human facial expressions, we should start from the specific movements of the facial muscles and focus on the movement itself. We therefore propose a method for studying human facial expressions based on facial muscle movement. Our work builds on Ekman's FACS (Facial Action Coding System). Psychologists' studies have demonstrated that human facial expressions correspond to fixed patterns of muscle movement that are not subject to age, gender, race, education, or other factors. In this thesis, we first briefly introduce a number of Action Units (AUs) associated with the basic facial expressions. Based on these Action Units, we conducted the following research:
     1. We propose an approach that recognizes facial feature points quickly and automatically. Much current research is based on manually marked facial feature points, but hand-labeling injects the researcher's subjective judgment of the current expression, and existing automatic methods are too slow to meet the requirements of a real-time analysis system. Our approach first screens the ROI (region of interest) to select all candidate pixels with rich texture information, and then filters these candidates to find the true facial feature points. Experiments show that the method has a high recognition rate and can meet the needs of a real-time system.
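As a rough illustration of this screening step, the sketch below keeps only pixels whose gradient magnitude falls in the top few percent, as a stand-in for the texture measure. The function name, the `keep_percent` knob, and the percentile criterion are our assumptions, not the thesis's actual implementation.

```python
import numpy as np

def screen_texture_candidates(gray, keep_percent=5.0):
    """Screen a region for candidate feature points: keep only pixels whose
    gradient magnitude (a crude texture measure) lies in the top
    keep_percent of the region. Illustrative sketch only."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)                        # per-pixel gradient strength
    # Guard so a perfectly flat region yields no candidates.
    thresh = max(np.percentile(mag, 100.0 - keep_percent), 1e-9)
    ys, xs = np.nonzero(mag >= thresh)
    return list(zip(ys.tolist(), xs.tolist()))    # (row, col) candidates
```

The true feature points would then be searched for only among these candidates, which is what makes the two-stage scheme fast enough for real time.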
     2. We propose an approach that recognizes Action Units based on motion templates. Most AUs either have no obvious feature points, or their feature points are difficult to identify and track. Motion templates model the movement itself and its history; using them, we can accurately identify the AU movements of interest. With the Boosting algorithm, we trained dedicated classifiers for the AUs needed to recognize facial expressions, and these classifiers performed very well.
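The motion-history update at the core of motion templates can be sketched as follows (after Bobick and Davis): pixels moving now are stamped with the current time, and pixels whose stamp has gone stale decay to zero. The function signature is illustrative.

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration):
    """One update step of a motion history image (MHI): moving pixels get
    the current timestamp; pixels last seen moving more than `duration`
    ago are cleared to zero."""
    out = mhi.copy()
    out[motion_mask] = timestamp                       # stamp current motion
    out[~motion_mask & (out < timestamp - duration)] = 0.0  # decay stale motion
    return out
```

In the pipeline described above, the resulting MHI (or gradients derived from it) would be what the boosted per-AU classifiers consume.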
     3. We propose a new method for identifying head movement. Traditional methods identify head movement by recognizing and tracking the movement of the eye region. However, locating the eye region is laborious: it requires heavy computation, or special device support, to identify and track the eyes accurately. We propose locating and tracking the nostrils instead, which are easier to identify and track than the eyes. Our approach recognizes not only nodding and shaking, but also bowing the head and turning the face aside.
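A minimal sketch of how the four gestures might be separated from a nostril-midpoint trajectory: oscillation with little net displacement reads as a nod (vertical) or a shake (horizontal), while sustained drift reads as bowing or turning the face aside. The thresholds and decision rules here are illustrative assumptions, not the thesis's classifier.

```python
import numpy as np

def classify_head_movement(track, wobble=2.0):
    """Classify a head gesture from a list of (x, y) nostril-midpoint
    positions over time. Illustrative sketch only."""
    pts = np.asarray(track, dtype=float)          # shape (t, 2), as (x, y)
    dx, dy = np.diff(pts[:, 0]), np.diff(pts[:, 1])

    def oscillates(d):
        s = np.sign(d[np.abs(d) > 1e-6])          # directions of real motion
        # Back-and-forth motion with little net displacement.
        return np.sum(np.diff(s) != 0) >= 2 and abs(d.sum()) < wobble

    if np.abs(dy).sum() >= np.abs(dx).sum():      # vertical motion dominates
        return "nod" if oscillates(dy) else "bow"
    return "shake" if oscillates(dx) else "turn"
```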
     4. We designed a real-time facial expression recognition system, and attempted to grade the intensity of the recognized expression "happy". Having established reliable methods for recognizing facial muscle movement, we feed the recognized movements into a BP (backpropagation) neural network, whose output classifies expressions accurately. We also apply fuzzy theory: once "happy" has been recognized, its degree is measured by analyzing the MHI (motion history image), with satisfactory results.
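The fuzzy grading step can be sketched with triangular membership functions over a normalized motion-energy score derived from the MHI; the grade names and breakpoints below are illustrative assumptions, not the thesis's actual membership functions.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership with breakpoints a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def happiness_level(energy):
    """Grade a recognized 'happy' expression by a motion-energy score
    in [0, 1] (e.g. derived from the MHI); returns the grade with the
    highest membership. Illustrative sketch only."""
    grades = {
        "slight":   triangular(energy, -0.01, 0.2, 0.5),
        "moderate": triangular(energy, 0.2, 0.5, 0.8),
        "intense":  triangular(energy, 0.5, 0.8, 1.01),
    }
    return max(grades, key=grades.get)
```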