Research on Theory and Methods of Robust Facial Expression Recognition under Non-uniform Illumination and Partial Occlusion
Abstract
Facial expression is an effective means by which humans convey emotion, and it is an important component of research on human-computer interaction and affective computing. Facial expression recognition systems have broad application prospects and practical value, and they are currently a research hotspot in artificial intelligence and pattern recognition worldwide. The complexity and subtlety of human expression make expression recognition highly challenging. Although processing techniques for the recognition pipeline have advanced considerably, system performance still falls short of practical deployment, and many problems require further study. One important reason is that facial expression recognition is affected by many factors, such as illumination variation, facial occlusion, and pose changes. Starting from the robustness of expression recognition, this dissertation focuses on facial expression recognition under non-uniform illumination and partial occlusion, aiming to improve recognition accuracy while strengthening the robustness of the algorithms. The work concentrates on feature extraction and expression classification for static facial expression images. The main research content and innovations are as follows:
     First, because expression changes are rich in texture information and different expression behaviors carry information at different scales, Gabor filters are used to extract facial expression features. To address the weak global representation ability and the redundancy of traditional Gabor features, an expression feature extraction method is proposed that combines Gabor multi-orientation feature fusion with block histograms. To capture local orientation information and reduce feature dimensionality, while accounting for the multi-scale nature of expression behavior, two fusion rules are proposed that fuse Gabor features of different orientations at the same scale; this effectively reduces the feature dimension and lowers the computation and memory requirements. Meanwhile, histograms effectively characterize global image features; combining the fused Gabor features with block histograms represents local expression features and their neighborhoods at multiple levels and resolutions. Experimental results show that the method extracts expression features effectively and improves recognition accuracy.
     Second, to counter the negative effect of illumination variation on expression recognition, and the drawback of traditional two-dimensional illumination preprocessing, which degrades the original image quality and discards useful discriminative information, an illumination-robust expression recognition method based on a symmetric bilinear model is proposed. The bilinear model separates the mutually independent illumination and expression information in face images captured under unknown illumination, building independent illumination and expression subspaces so that each factor can be analyzed and processed separately. By introducing the quotient-illumination concept into the bilinear framework, every test image with unknown illumination is re-rendered onto a set of known, shared illumination conditions, so that all test images are normalized. Each original expression image is then represented by the multiple re-rendered images; pooling the features of these images accumulates the discriminative information of the expression, increases the separability of the expression images, and effectively improves classification accuracy.
     Third, because partial occlusion of an individual's face hampers expression recognition, and global features are not robust to local occlusion, a feature extraction method for occluded expressions based on radial grid encoding of local Gabor features is proposed. Motivated by the finding that the receptive fields of two neighboring cells in the retina and visual cortex partially overlap, the expression image is divided into sub-blocks with 50% overlap between adjacent regions, and the Gabor features within each sub-block are encoded with a radial grid strategy. This both mimics retinal imaging and reduces the redundancy of the Gabor features. The resulting feature vectors are highly discriminative for partially occluded facial expressions.
     Fourth, because feature extraction under occlusion relies on local features, while a global-kernel support vector machine cannot process local features, is susceptible to noise and occlusion, and is therefore not robust, an expression classification method based on a support vector machine with a local summation kernel is proposed. The local summation kernel satisfies Mercer's condition, which guarantees a globally optimal solution. Each local feature extracted from an occluded expression image is fed into a local kernel of the SVM; the outputs of all local kernels are then summed and integrated, achieving robust recognition of partially occluded expressions. Experiments confirm that the proposed local RBF summation kernel and local normalized linear summation kernel SVMs are simple, effective, and easy to apply. Moreover, the method not only recognizes partially occluded expressions well, but also achieves a higher recognition rate on unoccluded expressions than the traditional global-kernel SVM.
     Finally, the work of the dissertation is summarized and directions for further research are identified.
Facial expression is an effective way for humans to express their emotions, and it is an important part of human-computer interaction and affective computing. Facial expression recognition systems have broad application prospects and practical value, and they are an active topic in artificial intelligence and pattern recognition worldwide. Recognizing facial expressions is difficult due to the complexity and subtlety of human expression. Although facial expression recognition technology has made rapid progress, system performance is not yet sufficient for practical application, and many problems still need further research. One of the most important reasons is that facial expression recognition is influenced by many factors, such as illumination, occlusion, and pose. In this dissertation, we focus on robust facial expression recognition under non-uniform illumination and partial occlusion, in order to improve the accuracy and robustness of expression recognition algorithms on static facial expression images. The main research content and innovative work are as follows:
     First, in order to extract the texture information of facial expressions and the scale information of different expression behaviors, we use Gabor filters to extract facial expression features. Gabor multi-orientation fused features are combined with block histograms to overcome the disadvantages of the traditional Gabor filter bank, whose high-dimensional features are redundant and whose global representation capacity is poor. To extract multi-orientation information and reduce the feature dimension, two fusion rules are proposed to fuse the original Gabor features of the same scale into a single feature. To represent global features effectively, the fused image is then divided into several non-overlapping rectangular units, and the histogram of each unit is computed and concatenated as the facial expression feature. Experimental results show that the method is effective for both dimension reduction and recognition performance.
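The fusion-and-histogram pipeline described above can be sketched as follows. This is a minimal numpy illustration, assuming a simplified Gabor kernel parameterisation and a max rule for orientation fusion (the dissertation proposes two fusion rules whose exact form is not given here), so treat it as a sketch rather than the author's implementation:

```python
import numpy as np

def gabor_kernel(scale, orientation, n_orient=8, ksize=15):
    """Complex Gabor kernel; the parameter choices here are illustrative."""
    f = 0.25 / (np.sqrt(2) ** scale)            # centre frequency falls with scale
    theta = orientation * np.pi / n_orient
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(f ** 2) * (xr ** 2 + yr ** 2) / 2.0)
    return envelope * np.exp(2j * np.pi * f * xr)

def filter_image(img, kernel):
    """'Same'-size FFT convolution of a real image with a complex kernel."""
    H, W = img.shape
    kh, kw = kernel.shape
    shape = (H + kh - 1, W + kw - 1)
    out = np.fft.ifft2(np.fft.fft2(img, shape) * np.fft.fft2(kernel, shape))
    return out[kh // 2:kh // 2 + H, kw // 2:kw // 2 + W]

def fused_block_histogram(img, n_scales=3, n_orient=8, grid=(4, 4), bins=16):
    """At each scale, fuse the Gabor magnitudes of all orientations (max rule,
    one plausible choice), then concatenate L1-normalised histograms of
    non-overlapping blocks of the fused response."""
    feats = []
    for s in range(n_scales):
        mags = [np.abs(filter_image(img, gabor_kernel(s, o, n_orient)))
                for o in range(n_orient)]
        fused = np.max(mags, axis=0)             # orientation fusion at this scale
        H, W = fused.shape
        bh, bw = H // grid[0], W // grid[1]
        lo, hi = fused.min(), fused.max() + 1e-9
        for i in range(grid[0]):
            for j in range(grid[1]):
                block = fused[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                hist, _ = np.histogram(block, bins=bins, range=(lo, hi))
                feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)
```

The dimension reduction is the point: for a 64×64 image with 3 scales and 8 orientations the raw magnitude responses contain 3·8·64·64 = 98,304 values, while the fused block-histogram feature above has only 3·16·16 = 768 entries.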
     Second, a novel illumination-robust facial expression recognition method based on a symmetric bilinear model is proposed to overcome the disadvantage of traditional two-dimensional illumination preprocessing methods, which degrade the quality of the input image and worsen recognition performance. The bilinear model separates the illumination and expression information contained in an expression image captured under unknown illumination, and builds separate illumination and expression subspaces so that the two factors can be analyzed and processed independently. The illumination factors are estimated from the training database and the expression factor from a test image under arbitrary illumination; the test image is then transformed into a set of expression images exhibiting the different illumination conditions of the training database. Experimental results show that the proposed method outperforms traditional illumination preprocessing methods in recognition performance.
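The separation-and-re-rendering idea can be sketched with the asymmetric variant of the Tenenbaum-Freeman bilinear model. This is an illustrative reconstruction, not the dissertation's exact procedure: selecting the best-fitting illumination by least-squares residual stands in for the quotient-illumination step, and the SVD fit assumes the training images are organised as a styles-by-contents matrix:

```python
import numpy as np

def fit_bilinear(Y, n_styles, dim):
    """Fit an asymmetric bilinear model y_sc ≈ A_s b_c by SVD.  Y stacks the
    S illumination ("style") renderings of C expression ("content") columns:
    shape (S*K, C), with K pixels per image."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    A = (U[:, :dim] * s[:dim]).reshape(n_styles, -1, dim)  # per-style maps A_s: (K, dim)
    B = Vt[:dim]                                           # content vectors b_c: (dim, C)
    return A, B

def rerender_all_styles(A, y):
    """For a test image y (K,) under unknown illumination: estimate its
    content vector under each candidate illumination, keep the best-fitting
    one, and re-render that content under every training illumination."""
    best_b, best_err = None, np.inf
    for A_s in A:
        b, *_ = np.linalg.lstsq(A_s, y, rcond=None)
        err = np.linalg.norm(A_s @ b - y)
        if err < best_err:
            best_b, best_err = b, err
    return np.stack([A_s @ best_b for A_s in A])  # one image per known illumination
```

The returned stack plays the role described above: a single test image becomes several images on known illumination "platforms", whose features can then be pooled before classification.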
     Third, a novel method for recognizing facial expressions under partial occlusion is proposed, based on a radial grid encoding strategy for local Gabor features, in order to extract effective local features that represent expressions robustly. Neurophysiological and psychovisual studies have found that two neighboring cells (in both the retina and the visual cortex) usually have overlapping receptive fields. Accordingly, in our implementation a facial expression image is first divided into local blocks with 50% overlap; each block is represented by multi-scale, multi-orientation Gabor features, and the resulting features are encoded using radial grids that imitate the structure of the human visual cortex. The proposed feature extraction retains the advantage of Gabor filters, which represent texture effectively, while overcoming their disadvantage that the outputs at neighboring pixels are highly correlated and redundant. Better recognition rates are achieved on the JAFFE database with eye occlusion and mouth occlusion. Experimental results show that the proposed local feature coding method is effective for facial expression recognition under partial occlusion.
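A minimal sketch of the 50%-overlap partition and radial grid encoding follows. It assumes a generic response map (for instance, a Gabor magnitude image) as input and mean pooling per grid cell; both the cell layout (3 rings × 8 sectors) and the pooling rule are illustrative choices, not the dissertation's exact encoding:

```python
import numpy as np

def overlapping_blocks(img, block=16):
    """Split the image into blocks whose neighbours overlap by 50%."""
    step = block // 2
    H, W = img.shape
    return [img[i:i + block, j:j + block]
            for i in range(0, H - block + 1, step)
            for j in range(0, W - block + 1, step)]

def radial_grid_encode(block, n_rings=3, n_sectors=8):
    """Pool a block's responses over a radial grid (rings x sectors) centred
    on the block, loosely imitating a retinotopic sampling layout: one mean
    value per cell."""
    h, w = block.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - cy, x - cx)
    theta = np.arctan2(y - cy, x - cx)                     # in (-pi, pi]
    edges = np.linspace(0.0, r.max() + 1e-9, n_rings + 1)
    ring = np.clip(np.searchsorted(edges, r, side='right') - 1, 0, n_rings - 1)
    sector = (((theta + np.pi) / (2 * np.pi)) * n_sectors).astype(int) % n_sectors
    code = np.empty(n_rings * n_sectors)
    for k in range(code.size):
        cell = (ring == k // n_sectors) & (sector == k % n_sectors)
        code[k] = block[cell].mean() if cell.any() else 0.0
    return code

def occlusion_robust_features(response_map):
    """Concatenate radial-grid codes over all 50%-overlapping blocks."""
    return np.concatenate([radial_grid_encode(b)
                           for b in overlapping_blocks(response_map)])
```

Each 16×16 block collapses to 24 pooled values, so the per-pixel redundancy of neighbouring Gabor responses is removed while the block layout keeps the features local, which is what makes them usable under partial occlusion.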
     Fourth, an expression classification method based on a support vector machine with a local summation kernel is proposed to overcome the disadvantage of the conventional support vector machine with a global kernel, which cannot process local features and is not robust to occlusion. The proposed method, being based on local features, is robust to occlusion because partial occlusion affects only specific local features. To process local features effectively in a support vector machine, local kernels are applied to the local features, and the summation of the local kernel outputs is used for integration. The effectiveness and robustness of the proposed method are validated by comparison with a global-kernel support vector machine: the recognition rate remains high under large occlusion, whereas that of the global-kernel SVM decreases drastically.
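The local summation kernel itself is easy to sketch. The per-part RBF below corresponds to the local radial-basis variant mentioned above; the normalized linear variant would replace the exponential with a cosine similarity per part. The split into equal parts is an assumption for illustration:

```python
import numpy as np

def local_sum_kernel(X, Y, n_parts, gamma=0.5):
    """Local summation kernel: split each sample's feature vector into
    n_parts local features and sum a per-part RBF kernel.  A sum of Mercer
    kernels is itself Mercer, so the SVM dual stays convex and admits a
    global optimum."""
    G = np.zeros((X.shape[0], Y.shape[0]))
    for xp, yp in zip(np.array_split(X, n_parts, axis=1),
                      np.array_split(Y, n_parts, axis=1)):
        d2 = ((xp[:, None, :] - yp[None, :, :]) ** 2).sum(axis=-1)
        G += np.exp(-gamma * d2)              # local RBF on this part only
    return G
```

Such a Gram matrix can be handed to any SVM solver that accepts precomputed kernels (for example, scikit-learn's `SVC(kernel='precomputed')`). The robustness intuition is visible in the sum: an occluded region perturbs only the summands of the affected parts, so the kernel value degrades gracefully instead of collapsing as a single global kernel would.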
     Finally, the main content of this dissertation is summarized, and further research directions are discussed.
