Research on Illumination Normalization Techniques for Face Verification
Abstract
At present, face verification suffers from a hard-to-overcome weakness with respect to ambient illumination, mainly because the variation that lighting changes introduce into face images can be even larger than the variation between images of different individuals. Moreover, when the ambient illumination at verification time differs from that at enrollment, recognition performance drops sharply and can no longer meet the needs of practical systems. To improve the robustness of practical systems to illumination changes, this thesis proposes a multiscale algorithm, based on anisotropic diffusion, for extracting illumination-invariant facial feature images. Its distinguishing feature is that, to address the illumination problem in face images, a new local-inhomogeneity descriptor is introduced into the traditional anisotropic diffusion algorithm and a new conduction coefficient is proposed to remove that algorithm's halo effect, yielding a new anisotropic diffusion algorithm. Embedding this algorithm into the generalized quotient image method gives the final procedure for obtaining illumination-invariant feature images. The main research contents of this thesis are summarized as follows:
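For orientation, the two classical building blocks named above can be written out; these are background formulas, not the thesis's modified versions. Perona-Malik anisotropic diffusion evolves an image I according to

    \frac{\partial I}{\partial t} = \operatorname{div}\bigl( g(\lVert\nabla I\rVert)\,\nabla I \bigr),

and the quotient-image idea assumes a Lambertian image I = R \cdot L (reflectance times slowly varying illumination), so that dividing the image by a smoothed copy \hat{I} = F * I largely cancels the illumination:

    Q(x,y) = \frac{I(x,y)}{\hat{I}(x,y)} \approx \frac{R(x,y)}{\hat{R}(x,y)}.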
     ①This thesis studies in depth the gradient descriptor used in traditional anisotropic diffusion to describe the gray-level gradient of an image. To remedy its shortcomings in describing the gradient, two classical descriptors are introduced: 1) the spatial gradient (recalled in the sketch after this list) is used to describe the gray-level gradient of the image, and is then improved to address its weakness in characterizing the gradient direction; 2) a local-inhomogeneity descriptor is introduced specifically to strengthen the description of the inconsistency between neighboring regions in different directions, from which the direction of gradient change is obtained.
     ②The conduction coefficient of the traditional method (its classical forms are also recalled after this list) tends to sharpen the edges of the diffused image. Although this is a desirable property for edge extraction, it is harmful when dealing with illumination variation in face images, because it causes the processed image to be contaminated with a large amount of noise. This thesis improves the conduction coefficient to address this problem, which substantially reduces the noise in the processed image and greatly improves how the new anisotropic diffusion method handles illumination variation.
     ③Traditional anisotropic diffusion is implemented as an iterative diffusion process, and the iteration runs without any constraints; applying it directly to face images therefore easily causes over-adaptation. For this reason, this thesis introduces a specific diffusion constraint into the new anisotropic diffusion method, making it better suited to the illumination problem in face images. A baseline form of the unconstrained iteration is sketched after this list.
     ④The illumination-invariant feature image method proposed in this thesis is compared with other mainstream illumination-invariant feature extraction methods on the standard Yale B and CMU PIE face databases. The results show that the proposed improved algorithm has clear advantages.
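As background for points ① to ③, the classical quantities being replaced or constrained are the plain spatial gradient and the two standard Perona-Malik conduction functions (with contrast threshold K and s = \lVert\nabla I\rVert); the thesis's local-inhomogeneity descriptor, new conduction coefficient and diffusion constraint are not reproduced here:

    \nabla I = (I_x, I_y), \qquad \lVert\nabla I\rVert = \sqrt{I_x^{2} + I_y^{2}}, \qquad \theta = \arctan\!\left(I_y / I_x\right),

    g_1(s) = \exp\!\bigl(-(s/K)^{2}\bigr), \qquad g_2(s) = \frac{1}{1 + (s/K)^{2}}.

A minimal Python sketch of the unconstrained iteration mentioned in point ③, using the classical explicit four-neighbour scheme with the exponential conduction function, is given below; the function name and parameter values are illustrative assumptions:

    import numpy as np

    def perona_malik(img, n_iter=20, kappa=30.0, lam=0.2):
        # Classical explicit Perona-Malik scheme (baseline only; the thesis's
        # new conduction coefficient and diffusion constraint are not shown).
        u = img.astype(np.float64)
        for _ in range(n_iter):
            # Differences toward the four nearest neighbours
            # (circular boundaries for brevity).
            d_n = np.roll(u, -1, axis=0) - u
            d_s = np.roll(u, 1, axis=0) - u
            d_e = np.roll(u, -1, axis=1) - u
            d_w = np.roll(u, 1, axis=1) - u
            # Exponential conduction coefficient g(s) = exp(-(s/K)^2).
            c_n = np.exp(-(d_n / kappa) ** 2)
            c_s = np.exp(-(d_s / kappa) ** 2)
            c_e = np.exp(-(d_e / kappa) ** 2)
            c_w = np.exp(-(d_w / kappa) ** 2)
            # Explicit update; lam <= 0.25 keeps this scheme stable.
            u += lam * (c_n * d_n + c_s * d_s + c_e * d_e + c_w * d_w)
        return u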
     The algorithm proposed in this thesis can effectively extract, in a multiscale space, face structural feature images that do not vary with illumination; it requires no complicated modeling of illumination variation and places no special requirements on training samples. The algorithm preserves edges well in the low-frequency illumination domain and achieves good results even under large illumination changes.
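As a rough illustration of how such a smoother plugs into a multiscale quotient-image pipeline, the sketch below uses a Gaussian filter as a stand-in; the scale set, the log-quotient combination and the function name are illustrative assumptions rather than the thesis's generalized quotient image method:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_self_quotient(img, sigmas=(1.0, 2.0, 4.0), eps=1e-6):
        # Divide the face by smoothed copies of itself (here in the log
        # domain) so that the slowly varying illumination component cancels.
        # A Gaussian stands in for the smoother; the thesis instead embeds
        # its constrained anisotropic diffusion in a generalized quotient
        # image framework.
        img = img.astype(np.float64) + eps
        out = np.zeros_like(img)
        for sigma in sigmas:
            smoothed = gaussian_filter(img, sigma=sigma) + eps
            out += np.log(img) - np.log(smoothed)  # log-quotient at this scale
        return out / len(sigmas)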
