Research on Adaptive Beautification and Rendering of Facial Images
Abstract
Facial image beautification and rendering is a research hotspot in the emerging field of computational photography. It is concerned not only with improving image quality but, more importantly, with manipulating particular contents or attributes of a facial image, such as enhancing the lightness, smoothness, or color of facial skin. Its goal is to produce images that agree with human visual perception habits and have appealing visual effects, thereby extending the capabilities of traditional photography and image processing systems. It has broad application prospects both in daily life, such as photography and digital entertainment, and in professional fields, such as advertisement design and film production.
     At present, beautifying and rendering facial images with existing tools generally requires tedious manual operations, offers limited convenience and efficiency, and is constrained by the operator's visual perception and professional skills. Moreover, because tasks and goals vary, beautification and rendering involve constructing and using many types of models, yet no mature theory currently describes and analyzes the relationships and properties of these models in a unified way. In addition, facial images exhibit highly complex appearance variations caused by illumination, pose, background, and other factors. To address these issues, this thesis applies state-of-the-art theories and methods from computer vision and image processing, together with analytical tools from statistical learning, partial differential equations, variational theory, and numerical analysis, and draws on related findings from cognitive psychology, social psychology, and art. It analyzes the edge-preserving smoothing and edit propagation models relevant to facial image beautification and rendering and, by introducing the adaptive idea from control theory, constructs an adaptive edge-preserving energy minimization model that automatically adjusts its own properties according to the characteristics of the image and the task, thereby improving the efficiency, accuracy, error tolerance, and stability of the technical framework. Building on this model as its theoretical and technical basis, the thesis systematically studies facial skin beautification, illumination transfer, and ink-painting-style rendering within facial image beautification, photorealistic rendering, and non-photorealistic rendering, addressing the problems above with good results. This work lays a solid foundation for deeper research on image understanding and for more complex image editing and rendering, and provides technical support for building more intelligent image post-processing systems and tools. The main contributions of the thesis are as follows:
     First, the thesis proposes an adaptive edge-preserving energy minimization model, which provides both theoretical support for adaptive edge-preserving smoothing and edit propagation and a technical basis for building adaptive facial beautification and rendering systems. By reviewing representative image processing models related to this research and applying state-of-the-art theories from computer vision and image processing together with nonparametric point estimation, the calculus of variations, and nonlinear filtering, the thesis examines the structure, input features, output properties, and parameter settings of these models in depth. To integrate their capabilities, it proposes a general edge-preserving energy minimization model (the "general edge-preserving model"), which subsumes the related models, meets the basic requirements of facial image beautification and rendering, and provides a theoretical basis for deriving further models with edge-preserving smoothing and edit propagation capabilities. On this basis, by constructing adaptive data-term weights, model parameters, and guided feature spaces, the thesis builds an adaptive edge-preserving energy minimization model (the "adaptive edge-preserving model"). This model not only has better stability, error tolerance, extensibility, and flexibility than the general edge-preserving model, but is also easy to operate and adaptive, producing more effective edge-preserving smoothing and edit propagation. Experimental results demonstrate the effectiveness and practicality of the adaptive edge-preserving model's technical framework.
     Second, for facial skin beautification, the thesis proposes a new image editing tool called the "region-aware mask" within the framework of the adaptive edge-preserving model. The region-aware mask automatically selects skin regions and sets non-uniform local editing strengths, accurately fits complex region boundaries, and produces natural region transitions. On this basis, an adaptive facial skin beautification framework is constructed; for layer enhancement, a data- and knowledge-driven parameter optimization method is proposed that automatically sets the combination of beautification parameters according to the average face and psychological priors. By coordinating and integrating the beautification parameters with the lightness, smoothness, and color masks, the framework automatically beautifies all three key skin attributes of lightness, smoothness, and color within a single unified framework, greatly improving the effectiveness and applicability of facial skin beautification. Experiments show that the proposed model and framework can handle facial images with varying illumination, expression, gender, background, age, pose, and ethnicity, and achieve skin beautification results comparable to or better than commercial systems such as PicTreat, Portrait+, and Portraiture.
     Third, for facial illumination transfer, the thesis constructs an adaptive edge-preserving smoothing model with spatially non-uniform model parameters within the framework of the adaptive edge-preserving model, realizing lighting template generation and achromatic illumination transfer within non-uniform facial regions. To further handle illumination transfer with complex backgrounds, the thesis constructs an edit propagation model with adaptive propagation parameters that smoothly and naturally diffuses the lighting information inside the face into the background region while simultaneously transferring brightness, shadow, and color information, thereby generating achromatic and chromatic lighting templates and the corresponding illumination transfer effects for complex backgrounds. This new image editing tool generates lighting templates and performs illumination transfer directly from facial images, requires no special equipment, is convenient to use, supports chromatic illumination transfer over complex backgrounds, and extends the applicability of image-based photorealistic face relighting. Using Retinex theory and quotient image theory, together with mathematical derivation, the thesis demonstrates the feasibility of generating a lighting template from a single reference facial image and proposes an illumination transfer framework based on adaptive lighting templates derived from a single reference image. Experiments show that adaptive lighting templates achieve good illumination transfer on real facial images with different appearance characteristics, grayscale facial images, non-photorealistic facial images, and hand-drawn images, effectively improving and extending illumination transfer in both rendering quality and applicability.
     Fourth, for ink-painting-style rendering of facial images, the thesis focuses on simulating ink diffusion and generating different ink painting styles. It proposes a new image-based ink diffusion method that, by setting the model features, model parameters, and guided feature space, adaptively produces ink rendering effects with different abstraction levels, diffusion scopes, and diffusion patterns. For style generation, a new ink-painting rendering framework combines the image abstraction level, the ink diffusion pattern, and the color and texture of the rice-paper background, producing ink-painting renderings with different rice-paper textures, abstraction levels, diffusion patterns, and background styles, as well as distinctive non-photorealistic facial renderings. Experiments show that the method produces good ink-painting effects on various objects and faces, and is distinctive compared with other non-photorealistic rendering methods.
Facial image beautification and rendering are two rapidly developing computational photography techniques, which involve manipulating the attributes or content of an image (such as enhancing facial skin lighting, smoothness, and color), whereas classic image processing techniques aim to enhance image quality. Using image-based manipulation techniques, a novel image is synthesized from samples captured in the real world rather than by recreating the entire physical world, which can enhance or extend the capabilities of digital photography. The development of facial image beautification and rendering has led to many useful applications in daily life (such as post-production of photography and entertainment) and in industry (such as advertisement and movie production). However, existing methods for facial beautification and rendering may require tedious and time-consuming hand-crafted operations. Furthermore, good visual effects are hard to produce by hand-crafted manipulation due to the limitations of human visual perception and skill. It is therefore appealing to construct an automatic system for facial image beautification and rendering.
     It is challenging to build an automatic system for facial image beautification and rendering. Variations in facial images are caused by many factors, such as illumination, viewpoint, and background. Facial image beautification and rendering involve assorted mathematical models, but there is no mature unified framework to analyze the related models effectively. To produce images in a natural manner, the principles of human visual perception must also be taken into consideration when constructing such a system. This thesis develops an adaptive edge-preserving energy minimization model that can automatically adjust its properties according to the input images and the manipulation tasks. Using this model, we can analyze and construct novel edge-preserving smoothing and edit propagation models under a unified framework and develop an automatic image manipulation system with great reliability, accuracy, error tolerance, and stability. Based on the adaptive edge-preserving energy minimization model, we explore the specific problems of facial skin beautification, face relighting, and ink-painting rendering. The contributions of the thesis are as follows:
     First, we develop a general adaptive edge-preserving energy minimization framework to improve the performance of edge-preserving smoothing and edit propagation methods and to achieve adaptive facial image beautification and rendering. A general edge-preserving energy minimization (GEEM) model is presented to reveal the connections and properties of bilateral filtering, anisotropic diffusion, and the weighted least squares filter using nonparametric point estimation and the calculus of variations. To overcome the shortcomings of the GEEM model, an adaptive edge-preserving energy minimization (AEEM) model is proposed, which has an adaptive fidelity term, adaptive model parameters, and a high-dimensional guided feature space. The AEEM model can derive novel models with better edge-preserving smoothing or edit propagation effects, which further improve the performance of the specific automatic systems for facial skin beautification, face relighting, and ink-painting rendering.
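The abstract does not spell out the energy functional, but a weighted least squares filter of the kind GEEM subsumes minimizes a fidelity term plus an edge-weighted smoothness term. The following is a minimal 1-D sketch of that family, not the thesis's AEEM model: the function name `edge_preserving_smooth`, the gradient-based weight formula, and all parameter values are illustrative assumptions.

```python
import numpy as np

def edge_preserving_smooth(g, lam=1.0, alpha=2.0, eps=1e-4):
    """Minimize  sum_p (u_p - g_p)^2 + lam * sum_p w_p (u_p - u_{p+1})^2
    for a 1-D signal g.  The weight w_p shrinks across strong gradients,
    so smoothing is weak at edges and strong elsewhere (a WLS-style
    edge-preserving smoother).  Solves the normal equations
    (I + lam * D^T W D) u = g, with D the forward-difference operator."""
    n = len(g)
    grad = np.abs(np.diff(g))
    w = 1.0 / (grad**alpha + eps)          # small weight across edges
    A = np.eye(n)                          # fidelity term -> identity
    for p in range(n - 1):                 # add lam * D^T W D
        A[p, p]     += lam * w[p]
        A[p+1, p+1] += lam * w[p]
        A[p, p+1]   -= lam * w[p]
        A[p+1, p]   -= lam * w[p]
    return np.linalg.solve(A, g)

# A noisy step signal: the solver flattens the noise but keeps the jump.
rng = np.random.default_rng(0)
g = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
u = edge_preserving_smooth(g)
```

A dense solve is used only because the example is tiny; for real images the same system is sparse and would be solved with a sparse or multigrid solver.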
     Second, we propose a novel image editing tool called the adaptive region-aware mask and construct a unified framework for facial skin beautification that can enhance skin lighting, smoothness, and color automatically. A region-aware mask is generated by AEEM, integrating facial structure and appearance features, adaptive model parameters, and a guided feature space constructed from lighting and color features. Using a region-aware mask, we can automatically select the skin regions to edit and perform inhomogeneous local adjustments with great precision, especially for regions with complex boundaries. The proposed skin beautification framework contains three major steps: image layer decomposition, region-aware mask generation, and image layer manipulation. Under this framework, a user can perform facial beautification simply by adjusting the skin parameters. Furthermore, the combinations of parameters can be optimized automatically, based on the average face assumption and related psychological knowledge. We performed both qualitative and quantitative evaluations of our method using faces with different genders, races, ages, poses, and backgrounds from various databases. The experimental results demonstrate that our technique is superior to previous methods and comparable to commercial systems such as PicTreat, Portrait+, and Portraiture.
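As a toy illustration of the decompose-mask-manipulate idea (not the thesis's actual pipeline), the sketch below splits an image into base and detail layers, then attenuates detail (blemishes) and lifts lightness only inside a softened mask, so the edit fades out naturally at the region border. The names `box_blur` and `beautify_skin`, the use of a box blur in place of the AEEM-based smoother and mask, and all gain values are hypothetical assumptions.

```python
import numpy as np

def box_blur(img, r=2):
    """Separable box blur; a crude stand-in for an edge-preserving smoother."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def beautify_skin(img, mask, smooth_gain=0.3, light_gain=0.1):
    """Layer-based beautification: where the soft mask is 1, the detail
    layer is scaled down (skin smoothing) and the lightness is raised;
    where the mask is 0 the pixel is reconstructed unchanged."""
    base = box_blur(img)
    detail = img - base
    out = base + detail * (1.0 - mask * (1.0 - smooth_gain)) + mask * light_gain
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(1)
img = 0.5 + 0.08 * rng.standard_normal((32, 32))   # "skin" with blemish noise
mask = np.zeros((32, 32))
mask[8:24, 8:24] = 1.0                             # hard region selection...
mask = box_blur(mask, r=3)                         # ...softened into a region-aware mask
out = beautify_skin(img, mask)
```

Because base + detail reconstructs the input exactly, pixels outside the mask are untouched; only the masked region is smoothed and brightened.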
     Third, we present a novel automatic lighting template generation method to relight faces with complex backgrounds. Based on the principles of Retinex theory and the quotient image, a face relighting framework using a single reference image is presented, in which the lighting template is the key component. Face relighting within the skin region is performed using a lighting template generated by an adaptive edge-preserving smoothing model derived from AEEM with an adaptive smoothness parameter. To address relighting with a complex background, the lighting within the skin region is diffused into the background in a smooth manner using an edit propagation model derived from AEEM with an adaptive propagation parameter.
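The Retinex/quotient-image idea behind lighting templates can be sketched very roughly: treat the large-scale layer of each image as its shading, take the ratio of the reference's shading to the target's as the template, and multiply. This is a minimal sketch under those assumptions, not the thesis's AEEM-based method; `large_scale_layer` and `relight` are hypothetical names, and a box blur stands in for the adaptive edge-preserving smoother.

```python
import numpy as np

def large_scale_layer(img, r=3):
    """Crude large-scale (shading) layer via box blur; the thesis uses
    an adaptive edge-preserving smoothing model instead."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def relight(target, reference, eps=1e-3):
    """Quotient-image style transfer: the lighting template is the ratio
    of the reference's large-scale layer to the target's; multiplying by
    it moves the target toward the reference's illumination while keeping
    the target's own details (Retinex: image = reflectance * shading)."""
    template = large_scale_layer(reference) / (large_scale_layer(target) + eps)
    return np.clip(target * template, 0.0, 1.0)

# Target lit uniformly; reference lit from the left (bright-to-dark ramp).
h, w = 32, 32
target = np.full((h, w), 0.6)
ramp = np.linspace(1.0, 0.3, w)[None, :]
reference = 0.6 * np.repeat(ramp, h, axis=0)
out = relight(target, reference)
```

After the transfer, the uniformly lit target inherits the reference's left-to-right lighting gradient.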
     Fourth, we propose an image-based ink-painting rendering framework with a novel ink diffusion simulation method that can mimic diverse ink painting styles. We construct a specific edit propagation model derived from AEEM with edge detectors and a guided feature space to simulate ink diffusion. Different ink diffusion effects with different levels of abstraction, diffusion scopes, and diffusion patterns are obtained by adjusting the model features, parameters, and guided features. The proposed ink-painting rendering framework, which consists of line feature extraction, adaptive ink diffusion, and absorbent paper background simulation, can generate distinctive ink painting styles through different combinations of image abstraction, ink diffusion patterns, and absorbent paper backgrounds.
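The flavor of ink diffusion on absorbent paper can be conveyed with a toy isotropic diffusion, quite unlike the thesis's edit-propagation formulation: each step mixes a pixel's ink with its 4-neighbour average while the paper absorbs a little, so strokes bleed into soft halos. The function `ink_diffuse` and the step, rate, and absorption constants are illustrative assumptions.

```python
import numpy as np

def ink_diffuse(strokes, steps=20, rate=0.25, absorb=0.97):
    """Toy ink diffusion: each step blends a fraction `rate` of the ink
    toward the 4-neighbour average (isotropic diffusion on a periodic
    grid via np.roll), while `absorb` models ink soaked up by the paper."""
    ink = strokes.astype(float).copy()
    for _ in range(steps):
        up    = np.roll(ink,  1, axis=0)
        down  = np.roll(ink, -1, axis=0)
        left  = np.roll(ink,  1, axis=1)
        right = np.roll(ink, -1, axis=1)
        ink = absorb * ((1 - rate) * ink + rate * 0.25 * (up + down + left + right))
    return ink

strokes = np.zeros((33, 33))
strokes[16, 8:25] = 1.0          # a single horizontal brush stroke
halo = ink_diffuse(strokes)
```

In the thesis's framework the diffusion scope and pattern are steered by edge detectors and the guided feature space rather than being isotropic, which is what lets it reproduce different ink painting styles.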
