Research and Application of Adversarial Sample Generation in Facial Recognition
  • Authors: Zhang Jiasheng (张加胜); Liu Jianming (刘建明); Han Lei (韩磊); Ji Fei (纪飞); Liu Huang (刘煌)
  • Affiliation: School of Computer Science and Information Security, Guilin University of Electronic Technology
  • Keywords: Deep learning; Black-box attack; Vulnerability; Generative adversarial network (GAN); Eyeglass patches
  • Journal: Computer Applications and Software (计算机应用与软件)
  • Journal code: JYRJ
  • Publication date: 2019-05-12
  • Year: 2019
  • Volume: 36
  • Issue: 05
  • Pages: 164-170 (7 pages)
  • CN: 31-1260/TP
  • Record ID: JYRJ201905028
  • Language: Chinese
Abstract
With the wide deployment of deep learning (DL) models in security-sensitive tasks such as facial recognition and autonomous driving, attacks on and defenses of these models have become a research hotspot in the machine learning and security communities. The black-box attack, a typical attack type, remains effective even without knowledge of a model's structure, parameters, or training data, and is therefore the most practical attack method in real-world settings. As society becomes increasingly dependent on facial recognition technology, the security threats arising from the vulnerability of neural networks deployed in high-security settings are easily overlooked. This paper analyzes the vulnerability of deep learning models and uses a generative adversarial network (GAN) to design a novel bright eyeglass-patch sample that successfully deceives a facial recognition system based on a convolutional neural network. Experimental results show that the adversarial eyeglass patches generated by the GAN successfully attack the facial recognition system and outperform traditional optimization methods.
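The abstract describes the attack pipeline only at a high level. The sketch below illustrates the general technique under stated assumptions: a generator network produces an eyeglass-shaped patch, the patch is pasted onto face images through a binary mask, and the generator is optimized so that a recognizer misclassifies the patched face while a discriminator keeps the patch looking like plausible glasses. All module names (PatchGenerator, victim, disc), the patch geometry, and the loss weighting are illustrative assumptions, not the authors' implementation; in the black-box setting the abstract describes, gradients would be taken through a substitute recognizer and the patch relied on to transfer to the deployed system.

    # Minimal PyTorch sketch of GAN-driven adversarial eyeglass patches.
    # Shapes, offsets and loss weights are assumptions for illustration.
    import torch
    import torch.nn as nn

    class PatchGenerator(nn.Module):
        """Maps a latent vector to an RGB patch covering the glasses region."""
        def __init__(self, latent_dim=100, patch_h=64, patch_w=176):
            super().__init__()
            self.shape = (3, patch_h, patch_w)
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 512),
                nn.ReLU(),
                nn.Linear(512, 3 * patch_h * patch_w),
                nn.Tanh(),                      # keep pixel values in [-1, 1]
            )

        def forward(self, z):
            return self.net(z).view(-1, *self.shape)

    def apply_patch(face, patch, mask):
        """Paste the patch onto the face inside a binary eyeglass mask."""
        canvas = torch.zeros_like(face)         # face: (B, 3, 224, 224)
        canvas[:, :, 60:124, 24:200] = patch    # fixed eye-region placement
        return face * (1 - mask) + canvas * mask

    def attack_step(gen, disc, victim, faces, mask, target_id, opt_g):
        """One generator update: fool `victim` while `disc` enforces realism."""
        z = torch.randn(faces.size(0), 100)
        patch = gen(z)
        adv = apply_patch(faces, patch, mask)
        # Impersonation loss: push the recognizer toward the target identity.
        labels = torch.full((faces.size(0),), target_id, dtype=torch.long)
        cls_loss = nn.functional.cross_entropy(victim(adv), labels)
        # Realism loss: discriminator should score the patch as real glasses.
        gan_loss = -disc(patch).mean()
        loss = cls_loss + 0.1 * gan_loss        # weighting is an assumption
        opt_g.zero_grad()
        loss.backward()
        opt_g.step()
        return loss.item()

    # Hypothetical usage with stand-in models (not the paper's networks):
    victim = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 10))
    disc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 176, 1))
    gen = PatchGenerator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    faces = torch.rand(4, 3, 224, 224) * 2 - 1
    mask = torch.zeros(1, 1, 224, 224)
    mask[:, :, 60:124, 24:200] = 1.0            # crude eyeglass-region mask
    print(attack_step(gen, disc, victim, faces, mask, target_id=3, opt_g=opt_g))

Confining the perturbation to the masked eyeglass region is what makes this class of attack physically realizable: the patch can be printed onto actual glasses frames, unlike pixel-level perturbations spread over the whole image.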