Abstract
As neural networks are deployed ever more widely, their security has drawn increasing attention, and the emergence of adversarial examples has made attacking a neural network remarkably easy. Taking the security of neural networks as its starting point, this paper introduces the concept of adversarial examples and the reasons they arise, surveys the common algorithms for generating adversarial examples and the common methods for defending against them, and finally argues that the next step in securing neural networks should shift from testing to formal safety verification.
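To make the idea of "generating adversarial examples" concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), the canonical generation algorithm, applied to a toy logistic-regression model. All names and values here (`w`, `x`, `epsilon`) are illustrative assumptions, not taken from the paper; a real attack would compute the gradient through a full network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, epsilon):
    """FGSM sketch: perturb x along the sign of the loss gradient.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input x is (p - y) * w, where p is the
    predicted probability of the positive class.
    """
    p = sigmoid(w @ x)           # model's predicted probability of class 1
    grad_x = (p - y) * w         # d(loss)/dx for this toy model
    return x + epsilon * np.sign(grad_x)

# Illustrative weights and a clean input with true label y = 1.
w = np.array([2.0, -1.0, 0.5])
x = np.array([0.3, 0.8, -0.2])
x_adv = fgsm(x, y=1.0, w=w, epsilon=0.1)

p_clean = sigmoid(w @ x)
p_adv = sigmoid(w @ x_adv)
# A small, bounded perturbation (at most epsilon per component) lowers the
# model's confidence in the true class.
```

The key point the abstract alludes to: the perturbation is tiny (bounded by `epsilon` in each coordinate) yet systematically chosen to increase the loss, which is why such inputs look unchanged to a human while misleading the model.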