A Research on Safety of Neural Networks Based on Adversarial Examples (基于对抗样本的神经网络安全性问题研究综述)
  • Title (English): A Research on Safety of Neural Networks Based on Adversarial Examples
  • Author: LI Zi-shan (李紫珊)
  • Affiliation: Software School of Tongji University (同济大学软件学院)
  • Keywords: safety; neural network; adversarial examples; attack and defense; safety verification
  • Journal: Computer Knowledge and Technology (电脑知识与技术); journal code: DNZS
  • Publication date: 2019-01-25
  • Year: 2019
  • Volume/Issue: Vol. 15, No. 03
  • Language: Chinese
  • Pages: 192-193 (2 pages)
  • CN: 34-1205/TP
  • Record ID: DNZS201903083
Abstract
With the increasingly wide application of neural networks, their security has attracted growing attention, and the emergence of adversarial examples has made attacking a neural network remarkably easy. Taking the security of neural networks as its starting point, this paper introduces the concept of adversarial examples and how they arise, summarizes the common algorithms for generating adversarial examples and the common methods for defending against them, and finally argues that the next step in ensuring the safety of neural networks should shift from testing to safety verification.
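The abstract points to common algorithms for generating adversarial examples; the fast gradient sign method (FGSM) from reference [3] is the canonical one. Below is a minimal PyTorch sketch of FGSM, not an implementation from the surveyed paper: the classifier `model`, the input batch `x`, the labels `y`, and the perturbation budget `epsilon=0.03` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return x + epsilon * sign(grad_x loss(model(x), y)), clipped to [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each input feature in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed input a valid image in [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()
```

Defenses discussed in this line of work build directly on such attacks: adversarial training feeds examples generated this way back into the training set, while detection approaches such as SafetyNet [5] try to reject perturbed inputs at inference time.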
References
[1] Szegedy C, et al. Intriguing properties of neural networks[J]. Computer Science, 2013.
[2] http://tech.ifeng.com/a/20171209/44797474_0.shtml[EB/OL].
[3] Goodfellow I J, et al. Explaining and harnessing adversarial examples[C]. ICLR, 2015: 1-10.
[4] Papernot N, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]. IEEE Symposium on Security and Privacy, 2016: 582-597.
[5] Lu J, et al. SafetyNet: Detecting and rejecting adversarial examples robustly[C]. IEEE International Conference on Computer Vision (ICCV), 2017: 446-454.
[6] Katz G, et al. Reluplex: An efficient SMT solver for verifying deep neural networks[C]. CAV, 2017: 97-117.
