Abstract
Deep learning is a recent advance in artificial intelligence, and its new computational demands are driving new computing architectures. This paper first identifies the essential requirements of deep learning by analyzing the stages and tasks of artificial intelligence, and then discusses deep-learning domain-specific architectures from three perspectives: the evaluation criteria of computational structures, the number-system basis of digital computation, and research directions for deep-learning computing architectures. The paper is the first to propose using the Kullback-Leibler (K-L) divergence to evaluate the complexity and accuracy of deep-learning structures. It argues that, on the basis of the Posit number system, not only deep-learning computing architectures but also scientific-computing architectures can be rebuilt, creating a late-mover advantage in computing-chip design. Finally, the paper concludes that deep-learning-driven domain-specific architectures will be an important component of innovation in computing architecture.
Deep learning (DL) is one of the most exciting advances in the field of artificial intelligence (AI); moreover, its new computational demands are driving new architecture research. This paper first points out the essential requirements of DL by analyzing the stages and tasks of AI development, then discusses DL domain-specific architectures (DSAs) from three perspectives: the evaluation criteria of computational structures, the number-system basis of digital computation, and potential research directions for DL DSAs. Furthermore, the Kullback-Leibler divergence is proposed as a criterion for the complexity and accuracy of DL computation architectures. In addition, Posit is adopted as a new number system to rebuild both DL computation and scientific computation and to establish a late-mover advantage in digital chip design. Finally, it is concluded that DL DSAs are one of the critical DSA research areas.
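The abstract proposes the K-L divergence as a criterion for the accuracy of a DL computation architecture. A minimal sketch of that idea, assuming the criterion is applied by comparing a full-precision classifier output distribution against the output of a reduced-precision datapath (here, coarse rounding of the logits stands in for posit/fixed-point quantization; this is an illustration, not the paper's implementation):

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q) in nats.

    Assumes p and q are discrete distributions over the same support,
    with q[i] > 0 wherever p[i] > 0.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def softmax(xs):
    """Numerically stable softmax."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Reference (full-precision) classifier output.
logits = [2.0, 1.0, 0.1]
p = softmax(logits)

# Simulate a low-precision datapath by rounding logits to the nearest 0.5
# (a stand-in for a quantized number format; illustrative only).
q = softmax([round(x * 2) / 2 for x in logits])

d = kl_divergence(p, q)
print(f"D(P||Q) = {d:.6f} nats")
```

A small D(P||Q) indicates the reduced-precision architecture closely reproduces the reference distribution; D(P||Q) = 0 only when the two distributions coincide, which makes the divergence a natural single-number accuracy criterion.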
References
1 Ma L W. Intel Corporation. Method and apparatus for a binary neural network mapping scheme utilizing a gate array architecture. PCT/CN2016/112721. https://patentscope2.wipo.int/search/en/detail.jsf?docId=WO2018119785
2 Gustafson J, Yonemoto I. Beating floating point at its own game: posit arithmetic. J Supercomput Front Innov, 2017, 4: 71-86
3 Lindstrom P, Lloyd S, Hittinger J. Universal coding of the reals: alternatives to IEEE floating point. In: Proceedings of the Conference for Next Generation Arithmetic. New York: ACM, 2018
4 Langroudi S H F, Pandit T, Kudithipudi D. Deep learning inference on embedded devices: fixed-point vs posit. 2018. ArXiv:1805.08624
5 Johnson J. Rethinking floating point for deep learning. 2018. ArXiv:1811.01721
1) https://en.wikipedia.org/wiki/Kullback-Leibler_divergence. 2018.