Speaker Identification Based on Multimodal Long Short-Term Memory with Depth-Gate
  • Original title (Chinese): 基于具有深度门的多模态长短期记忆网络的说话人识别
  • Authors: Chen Huangkang; Chen Ying
  • Affiliation: Key Laboratory of Advanced Process Control for Light Industry of the Ministry of Education, Jiangnan University
  • Keywords: image processing; speaker recognition; long short-term memory network; fusion; depth-gate; weight sharing
  • Journal: Laser & Optoelectronics Progress (激光与光电子学进展); journal code: JGDJ
  • Online publication date: 2018-09-07
  • Year: 2019; Issue: No. 03 (Vol. 56; cumulative No. 638)
  • Pages: 130-136 (7 pages)
  • Record number: JGDJ201903016
  • CN: 31-1690/TN
  • Funding: National Natural Science Foundation of China (61573168)
  • Language: Chinese
Abstract
To effectively fuse audio and visual features in speaker recognition, a multimodal long short-term memory (LSTM) network with a depth gate is proposed. First, a multi-layer LSTM model is built for each individual feature type, and a depth gate connects the memory cells of adjacent layers, strengthening the coupling between layers and improving the classification performance of each feature on its own. At the same time, the relations among the per-modality models are learned by sharing, across the models, the weights that connect the hidden-layer outputs to the gate units. Experimental results show that the proposed method effectively fuses audio and visual features, improves the accuracy of speaker recognition, and shows a degree of robustness to interference.
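The two mechanisms described in the abstract can be made concrete with a short sketch. The following Python (PyTorch) cell is a minimal illustration, not the paper's implementation: the class name DepthGatedLSTMCell, the feature widths (40-dimensional audio, 512-dimensional visual), the hidden size of 128, and the exact gate parameterization are all illustrative assumptions. The depth gate follows the general idea of depth-gated recurrent networks in [9], letting the lower layer's memory cell feed directly into the current layer's cell, while cross-modal weight sharing is sketched by letting the audio and visual cells reuse one hidden-to-gate weight matrix.

    import torch
    import torch.nn as nn

    class DepthGatedLSTMCell(nn.Module):
        """One layer of a depth-gated LSTM (sketch in the style of [9]).

        Besides the usual input/forget/output gates, a depth gate decides
        how much of the lower layer's memory cell flows directly into this
        layer's memory cell, tightening the link between adjacent layers.
        """

        def __init__(self, input_size, hidden_size, shared_hidden=None):
            super().__init__()
            # Input-to-gate weights (modality-specific).
            self.w_x = nn.Linear(input_size, 4 * hidden_size)
            # Hidden-to-gate weights; passing a shared module here ties these
            # weights across the per-modality models (illustrating the
            # weight sharing described in the abstract).
            self.w_h = (shared_hidden if shared_hidden is not None
                        else nn.Linear(hidden_size, 4 * hidden_size, bias=False))
            # Depth gate, driven by the layer input and the lower layer's cell.
            self.w_d = nn.Linear(input_size + hidden_size, hidden_size)

        def forward(self, x, h_prev, c_prev, c_lower):
            i, f, o, g = (self.w_x(x) + self.w_h(h_prev)).chunk(4, dim=-1)
            i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
            g = torch.tanh(g)
            # Depth gate: how much lower-layer memory to inject at this layer.
            d = torch.sigmoid(self.w_d(torch.cat([x, c_lower], dim=-1)))
            c = f * c_prev + i * g + d * c_lower  # extra depth-gated term
            h = o * torch.tanh(c)
            return h, c

    # Cross-modal weight sharing (illustrative): audio and visual cells at
    # the same depth reuse one hidden-to-gate matrix, coupling the streams.
    hidden = 128
    shared = nn.Linear(hidden, 4 * hidden, bias=False)
    audio_cell = DepthGatedLSTMCell(40, hidden, shared_hidden=shared)
    visual_cell = DepthGatedLSTMCell(512, hidden, shared_hidden=shared)

    batch = 2
    x_audio = torch.randn(batch, 40)        # one frame of audio features
    h0 = c0 = torch.zeros(batch, hidden)
    c_lower = torch.zeros(batch, hidden)    # the first layer has no lower cell
    h1, c1 = audio_cell(x_audio, h0, c0, c_lower)

Note that sharing the hidden-to-gate weights requires all per-modality models to use the same hidden width, which is why both cells above are built with a hidden size of 128 even though their input features differ in dimension.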
References
[1] Kanagasundaram A, Vogt R, Dean D, et al. I-vector based speaker recognition on short utterances[C]∥Proceedings of the 12th Annual Conference of the International Speech Communication Association (ISCA), 2011: 2341-2344.
[2] Matějka P, Glembek O, Castaldo F, et al. Full-covariance UBM and heavy-tailed PLDA in i-vector speaker verification[C]∥2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011: 4828-4831.
[3] Alam M R, Bennamoun M, Togneri R, et al. A confidence-based late fusion framework for audio-visual biometric identification[J]. Pattern Recognition Letters, 2015, 52: 65-71.
[4] Wu Z Y, Cai L H. Audio-visual bimodal speaker identification using dynamic Bayesian networks[J]. Journal of Computer Research and Development, 2006, 43(3): 470-475. (in Chinese)
[5] Hu Y T, Ren J S, Dai J W, et al. Deep multimodal speaker naming[C]∥Proceedings of the 23rd ACM International Conference on Multimedia (MM '15), 2015: 1107-1110.
[6] Geng J J, Liu X, Cheung Y M. Audio-visual speaker recognition via multi-modal correlated neural networks[C]∥2016 IEEE/WIC/ACM International Conference on Web Intelligence Workshops (WIW), 2016: 123-128.
[7] Wen M F, Hu C, Liu W R. Heterogeneous multimodal object recognition method based on deep learning[J]. Journal of Central South University (Science and Technology), 2016, 47(5): 1580-1587. (in Chinese)
[8] Ren J, Hu Y, Tai Y W, et al. Look, listen and learn: a multimodal LSTM for speaker identification[C]∥AAAI, 2016: 3581-3587.
[9] Yao K, Cohn T, Vylomova K, et al. Depth-gated recurrent neural networks[J]. arXiv:1508.03790, 2015.
[10] Hochreiter S, Schmidhuber J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[11] Mikolov T, Karafiát M, Burget L, et al. Recurrent neural network based language model[C]∥Proceedings of the 11th Annual Conference of the International Speech Communication Association (ISCA), 2010: 1045-1048.
[12] Sutskever I, Vinyals O, Le Q V. Sequence to sequence learning with neural networks[J]. arXiv:1409.3215v3, 2014.
[13] Kalchbrenner N, Danihelka I, Graves A. Grid long short-term memory[J]. arXiv:1507.01526, 2015.
[14] Hinton G E, Srivastava N, Krizhevsky A, et al. Improving neural networks by preventing co-adaptation of feature detectors[J]. Computer Science, 2012, 3(4): 212-223.
[15] Li Y J, Huang J J, Wang H Y, et al. Study of emotion recognition based on fusion of multi-modal bio-signals with SAE and LSTM recurrent neural network[J]. Journal on Communications, 2017, 38(12): 109-120. (in Chinese)
[16] Liu Y H, Liu X, Fan W T, et al. Efficient audio-visual speaker recognition via deep heterogeneous feature fusion[C]∥Chinese Conference on Biometric Recognition. Springer, Cham, 2017: 575-583.
[17] Azab M, Wang M Z, Smith M, et al. Speaker naming in movies[C]∥Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana, 2018: 2206-2216.
[18] Yang H X, Chen Y, Zhang F, et al. Face recognition based on improved gradient local binary pattern[J]. Laser & Optoelectronics Progress, 2018, 55(6): 061004. (in Chinese)
