Transfer learning with deep sparse auto-encoder for speech emotion recognition
  • Title (Chinese): 一种基于深度稀疏自编码的语音情感迁移学习方法
  • Authors: Liang Zhenlin; Liang Ruiyu; Tang Manting; Xie Yue; Zhao Li; Wang Shijia
  • Affiliations: School of Information Science and Engineering, Southeast University; School of Communication Engineering, Nanjing Institute of Technology; School of Computer Engineering, Jinling Institute of Technology
  • Keywords: sparse auto-encoder; transfer learning; speech emotion recognition
  • Journal: Journal of Southeast University (English Edition) (东南大学学报(英文版))
  • Journal code: DNDY
  • Publication date: 2019-06-15
  • Year: 2019
  • Volume: v.35
  • Issue: 02
  • Pages: 17-24 (8 pages)
  • Article ID: DNDY201902003
  • CN: 32-1325/N
  • Funding: The National Natural Science Foundation of China (No. 61871213, 61673108, 61571106); Six Talent Peaks Project in Jiangsu Province (No. 2016-DZXX-023)
  • Language: English
Abstract
        In order to improve the efficiency of speech emotion recognition across corpora, a speech emotion transfer learning method based on a deep sparse auto-encoder is proposed. The algorithm first trains the deep sparse auto-encoder to reconstruct a small amount of data from the target domain, so that the encoder learns a low-dimensional structural representation of the target-domain data. Then, both the source-domain and the target-domain data are passed through the trained deep sparse auto-encoder, yielding reconstructions that lie close to the low-dimensional structural representation of the target domain. Finally, a portion of the reconstructed labeled target-domain data is mixed with the reconstructed source-domain data to jointly train the classifier; this portion of the target-domain data serves to guide the source-domain data. Experiments on the CASIA and SoutheastLab corpora show that, after transferring only a small amount of data, the recognition rate of the DNN model reaches 89.2% and 72.4%, respectively. Compared with training on the complete original corpora, the rate decreases by only 2% on CASIA and only 3.4% on SoutheastLab. The experiments show that, in the extreme case where only a small portion of the data set is labeled, the algorithm approaches the performance obtained when all data are labeled.
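The three-step procedure in the abstract (train a sparse auto-encoder on a small target-domain sample, reconstruct both domains through it, then train a classifier on the mixed reconstructions) can be sketched as follows. This is a minimal NumPy illustration on synthetic data, not the paper's implementation: the network sizes, the L1 sparsity penalty (in place of the usual KL-divergence penalty), the synthetic "cross-corpus" data, and the logistic-regression classifier (standing in for the paper's DNN) are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SparseAutoEncoder:
    """One-hidden-layer auto-encoder with an L1 penalty on the hidden
    activations (a simplification of the usual KL sparsity penalty)."""
    def __init__(self, n_in, n_hidden, lam=1e-3, lr=0.2):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)
        self.lam, self.lr = lam, lr

    def encode(self, X):
        return sigmoid(X @ self.W1 + self.b1)

    def reconstruct(self, X):
        return self.encode(X) @ self.W2 + self.b2

    def fit(self, X, epochs=1000):
        n = len(X)
        for _ in range(epochs):
            H = self.encode(X)
            err = H @ self.W2 + self.b2 - X            # reconstruction error
            dH = err @ self.W2.T + self.lam * np.sign(H)
            dZ = dH * H * (1.0 - H)                    # sigmoid derivative
            self.W2 -= self.lr * H.T @ err / n
            self.b2 -= self.lr * err.mean(axis=0)
            self.W1 -= self.lr * X.T @ dZ / n
            self.b1 -= self.lr * dZ.mean(axis=0)
        return self

def train_logreg(X, y, epochs=500, lr=0.5):
    """Plain logistic regression stand-in for the paper's DNN classifier."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        g = sigmoid(X @ w + b) - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Synthetic "cross-corpus" data: the source domain shares the target
# domain's class structure, but its feature statistics are shifted.
def make_domain(n, shift):
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 0.3, (n, 4)) + y[:, None] + shift
    return X, y

X_src, y_src = make_domain(400, shift=0.8)   # large labeled source corpus
X_tgt, y_tgt = make_domain(40, shift=0.0)    # small labeled target sample

ae = SparseAutoEncoder(n_in=4, n_hidden=3).fit(X_tgt)        # step 1
R_src, R_tgt = ae.reconstruct(X_src), ae.reconstruct(X_tgt)  # step 2
w, b = train_logreg(np.vstack([R_src, R_tgt]),               # step 3
                    np.concatenate([y_src, y_tgt]))

X_test, y_test = make_domain(200, shift=0.0)
acc = ((sigmoid(ae.reconstruct(X_test) @ w + b) > 0.5) == y_test).mean()
print(f"target-domain accuracy: {acc:.2f}")
```

Because the auto-encoder is fitted only to target-domain samples, pushing the source corpus through it pulls the source reconstructions toward the target domain's low-dimensional structure, which is what lets the abundant source labels help the scarce target labels.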
References
[1] Schuller B,Vlasenko B,Eyben F,et al.Cross-corpus acoustic emotion recognition:Variances and strategies[J].IEEE Transactions on Affective Computing,2010,1(2):119-131.DOI:10.1109/t-affc.2010.8.
    [2] Lim H,Kim M J,Kim H.Cross-acoustic transfer learning for sound event classification[C]// IEEE International Conference on Acoustics,Speech and Signal Processing (ICASSP).Shanghai,China,2016:16021470.
    [3] Torrey L,Shavlik J.Transfer learning[M]//Handbook of Research on Machine Learning Applications and Trends:Algorithms,Methods,and Techniques.IGI Global,2010:242-264.DOI:10.4018/978-1-60566-766-9.ch011.
    [4] Deng J,Zhang Z X,Marchi E,et al.Sparse autoencoder-based feature transfer learning for speech emotion recognition[C]// 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction.Geneva,Switzerland,2013:511-516.
    [5] Latif S,Rana R,Younis S,et al.Cross corpus speech emotion classification—An effective transfer learning technique[EB/OL].(2018-01-22)[2018-11-20].https://www.researchgate.net/publication/322634480_Cross_Corpus_Speech_Emotion_Classification_-_An_Effective_Transfer_Learning_Technique.
    [6] Zong Y,Zheng W M,Zhang T,et al.Cross-corpus speech emotion recognition based on domain-adaptive least-squares regression[J].IEEE Signal Processing Letters,2016,23(5):585-589.DOI:10.1109/lsp.2016.2537926.
    [7] Song P,Zheng W M.Feature selection based transfer subspace learning for speech emotion recognition[J].IEEE Transactions on Affective Computing,2018:1.DOI:10.1109/taffc.2018.2800046.
    [8] Xu J,Xiang L,Liu Q S,et al.Stacked sparse autoencoder (SSAE) for nuclei detection on breast cancer histopathology images[J].IEEE Transactions on Medical Imaging,2016,35(1):119-130.DOI:10.1109/tmi.2015.2458702.
    [9] Sarath C A P,Lauly S,Larochelle H,et al.An autoencoder approach to learning bilingual word representations[C]//International Conference on Neural Information Processing Systems.Kuching,Malaysia,2014:1853-1861.
    [10] Goodfellow I J,Le Q V,Saxe A M,et al.Measuring invariances in deep networks[C]// International Conference on Neural Information Processing Systems.Bangkok,Thailand,2009:646-654.
    [11] Mairal J,Bach F,Ponce J,et al.Online learning for matrix factorization and sparse coding[J].Journal of Machine Learning Research,2010,11:19-60.
    [12] Hinton G E,Salakhutdinov R R.Reducing the dimensionality of data with neural networks[J].Science,2006,313(5786):504-507.DOI:10.1126/science.1127647.
    [13] Pan S F,Tao J H,Li Y.The CASIA audio emotion recognition method for audio/visual emotion challenge 2011[C]// Proceedings of the Fourth International Conference on Affective Computing and Intelligent Interaction.Memphis,TN,USA,2011:388-395.
    [14] Eyben F,Wöllmer M,Schuller B.openSMILE—The Munich versatile and fast open-source audio feature extractor[C]//ACM International Conference on Multimedia.Florence,Italy,2010:1459-1462.
    [15] Larochelle H,Bengio Y,Louradour J,et al.Exploring strategies for training deep neural networks[J].Journal of Machine Learning Research,2009,10:1-40.
    [16] Bengio Y,Lamblin P,Popovici D,et al.Greedy layer-wise training of deep networks[J].Advances in Neural Information Processing Systems,2007,19:153-160.
    [17] Hinton G E.Deep belief networks[J].Scholarpedia,2009,4(5):5947.DOI:10.4249/scholarpedia.5947.
    [18] Xu B,Wang N,Chen T,et al.Empirical evaluation of rectified activations in convolutional network[EB/OL].(2015-11-27)[2018-11-20].http://de.arxiv.org/pdf/1505.00853.
