Image Sentiment Analysis with Multimodal Discriminative Embedding Space
(基于多模态判别性嵌入空间的图像情感分析)
  • Authors: LÜ Guang-rui (吕光瑞); CAI Guo-yong (蔡国永); LIN Yu-ming (林煜明)
  • Affiliation: Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology
  • Keywords: sentiment analysis; latent correlation; linear discrimination; multimodal network; attention mechanism
  • Journal: Journal of Beijing University of Posts and Telecommunications (北京邮电大学学报)
  • Publication date: 2019-03-19 10:31
  • Year/Volume/Issue: 2019, Vol. 42, No. 01
  • Pages: 65-71 (7 pages)
  • Article ID: BJYD201901009
  • CN: 11-3570/TN
  • Language: Chinese
  • Funding: National Natural Science Foundation of China (61763007, 61562014); Guangxi Natural Science Foundation (2017JJD160017); Guangxi Key Laboratory of Trusted Software project (kx201503)
Abstract
To alleviate the affective gap and the large intra-class variance in image sentiment analysis, a weakly supervised method is proposed that simultaneously exploits the deep latent correlations between the visual and textual modalities, the deep linear discrimination of the visual modality, and the fusion of mid-level image semantics. A multimodal deep network architecture is used to find a latent embedding space in which the deep correlation between the visual and textual modalities is maximized while the visual modality remains deeply discriminative; in this latent space, the semantic features mapped from texts are transferred to the discriminative visual features mapped from images. An attention network over the mapped features in the latent space is then designed for sentiment classification. Experiments on real-world datasets show that the proposed method achieves better sentiment classification accuracy than state-of-the-art approaches.
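To make the architecture in the abstract concrete, below is a minimal PyTorch sketch of the idea, not the authors' implementation. The correlation term is a simplified per-dimension Pearson surrogate for deep canonical correlation analysis, and a plain cross-entropy term stands in for the deep linear discriminant criterion on the visual side; all module names, feature dimensions, and the equal loss weighting are illustrative assumptions.

```python
# Sketch: two branches project image and text features into a shared latent
# space, trained so that (a) paired visual/textual embeddings are highly
# correlated (simplified stand-in for deep CCA) and (b) the visual embeddings
# stay class-discriminative (cross-entropy surrogate for deep LDA).
# An attention layer then re-weights the latent features for classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 128   # assumed size of the shared embedding space
N_CLASSES = 2      # binary sentiment (positive / negative)

class Branch(nn.Module):
    """One modality branch: pre-extracted features -> latent space."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, LATENT_DIM))
    def forward(self, x):
        return self.net(x)

def correlation_loss(u, v, eps=1e-8):
    """Negative mean Pearson correlation, per latent dimension, between
    paired embeddings: a simplified surrogate for the DCCA objective."""
    u = u - u.mean(dim=0)
    v = v - v.mean(dim=0)
    num = (u * v).sum(dim=0)
    den = u.norm(dim=0) * v.norm(dim=0) + eps
    return -(num / den).mean()

class AttentionClassifier(nn.Module):
    """Scores each latent dimension, re-weights the embedding, classifies."""
    def __init__(self):
        super().__init__()
        self.score = nn.Linear(LATENT_DIM, LATENT_DIM)
        self.out = nn.Linear(LATENT_DIM, N_CLASSES)
    def forward(self, z):
        alpha = torch.softmax(self.score(z), dim=-1)  # attention weights
        return self.out(alpha * z)

# --- one illustrative training step on a fake batch ----------------------
visual = Branch(in_dim=4096)    # e.g. CNN fc-layer features (assumed)
textual = Branch(in_dim=300)    # e.g. averaged word vectors (assumed)
clf = AttentionClassifier()
opt = torch.optim.Adam(
    list(visual.parameters()) + list(textual.parameters())
    + list(clf.parameters()), lr=1e-4)

img_feat = torch.randn(32, 4096)
txt_feat = torch.randn(32, 300)
labels = torch.randint(0, N_CLASSES, (32,))

z_img, z_txt = visual(img_feat), textual(txt_feat)
logits = clf(z_img)
loss = (correlation_loss(z_img, z_txt)       # align the two modalities
        + F.cross_entropy(logits, labels))   # keep visual side discriminative
opt.zero_grad()
loss.backward()
opt.step()
```

Consistent with the abstract, only the image branch and the attention classifier are needed at test time: the textual semantics influence the visual projection solely through the correlation term during training.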