Cross-media Deep Fine-grained Correlation Learning
  • English title: Cross-media Deep Fine-grained Correlation Learning
  • Authors: ZHUO Yun-Kan; QI Jin-Wei; PENG Yu-Xin
  • Affiliation: Institute of Computer Science and Technology, Peking University
  • Keywords: cross-media retrieval; quintuple-media; fine-grained information mining; cross-media recurrent neural network; cross-media joint correlation constraint
  • Journal: Journal of Software (Ruan Jian Xue Bao), code RJXB
  • Publication date: 2019-04-15
  • Year: 2019
  • Volume: 30
  • Issue: 04
  • Funding: National Natural Science Foundation of China (61771025, 61532005)
  • Language: Chinese
  • Record ID: RJXB201904003
  • Pages: 24-35 (12 pages)
  • CN: 11-2560/TP
Abstract
With the rapid development of the Internet and multimedia technology, data on the Web has expanded from text alone to images, video, text, audio, 3D models, and other media types, making cross-media retrieval a new trend in information retrieval. However, the "heterogeneity gap" leads to inconsistent representations across media types, so the similarity between data of any two media types is hard to measure directly, which makes retrieval across multiple media types highly challenging. With the rise of deep learning, the nonlinear modeling ability of deep neural networks offers a way to break the representation barriers between media types, but most existing deep-learning-based cross-media retrieval methods consider only the pairwise correlation between two media types, typically image and text, and are difficult to extend to more media types. To address this problem, a cross-media Deep Fine-grained Correlation Learning (DFCL) approach is proposed, which supports cross-media retrieval over up to five media types (image, video, text, audio, and 3D model). First, a cross-media recurrent neural network is proposed to jointly model the fine-grained information of up to five media types, fully exploiting the internal details and contextual information within each media type. Second, a cross-media joint correlation loss is proposed, which combines distribution alignment and semantic alignment to mine fine-grained intra-media and inter-media correlation more accurately, while semantic category information further strengthens the semantic discrimination capability of the correlation learning process and improves retrieval accuracy. Experiments against existing methods on two cross-media datasets containing five media types, PKU XMedia and PKU XMediaNet, verify the effectiveness of the proposed approach.
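The joint correlation loss described in the abstract combines distribution alignment across media types with semantic (category) alignment. The following is only a minimal numpy sketch of that idea, not the paper's implementation: a linear-kernel MMD stands in for distribution alignment (the kernel two-sample test is a standard choice for this), and softmax cross-entropy over shared category labels stands in for semantic alignment. The function names, the dict-keyed-by-media-type interface, and the weighting factor `alpha` are all assumptions made for illustration.

```python
import numpy as np

def linear_mmd(x, y):
    """Squared maximum mean discrepancy with a linear kernel:
    distance between the feature means of two sample sets."""
    return float(np.sum((x.mean(axis=0) - y.mean(axis=0)) ** 2))

def softmax_cross_entropy(logits, labels):
    """Mean cross-entropy of integer labels under softmax(logits),
    computed in a numerically stable way."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

def joint_correlation_loss(feats, logits, labels, alpha=1.0):
    """Hypothetical joint loss: pairwise MMD between the common-space
    features of every pair of media types (distribution alignment)
    plus per-media category cross-entropy (semantic alignment).
    feats/logits/labels are dicts keyed by media type, e.g. 'image'."""
    media = sorted(feats)
    dist = sum(linear_mmd(feats[a], feats[b])
               for i, a in enumerate(media) for b in media[i + 1:])
    sem = sum(softmax_cross_entropy(logits[m], labels[m]) for m in media)
    return dist + alpha * sem
```

For example, with five media types one would pass five entries in each dict; the distribution term then sums MMD over all ten media pairs, pulling the common-space distributions together, while the semantic term keeps samples of different categories separable.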
