Efficient multi-modal hypergraph learning for social image classification with complex label correlations
Abstract
Multi-label and multi-modality are two distinctive characteristics of social images. Multiple labels capture the co-occurrence of objects in an image, while multi-modal features represent the image from different viewpoints; together they describe social images from two complementary aspects. However, integrating multi-modal features and multiple labels simultaneously for social image classification remains a considerable challenge. In this paper, we propose a hypergraph learning algorithm that seamlessly integrates multi-modal features and multi-label correlations. More specifically, we first propose a new feature fusion strategy that integrates multi-modal features into a unified hypergraph; an efficient multi-modal hypergraph (EMHG) is constructed to address the high computational complexity of the proposed fusion scheme. Secondly, we construct a label correlation hypergraph (LCHG) to model the complex associations among labels. Moreover, an adaptive learning algorithm is adopted to learn the label scores and hyperedge weights simultaneously from the combination of the two hypergraphs. Experiments conducted on real-world social image datasets demonstrate the superiority of the proposed method over representative transductive baselines.
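
The sketch below is not the authors' EMHG/LCHG formulation; it is a minimal illustration of the standard building blocks the abstract refers to: constructing a k-NN hypergraph per modality, stacking the per-modality incidence matrices into one unified hypergraph, and computing transductive label scores with a normalized hypergraph Laplacian (Zhou et al.-style). All function names (e.g., knn_hyperedges), the regularization parameter lam, and the synthetic data are illustrative assumptions; hyperedge weights are fixed to ones here rather than learned adaptively as in the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_hyperedges(features, k=10):
    """Incidence matrix H (n x n): hyperedge j groups vertex j
    with its k nearest neighbors in this modality's feature space."""
    n = features.shape[0]
    dist = cdist(features, features)
    H = np.zeros((n, n))
    for j in range(n):
        nbrs = np.argsort(dist[:, j])[:k + 1]   # k neighbors plus the centroid itself
        H[nbrs, j] = 1.0
    return H

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian:
    Delta = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}."""
    n, m = H.shape
    w = np.ones(m) if w is None else w          # fixed hyperedge weights in this sketch
    Dv = H @ w                                  # vertex degrees
    De = H.sum(axis=0)                          # hyperedge degrees
    Dv_is = np.diag(1.0 / np.sqrt(np.maximum(Dv, 1e-12)))
    Theta = Dv_is @ H @ np.diag(w) @ np.diag(1.0 / np.maximum(De, 1e-12)) @ H.T @ Dv_is
    return np.eye(n) - Theta

def transductive_scores(H, Y, lam=1.0):
    """Closed-form transductive label scores F = (I + lam * Delta)^{-1} Y,
    where Y holds the partially observed multi-label indicators."""
    Delta = hypergraph_laplacian(H)
    n = H.shape[0]
    return np.linalg.solve(np.eye(n) + lam * Delta, Y)

# Usage: concatenate per-modality incidence matrices into a unified hypergraph.
rng = np.random.default_rng(0)
visual = rng.normal(size=(200, 64))             # placeholder visual descriptors
textual = rng.normal(size=(200, 32))            # placeholder tag/text features
H_multi = np.hstack([knn_hyperedges(visual), knn_hyperedges(textual)])
Y = np.zeros((200, 5))
Y[:20] = rng.integers(0, 2, size=(20, 5))       # a few labeled images, 5 labels
F = transductive_scores(H_multi, Y, lam=1.0)
print(F.shape)                                  # (200, 5) predicted label scores
```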
