Abstract
Cross-media retrieval aims to automatically perform content-based search across different media types (e.g., image, video, and text), in which media representation plays an important role in providing a similarity measure over heterogeneous data. In this work, a novel semantic representation for cross-media data, called the accumulated reconstruction error vector (AREV), is proposed; it comprises category-specific dictionary learning, media sample reconstruction, and concatenation of the accumulated reconstruction errors. Instead of directly learning the correlations among heterogeneous items within the same semantic groups, AREV individually projects their original feature descriptions into a shared semantic space in which each component is semantically consistent across media types, owing to the shared category information. Experiments on two commonly used datasets, the Wikipedia dataset and the NUS-WIDE dataset, demonstrate good performance in terms of both effectiveness and efficiency.
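To make the pipeline concrete, the following is a minimal sketch of the AREV idea: learn one dictionary per semantic category, reconstruct a sample with each dictionary, and concatenate the per-category reconstruction errors into a shared semantic vector. The function names, the atom-selection shortcut, and the least-squares coding used here are illustrative assumptions, not the paper's actual learning procedure.

```python
import numpy as np

def learn_dictionaries(samples_by_category, n_atoms, seed=0):
    """Toy stand-in for category-specific dictionary learning:
    sample atoms from each category's own data (the paper's actual
    dictionary-learning step is not reproduced here)."""
    rng = np.random.default_rng(seed)
    dictionaries = {}
    for cat, X in samples_by_category.items():
        X = np.asarray(X, dtype=float)
        idx = rng.choice(len(X), size=min(n_atoms, len(X)), replace=False)
        dictionaries[cat] = X[idx].T  # columns are dictionary atoms
    return dictionaries

def arev(x, dictionaries):
    """Reconstruct x with each category dictionary (plain least-squares
    coding as a simplification of sparse coding) and concatenate the
    per-category reconstruction errors into one semantic vector."""
    errors = []
    for cat in sorted(dictionaries):
        D = dictionaries[cat]
        a, *_ = np.linalg.lstsq(D, x, rcond=None)
        errors.append(np.linalg.norm(x - D @ a))
    return np.array(errors)
```

Because every media type is reconstructed against the same category dictionaries built from its own feature space, each component of the resulting vector corresponds to the same category for all media types, which is what makes the representation comparable across heterogeneous features.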