Correlation Hashing Network for Efficient Cross-Modal Retrieval

by Yue Cao, Mingsheng Long, Jianmin Wang, Philip S. Yu

Released as an article.

2016  

Abstract

Hashing is widely applied to approximate nearest neighbor search for large-scale multimodal retrieval because of its storage and computation efficiency. Cross-modal hashing improves the quality of hash coding by exploiting semantic correlations across different modalities. Existing cross-modal hashing methods first transform data into low-dimensional feature vectors and then generate binary codes in a separate quantization step. This may yield suboptimal hash codes, since the quantization error is not explicitly minimized and the feature representation is not jointly optimized with the binary codes. This paper presents a Correlation Hashing Network (CHN) approach to cross-modal hashing, which jointly learns data representations tailored to hash coding and formally controls the quantization error. CHN is a hybrid deep architecture comprising a convolutional neural network for learning image representations, a multilayer perceptron for learning text representations, two hashing layers for generating compact binary codes, and a structured max-margin loss that ties these components together to learn similarity-preserving, high-quality hash codes. An extensive empirical study shows that CHN yields state-of-the-art cross-modal retrieval performance on standard benchmarks.
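The abstract's core idea (modality-specific networks feeding hashing layers, with the quantization error explicitly controlled) can be illustrated with a minimal NumPy sketch. Everything here is assumed for illustration, not taken from the paper: the feature dimensions, the random linear projections standing in for the trained CNN/MLP outputs, and the tanh-then-sign scheme for producing near-binary codes. The `quantization_error` function shows the quantity such methods penalize so that continuous codes stay close to +/-1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4096-d image features, 1000-d text features, 32-bit codes.
d_img, d_txt, n_bits = 4096, 1000, 32

# Hashing layers: one linear projection per modality followed by tanh squashing,
# so continuous codes lie in (-1, 1) and are already close to binary.
W_img = rng.standard_normal((d_img, n_bits)) / np.sqrt(d_img)
W_txt = rng.standard_normal((d_txt, n_bits)) / np.sqrt(d_txt)

def continuous_codes(features, W):
    """Near-binary relaxation produced by a hashing layer."""
    return np.tanh(features @ W)

def binary_codes(features, W):
    """Final hash codes: sign of the continuous relaxation."""
    return np.sign(continuous_codes(features, W))

def quantization_error(h):
    """Mean squared gap between continuous codes and their binarization."""
    return np.mean((h - np.sign(h)) ** 2)

# Stand-ins for learned representations (e.g., CNN and MLP activations).
img_feat = rng.standard_normal((8, d_img))
txt_feat = rng.standard_normal((8, d_txt))

h_img = continuous_codes(img_feat, W_img)
b_img = binary_codes(img_feat, W_img)
b_txt = binary_codes(txt_feat, W_txt)
print(b_img.shape, b_txt.shape, quantization_error(h_img))
```

In the paper's setting the projections are trained jointly with the representation networks under a similarity-preserving max-margin loss; this sketch only shows why penalizing the quantization error keeps the relaxation faithful to the binary codes used at retrieval time.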

Archived Files and Locations

application/pdf  1.4 MB
file_kutukdhuxjbpbpm7yo34w5f3v4
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2016-02-22
Version   v1
Language   en
arXiv  1602.06697v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 540aa4e0-9d70-4b9a-a246-bdbb9a6ab356