Correlation Hashing Network for Efficient Cross-Modal Retrieval

by Yue Cao, Mingsheng Long, Jianmin Wang, Philip S. Yu

Released as an article.

2017  

Abstract

Hashing is widely applied to approximate nearest neighbor search for large-scale multimodal retrieval with storage and computation efficiency. Cross-modal hashing improves the quality of hash coding by exploiting semantic correlations across different modalities. Existing cross-modal hashing methods first transform data into low-dimensional feature vectors, and then generate binary codes by another separate quantization step. However, suboptimal hash codes may be generated since the quantization error is not explicitly minimized and the feature representation is not jointly optimized with the binary codes. This paper presents a Correlation Hashing Network (CHN) approach to cross-modal hashing, which jointly learns good data representations tailored to hash coding and formally controls the quantization error. The proposed CHN is a hybrid deep architecture that comprises a convolutional neural network for learning good image representations, a multilayer perceptron for learning good text representations, two hashing layers for generating compact binary codes, and a structured max-margin loss that integrates all components to enable learning of similarity-preserving and high-quality hash codes. Extensive empirical studies show that CHN yields state-of-the-art cross-modal retrieval performance on standard benchmarks.
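The abstract describes the architecture only at a high level. As a rough illustration, here is a minimal PyTorch sketch of how the described components (a CNN image branch, an MLP text branch, per-modality hashing layers, and a max-margin loss with an explicit quantization penalty) could fit together. All class names, layer sizes, hyperparameters, and the exact form of the loss are assumptions made for this sketch, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CorrelationHashingNetwork(nn.Module):
    """Hypothetical sketch of the CHN hybrid architecture from the abstract:
    a CNN branch for images, an MLP branch for texts, and one hashing layer
    per modality that emits code_length-bit codes via a tanh relaxation."""

    def __init__(self, text_dim=1386, code_length=32):
        super().__init__()
        # Image branch: a small CNN stands in for the deep network used in the paper.
        self.image_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, code_length),  # image hashing layer
        )
        # Text branch: multilayer perceptron over bag-of-words text features.
        self.text_net = nn.Sequential(
            nn.Linear(text_dim, 512), nn.ReLU(),
            nn.Linear(512, code_length),  # text hashing layer
        )

    def forward(self, images, texts):
        # tanh squashes activations toward {-1, +1}, so thresholding to
        # binary codes at retrieval time loses little information.
        u = torch.tanh(self.image_net(images))  # relaxed image codes
        v = torch.tanh(self.text_net(texts))    # relaxed text codes
        return u, v


def chn_loss(u, v, sim, margin=0.5, quant_weight=0.1):
    """Illustrative max-margin pairwise loss with an explicit quantization
    penalty. `sim` is a {0, 1} matrix marking semantically similar
    image-text pairs; the exact loss in the paper may differ in form."""
    # Cosine similarity between every image code and every text code.
    cos = F.normalize(u) @ F.normalize(v).t()
    # Similar pairs are pushed above +margin, dissimilar pairs below -margin.
    hinge = torch.where(sim.bool(),
                        F.relu(margin - cos),
                        F.relu(margin + cos))
    # Quantization error: distance of relaxed codes from binary {-1, +1}.
    quant = ((u.abs() - 1) ** 2).mean() + ((v.abs() - 1) ** 2).mean()
    return hinge.mean() + quant_weight * quant
```

At retrieval time, the relaxed codes would be binarized with torch.sign and compared by Hamming distance. This is why jointly controlling the quantization error matters: the smaller the gap between the tanh outputs and {-1, +1}, the less the binarization step distorts the learned cross-modal similarities.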

Archived Files and Locations

application/pdf  1.7 MB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2017-02-20
Version   v2
Language   en
arXiv  1602.06697v2
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: f47ad48c-ffbb-4c65-901e-8bbd086de1db