Recursive Neural Language Architecture for Tag Prediction

by Saurabh Kataria

Released as an article.

2016  

Abstract

We consider the problem of learning distributed representations for tags from their associated content for the task of tag recommendation. Since tagging information is usually very sparse, effective learning from content and tag associations is a crucial and challenging task. Recently, various neural representation learning models such as WSABIE and its variants have shown promising performance, mainly due to the compact feature representations they learn in a semantic space. However, their capacity is limited by a linear compositional approach that represents tags as a sum of equal parts, which hurts their performance. In this work, we propose a neural feedback relevance model for learning tag representations with weighted feature representations. Our experiments on two widely used datasets show significant improvements in recommendation quality over various baselines.
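To make the abstract's distinction concrete, the following is a minimal sketch (not taken from the paper) contrasting the equal-weight sum of content-word embeddings used in WSABIE-style linear composition with a relevance-weighted combination; the embedding dimension, the relevance scores, and the softmax normalisation are illustrative assumptions, not the paper's actual model.

import numpy as np

# Illustrative sketch only: equal-weight vs. relevance-weighted composition
# of content-word embeddings into a tag representation. All sizes are assumed.
rng = np.random.default_rng(0)
d = 50                        # embedding dimension (assumed)
vocab = 1000                  # content-word vocabulary size (assumed)
word_emb = rng.normal(scale=0.1, size=(vocab, d))

def wsabie_style_tag_repr(word_ids):
    # Tag representation as an equal-weight sum of its content-word embeddings.
    return word_emb[word_ids].sum(axis=0)

def weighted_tag_repr(word_ids, relevance_scores):
    # Tag representation as a relevance-weighted combination of word embeddings.
    # relevance_scores: one (hypothetical) score per word, softmax-normalised
    # so the weights sum to 1.
    w = np.exp(relevance_scores - relevance_scores.max())
    w /= w.sum()
    return (w[:, None] * word_emb[word_ids]).sum(axis=0)

# Example: the same content words, composed with and without weighting.
words = np.array([3, 17, 256, 901])
scores = np.array([2.0, 0.1, -1.0, 0.5])        # hypothetical relevance scores
print(wsabie_style_tag_repr(words).shape)       # (50,)
print(weighted_tag_repr(words, scores).shape)   # (50,)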

Archived Files and Locations

application/pdf  570.1 kB
file_cb677cea2fhibp3rvuyg5pcp2m
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2016-03-24
Version   v1
Language   en
arXiv  1603.07646v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: a711ea4b-4bc0-422a-947b-6892d55dacba