Neural Collaborative Autoencoder
by Qibing Li, Xiaolin Zheng, Xinyue Wu (2018)
Abstract
In recent years, deep neural networks have yielded state-of-the-art
performance on several tasks. Although some recent works have focused on
combining deep learning with recommendation, we highlight three issues of
existing models. First, these models cannot work on both explicit and implicit
feedback, since the network structures are specially designed for one
particular case. Second, due to the difficulty on training deep neural
networks, existing explicit models do not fully exploit the expressive
potential of deep learning. Third, neural network models are easier to overfit
on the implicit setting than shallow models. To tackle these issues, we present
a generic recommender framework called Neural Collaborative Autoencoder (NCAE)
to perform collaborative filtering, which works well for both explicit feedback
and implicit feedback. NCAE can effectively capture the subtle hidden
relationships between interactions via a non-linear matrix factorization
process. To optimize the deep architecture of NCAE, we develop a three-stage
pre-training mechanism that combines supervised and unsupervised feature
learning. Moreover, to prevent overfitting on the implicit setting, we propose
an error reweighting module and a sparsity-aware data-augmentation strategy.
Extensive experiments on three real-world datasets demonstrate that NCAE can
significantly advance the state-of-the-art.
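To make the abstract's two central ideas concrete — reconstructing a user's interaction vector through a non-linear autoencoder, and reweighting errors on unobserved entries in the implicit setting — here is a minimal sketch in NumPy. The architecture, weights, and weight values (`w_pos`, `w_neg`) are illustrative assumptions, not the paper's actual NCAE model, which is deeper and trained with the pre-training mechanism described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, hidden = 6, 3

# Hypothetical tiny untrained weights; NCAE's real network is deeper and learned.
W1 = rng.normal(scale=0.1, size=(hidden, n_items))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(n_items, hidden))
b2 = np.zeros(n_items)

def encode(r):
    # Non-linear "factorization" step: map the interaction vector
    # to a low-dimensional hidden representation.
    return np.maximum(0.0, W1 @ r + b1)

def decode(h):
    # Reconstruct preference scores over all items.
    return W2 @ h + b2

def reweighted_loss(r, r_hat, w_pos=1.0, w_neg=0.1):
    # Error reweighting for implicit feedback: observed entries (r > 0)
    # count fully; unobserved entries are down-weighted rather than
    # trusted as true negatives, which curbs overfitting to the zeros.
    w = np.where(r > 0, w_pos, w_neg)
    return float(np.sum(w * (r - r_hat) ** 2))

# A single user's implicit feedback: 1 = interacted, 0 = unobserved.
r = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0])
r_hat = decode(encode(r))
loss = reweighted_loss(r, r_hat)
```

In a real training loop the loss would be minimized over all users, and the down-weighting of unobserved entries would be combined with the data-augmentation strategy the abstract mentions.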
Archived Files and Locations
application/pdf, 1.3 MB
arxiv.org (repository) · web.archive.org (webarchive)
arXiv: 1712.09043v2