RETRO: Relation Retrofitting for In-Database Machine Learning on Textual Data
by
Michael Günther, Maik Thiele, Wolfgang Lehner
2019
Abstract
There are massive amounts of textual data residing in databases, valuable for
many machine learning (ML) tasks. Since ML techniques depend on numerical input
representations, word embeddings are increasingly utilized to convert symbolic
representations such as text into meaningful numbers. However, a naive
one-to-one mapping of each word in a database to a word embedding vector is not
sufficient and would lead to poor accuracies in ML tasks. Thus, we argue to
additionally incorporate the information given by the database schema into the
embedding, e.g., which words appear in the same column or are related to each
other. In this paper, we propose RETRO (RElational reTROfitting), a novel
approach to learn numerical representations of text values in databases,
capturing the best of both worlds: the rich information encoded by word
embeddings and the relational information encoded by database tables. We
formulate relation retrofitting as a learning problem and present an efficient
algorithm to solve it. We investigate the impact of various hyperparameters on
the learning problem and derive good settings for all of them. Our evaluation
shows that the proposed embeddings are ready-to-use for many ML tasks such as
classification and regression and even outperform state-of-the-art techniques
in integration tasks such as null value imputation and link prediction.
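To make the idea of retrofitting concrete, the sketch below shows the classic retrofitting update (in the style of Faruqui et al.), which pulls each word vector toward its graph neighbours while keeping it close to its original pre-trained embedding. RETRO extends this idea with relational constraints derived from the database schema; the function, parameter names, and toy graph here are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch of graph-based retrofitting, the technique RETRO builds on.
# alpha weighs fidelity to the original embedding, beta weighs neighbour
# agreement (e.g., values appearing in the same column). Illustrative only.
import numpy as np

def retrofit(embeddings, edges, alpha=1.0, beta=1.0, iterations=10):
    """Iteratively minimise, per word q with original vector q_hat:
        alpha * ||q - q_hat||^2 + beta * sum_j ||q - q_j||^2
    using the closed-form coordinate update of this quadratic objective.

    embeddings: dict word -> np.ndarray (original vectors, kept fixed)
    edges: dict word -> list of neighbour words
    """
    new = {w: v.copy() for w, v in embeddings.items()}
    for _ in range(iterations):
        for word, neighbours in edges.items():
            nbrs = [n for n in neighbours if n in new]
            if not nbrs:
                continue
            neighbour_sum = np.sum([new[n] for n in nbrs], axis=0)
            new[word] = (alpha * embeddings[word] + beta * neighbour_sum) / (
                alpha + beta * len(nbrs))
    return new

# Toy usage: two related values move closer together after retrofitting.
emb = {"berlin": np.array([1.0, 0.0]), "munich": np.array([0.0, 1.0])}
out = retrofit(emb, {"berlin": ["munich"], "munich": ["berlin"]})
```

In RETRO the neighbourhood graph would be induced by the relational schema (same-column and foreign-key relationships) rather than a hand-built lexicon, but the trade-off between the two loss terms is the same.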
Archived Files and Locations
application/pdf, 3.7 MB — arxiv.org (repository), web.archive.org (webarchive)
arXiv: 1911.12674v1