BERTERS: Multimodal Representation Learning for Expert Recommendation System with Transformer

by N. Nikzad-Khasmakhi, M. A. Balafar, M. Reza Feizi-Derakhshi, and Cina Motamed

Released as an article.

2020  

Abstract

The objective of an expert recommendation system is to trace a set of candidates' expertise and preferences, recognize their expertise patterns, and identify experts. In this paper, we introduce a multimodal classification approach for expert recommendation systems (BERTERS). In the proposed system, the modalities are derived from text (articles published by candidates) and graph (their co-author connections) information. BERTERS converts text into a vector using Bidirectional Encoder Representations from Transformers (BERT). In addition, a graph representation technique called ExEm is used to extract candidates' features from the co-author network. The final representation of a candidate is the concatenation of these vectors and other features, and a classifier is built on top of this concatenation. This multimodal approach can be used both in the academic community and in community question answering. To verify the effectiveness of BERTERS, we analyze its performance on multi-label classification and visualization tasks.
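As a rough illustration of the pipeline the abstract describes, the sketch below combines a BERT text vector with a pre-computed graph vector (standing in for ExEm output, whose implementation is not reproduced here) plus extra scalar features, and trains a multi-label classifier on the concatenation. The model name bert-base-uncased, the 128-dimensional graph vectors, the toy data, and the helper names are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a BERTERS-style fusion: BERT text vector + graph vector
# + extra scalar features -> concatenation -> multi-label classifier.
# Assumptions: bert-base-uncased as the text encoder and a pre-computed
# 128-dimensional graph embedding per candidate standing in for ExEm output.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MultiLabelBinarizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def text_embedding(text: str) -> np.ndarray:
    """Encode an article with BERT and take the [CLS] token as its vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = bert(**inputs)
    return out.last_hidden_state[:, 0, :].squeeze(0).numpy()

def build_candidate_vector(article: str,
                           graph_vec: np.ndarray,
                           extra: np.ndarray) -> np.ndarray:
    """Concatenate text, graph, and additional features into one representation."""
    return np.concatenate([text_embedding(article), graph_vec, extra])

# Toy data: two candidates, each with one article, a (random) graph vector, and
# one extra feature (e.g. publication count); labels are expertise areas.
candidates = [
    ("A study on graph embedding methods.", np.random.rand(128), np.array([12.0])),
    ("Deep learning for question answering.", np.random.rand(128), np.array([5.0])),
]
labels = [{"graph mining"}, {"nlp", "question answering"}]

X = np.stack([build_candidate_vector(a, g, e) for a, g, e in candidates])
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# One-vs-rest logistic regression as a simple stand-in for the final classifier.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X[:1])))
```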

Archived Files and Locations

application/pdf  1.2 MB
file_bb6genvinjh55le2sxg2mi4ocm
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-06-30
Version   v1
Language   en
arXiv  2007.07229v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: f3cdbb49-1d2b-4d32-a2be-d33adc19f009
API URL: JSON