Multi-modal Transformer for Video Retrieval

by Valentin Gabeur, Chen Sun, Karteek Alahari, Cordelia Schmid

Released as an article.

2020  

Abstract

The task of retrieving video content relevant to natural language queries plays a critical role in effectively handling internet-scale datasets. Most of the existing methods for this caption-to-video retrieval problem do not fully exploit cross-modal cues present in video. Furthermore, they aggregate per-frame visual features with limited or no temporal information. In this paper, we present a multi-modal transformer to jointly encode the different modalities in video, which allows each of them to attend to the others. The transformer architecture is also leveraged to encode and model the temporal information. On the natural language side, we investigate the best practices to jointly optimize the language embedding together with the multi-modal transformer. This novel framework allows us to establish state-of-the-art results for video retrieval on three datasets. More details are available at http://thoth.inrialpes.fr/research/MMT.
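The abstract sketches the core idea: project each modality's features into a shared space, tag them with modality and temporal embeddings, and let a single transformer attend across all of them, while a jointly optimized text encoder produces the query embedding for retrieval. The snippet below is a minimal PyTorch illustration of that idea under assumed dimensions and module choices; it is not the authors' released implementation (see the project page linked above).

```python
# Minimal sketch (not the authors' code) of a multi-modal transformer for
# caption-to-video retrieval: per-modality video features are concatenated
# into one token sequence, augmented with modality and temporal embeddings,
# and encoded with a standard transformer so each modality can attend to the
# others. Retrieval scores are dot products with a caption embedding.
# All names, dimensions, and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn


class MultiModalVideoEncoder(nn.Module):
    def __init__(self, feat_dims, d_model=512, n_heads=8, n_layers=4, max_len=64):
        super().__init__()
        # One linear projection per modality (e.g. appearance, motion, audio).
        self.proj = nn.ModuleDict({m: nn.Linear(d, d_model) for m, d in feat_dims.items()})
        # Learned modality and temporal (position) embeddings.
        self.mod_emb = nn.ParameterDict({m: nn.Parameter(torch.zeros(d_model)) for m in feat_dims})
        self.pos_emb = nn.Parameter(torch.zeros(max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, feats):
        # feats: dict modality -> tensor of shape (batch, time, feat_dim)
        tokens = []
        for m, x in feats.items():
            t = x.size(1)
            tokens.append(self.proj[m](x) + self.mod_emb[m] + self.pos_emb[:t])
        tokens = torch.cat(tokens, dim=1)   # all modalities in one sequence
        encoded = self.encoder(tokens)      # cross-modal and temporal attention
        return encoded.mean(dim=1)          # pooled video embedding


# Toy usage: two modalities, a batch of 2 videos, 8 time steps each.
enc = MultiModalVideoEncoder({"appearance": 2048, "audio": 128})
video = {"appearance": torch.randn(2, 8, 2048), "audio": torch.randn(2, 8, 128)}
text = torch.randn(2, 512)            # stand-in for a jointly trained caption embedding
scores = text @ enc(video).T          # caption-to-video similarity matrix
print(scores.shape)                   # torch.Size([2, 2])
```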

Archived Files and Locations

application/pdf  1.7 MB
file_c6ntqsou7fh4taov7ge2e6ttoa
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2020-07-21
Version   v1
Language   en
arXiv  2007.10639v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: c375a7d0-5ba5-4a31-8a60-0e6c4c37a4cf