Transformers in computational visual media: A survey
Release: release_fszqbepk3jg2jbeb33mn7cjgim

by Yifan Xu, Huapeng Wei, Minxuan Lin, Yingying Deng, Kekai Sheng, Mengdan Zhang, Fan Tang, Weiming Dong, Feiyue Huang, Changsheng Xu

Published in Computational Visual Media by Springer Science and Business Media LLC.

2021 · pp. 33–62

Abstract

Transformers, the dominant architecture for natural language processing, have also recently attracted much attention from computational visual media researchers due to their capacity for long-range representation and high performance. Transformers are sequence-to-sequence models that rely on a self-attention mechanism rather than the sequential structure of RNNs. Thus, such models can be trained in parallel and can represent global information. This study comprehensively surveys recent visual transformer works. We categorize them according to task scenario: backbone design, high-level vision, low-level vision and generation, and multimodal learning. Their key ideas are also analyzed. Differing from previous surveys, we mainly focus on visual transformer methods in low-level vision and generation. The latest works on backbone design are also reviewed in detail. For ease of understanding, we summarize the main contributions of the latest works in tables. In addition to quantitative comparisons, we present image results for low-level vision and generation tasks. Computational costs and source code links for various important works are also given in this survey to assist further development.
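As a minimal, hedged illustration of the self-attention mechanism the abstract refers to (every token attends to every other token, so the computation is parallel and captures global context), the sketch below implements single-head scaled dot-product attention in NumPy. It is not code from the surveyed paper; the function name, shapes, and random weights are illustrative assumptions only.

```python
# Minimal sketch of single-head scaled dot-product self-attention (illustrative only).
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q, w_k, w_v: (d_model, d_k) projection matrices (assumed shapes)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # project tokens to queries, keys, values
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                  # pairwise similarities between all token positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # each output mixes information from all inputs

# Toy usage: 4 "tokens" (e.g., image patches) with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)        # -> (4, 8)
```

Because the attention weights are computed for all token pairs at once, there is no recurrence over the sequence; this is what allows parallel training and global receptive fields, in contrast to the step-by-step processing of RNNs.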

Archived Files and Locations

application/pdf  5.5 MB
file_pvndr2ca6rbcrhel4lknredliy
link.springer.com (publisher)
web.archive.org (webarchive)
Type: article-journal
Stage: published
Date: 2021-10-27
Language: en
Container Metadata
Open Access Publication
In DOAJ
In ISSN ROAD
In Keepers Registry
ISSN-L:  2096-0433
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: ebf0e231-e081-4817-8323-f67c2c27a608