Frozen Pretrained Transformers as Universal Computation Engines

by Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch

Published in Proceedings of the AAAI Conference on Artificial Intelligence by the Association for the Advancement of Artificial Intelligence (AAAI).

2022, Volume 36, pp. 7628-7636

Abstract

We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning -- in particular, without finetuning of the self-attention and feedforward layers of the residual blocks. We consider such a model, which we call a Frozen Pretrained Transformer (FPT), and study finetuning it on a variety of sequence classification tasks spanning numerical computation, vision, and protein fold prediction. In contrast to prior works which investigate finetuning on the same modality as the pretraining dataset, we show that pretraining on natural language can improve performance and compute efficiency on non-language downstream tasks. Additionally, we perform an analysis of the architecture, comparing the performance of a randomly initialized transformer to a randomly initialized LSTM. Combining the two insights, we find language-pretrained transformers can obtain strong performance on a variety of non-language tasks.
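To make the setup concrete, below is a minimal sketch (not the authors' released code) of the freezing scheme the abstract describes, assuming PyTorch and Hugging Face `transformers` with a GPT-2 backbone. The self-attention and feedforward weights are frozen; which remaining parts stay trainable (layer norms, positional embeddings, plus a new input projection and output head) is an assumed detail beyond what the abstract states.

```python
# Sketch of a Frozen Pretrained Transformer (FPT)-style model: freeze the
# self-attention and feedforward layers of a language-pretrained GPT-2 and
# train only a small set of remaining parameters. Assumes torch + transformers.
import torch
import torch.nn as nn
from transformers import GPT2Model


class FrozenPretrainedTransformer(nn.Module):
    def __init__(self, input_dim: int, num_classes: int):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained("gpt2")  # hidden size 768

        for name, param in self.gpt2.named_parameters():
            # Attention and feedforward weights stay frozen; layer norms
            # ("ln_1", "ln_2", "ln_f") and positional embeddings ("wpe")
            # remain trainable (assumed detail beyond the abstract).
            param.requires_grad = ("ln" in name) or ("wpe" in name)

        hidden = self.gpt2.config.n_embd
        self.input_proj = nn.Linear(input_dim, hidden)      # trained from scratch
        self.output_head = nn.Linear(hidden, num_classes)   # trained from scratch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim) task tokens, e.g. bits or image patches
        h = self.gpt2(inputs_embeds=self.input_proj(x)).last_hidden_state
        return self.output_head(h[:, -1])  # classify from the final position


# Usage sketch: binary classification of length-64 sequences of scalar tokens.
model = FrozenPretrainedTransformer(input_dim=1, num_classes=2)
logits = model(torch.randn(8, 64, 1))
print(logits.shape)  # torch.Size([8, 2])
```

Only a small fraction of the parameters receives gradients under this scheme, which is what makes the comparison against fully trained, randomly initialized transformers and LSTMs informative.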

Archived Files and Locations

application/pdf  344.8 kB
file_accxsumtxjex7pgviuxkn4s3c4
ojs.aaai.org (publisher)
web.archive.org (webarchive)
Type: article-journal
Stage: published
Date: 2022-06-28
Proceedings Metadata
Not in DOAJ
Not in Keepers Registry
ISSN-L:  2159-5399
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: fb1af62c-fe3a-46a4-bd0e-5a4958d49ef9