Frozen Pretrained Transformers as Universal Computation Engines
by Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch
2022, Volume 36, pp. 7628-7636
Abstract
We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning -- in particular, without finetuning of the self-attention and feedforward layers of the residual blocks. We consider such a model, which we call a Frozen Pretrained Transformer (FPT), and study finetuning it on a variety of sequence classification tasks spanning numerical computation, vision, and protein fold prediction. In contrast to prior works, which investigate finetuning on the same modality as the pretraining dataset, we show that pretraining on natural language can improve performance and compute efficiency on non-language downstream tasks. Additionally, we perform an analysis of the architecture, comparing the performance of a randomly initialized transformer to a randomly initialized LSTM. Combining the two insights, we find that language-pretrained transformers can obtain strong performance on a variety of non-language tasks.
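The abstract describes finetuning a language-pretrained transformer on a new modality while keeping the self-attention and feedforward layers of the residual blocks frozen. The sketch below illustrates that setup under assumptions not stated here: it uses a HuggingFace GPT-2 backbone as a stand-in for the pretrained language model, freezes the attention and MLP parameters of every block, and leaves a newly added input projection and classification head (plus layer norms and embeddings) trainable. The class name, dimensions, and choice of backbone are illustrative, not taken from the authors' released code.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model


class FrozenPretrainedTransformer(nn.Module):
    """Language-pretrained backbone with frozen attention/feedforward blocks."""

    def __init__(self, input_dim: int, num_classes: int):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")
        hidden = self.backbone.config.n_embd

        # Freeze the self-attention and feedforward (MLP) layers of the
        # residual blocks, as described in the abstract; layer norms and
        # embeddings stay trainable.
        for name, param in self.backbone.named_parameters():
            if ".attn." in name or ".mlp." in name:
                param.requires_grad = False

        # Modality-specific input and output layers are newly initialized
        # and trained on the downstream task.
        self.input_proj = nn.Linear(input_dim, hidden)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim) tokens from a non-language task
        h = self.input_proj(x)
        h = self.backbone(inputs_embeds=h).last_hidden_state
        return self.classifier(h[:, -1])  # predict from the final position


if __name__ == "__main__":
    model = FrozenPretrainedTransformer(input_dim=64, num_classes=10)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable: {trainable:,} / {total:,} parameters")
```

Printing the parameter counts makes the compute-efficiency point concrete: in this sketch only the input projection, output head, layer norms, and embeddings are updated during finetuning, a small fraction of the full network.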
Archived Files and Locations
application/pdf, 344.8 kB: ojs.aaai.org (publisher), web.archive.org (webarchive)
Type: article-journal
Stage: published
Date: 2022-06-28
Access all versions, variants, and formats of this work (e.g., pre-prints):
Crossref Metadata (via API)
Worldcat
SHERPA/RoMEO (journal policies)
wikidata.org
CORE.ac.uk
Semantic Scholar
Google Scholar