Pretrained Transformers as Universal Computation Engines (Paper Explained)

Mar 17, 2021 | 46 views
Abstract: We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning -- in particular, without finetuning of the self-attention and feedforward layers of the residual blocks. We consider such a model, which we call a Frozen Pretrained Transformer (FPT), and study finetuning it on a variety of sequence classification tasks spanning numerical computation, vision, and protein fold prediction. In contrast to prior works which investigate finetuning on the same modality as the pretraining dataset, we show that pretraining on natural language improves performance and compute efficiency on non-language downstream tasks. In particular, we find that such pretraining enables FPT to generalize in zero-shot to these modalities, matching the performance of a transformer fully trained on these tasks.

Authors: Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch (UC Berkeley, Facebook AI Research, Google Brain)
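To make the setup concrete, the sketch below shows one way the frozen finetuning scheme described in the abstract could look in code: the pretrained self-attention and feedforward weights of GPT-2 are frozen, while the layer norms, positional embeddings, a new input projection, and a new output head stay trainable. It assumes HuggingFace's GPT2Model; the input projection, classifier head, and the toy usage example are illustrative assumptions, not the authors' released implementation.

```python
# Minimal FPT sketch, assuming the HuggingFace `transformers` GPT-2 model.
# The input projection and classification head are hypothetical stand-ins
# for the paper's modality-specific input/output layers.
import torch
import torch.nn as nn
from transformers import GPT2Model

class FrozenPretrainedTransformer(nn.Module):
    def __init__(self, input_dim: int, num_classes: int):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained("gpt2")

        # Freeze self-attention and feedforward weights; keep layer norms
        # (parameter names containing "ln") and positional embeddings ("wpe")
        # trainable, as in the paper's finetuning recipe.
        for name, param in self.gpt2.named_parameters():
            param.requires_grad = ("ln" in name) or ("wpe" in name)

        hidden = self.gpt2.config.n_embd  # 768 for base GPT-2
        self.input_proj = nn.Linear(input_dim, hidden)    # trained from scratch
        self.classifier = nn.Linear(hidden, num_classes)  # trained from scratch

    def forward(self, x):
        # x: (batch, seq_len, input_dim) tokens from the new modality
        h = self.input_proj(x)
        out = self.gpt2(inputs_embeds=h).last_hidden_state
        return self.classifier(out[:, -1])  # classify from the final position

# Toy usage on a made-up binary sequence-classification task
# (the tokenization here is an assumption for illustration only):
model = FrozenPretrainedTransformer(input_dim=1, num_classes=2)
logits = model(torch.rand(8, 16, 1))  # (batch=8, seq_len=16) -> (8, 2)
```

Only the unfrozen parameters (roughly 0.1% of the model in this setup) receive gradient updates, which is what makes the comparison against fully trained transformers in the paper interesting.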

0:00 - Intro & Overview
2:00 - Frozen Pretrained Transformers
4:50 - Evaluated Tasks
10:05 - The Importance of Training LayerNorm
17:10 - Modality Transfer
25:10 - Network Architecture Ablation
26:10 - Evaluation of the Attention Mask
27:20 - Are FPTs Overfitting or Underfitting?
28:20 - Model Size Ablation
28:50 - Is Initialization All You Need?
31:40 - Full Model Training Overfits
32:15 - Again the Importance of Training LayerNorm
33:10 - Conclusions & Comments