Large Transformer models routinely achieve state-of-the-art results on a number of tasks, but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L²) to O(L log L), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.
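The locality-sensitive hashing idea can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows the angular LSH scheme the abstract alludes to, where a vector is hashed to the bucket argmax([xR; −xR]) for a random projection R, so that vectors pointing in similar directions tend to land in the same bucket. The names `lsh_bucket`, `d`, and `n_buckets` are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_buckets = 8, 16

# One shared random projection; each vector is hashed to the index of the
# largest entry of [xR; -xR], giving n_buckets possible buckets.
R = rng.normal(size=(d, n_buckets // 2))

def lsh_bucket(x):
    proj = x @ R  # shape (n_buckets // 2,)
    return int(np.argmax(np.concatenate([proj, -proj])))

q = rng.normal(size=d)
# Angular LSH depends only on a vector's direction, not its magnitude:
assert lsh_bucket(q) == lsh_bucket(3.0 * q)
```

In the Reformer, queries/keys are sorted by bucket and attention is restricted to nearby positions within each bucket, which is what reduces the cost from O(L²) to O(L log L).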
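The reversible-residual idea can likewise be sketched in a few lines. This is a RevNet-style toy, not the Reformer's actual layers: `F` and `G` stand in for the attention and feed-forward sublayers. Because each block's inputs can be recomputed exactly from its outputs, activations need not be stored per layer during training.

```python
import numpy as np

def F(x):
    # stand-in for the attention sublayer (illustrative only)
    return np.tanh(x)

def G(x):
    # stand-in for the feed-forward sublayer (illustrative only)
    return 0.5 * x

def forward(x1, x2):
    # Reversible residual block: y1 = x1 + F(x2), y2 = x2 + G(y1)
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    # Recover the inputs from the outputs alone -- no activations stored.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = np.array([0.1, -0.2]), np.array([0.3, 0.4])
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
assert np.allclose(r1, x1) and np.allclose(r2, x2)
```

During backpropagation, each block's inputs are recomputed on the fly from its outputs, so only a single set of activations is kept regardless of depth N.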
Authors: Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya (UC Berkeley & Google Brain)
This paper was accepted at ICLR 2020.
About the speaker
Łukasz joined Google in 2013 and is currently a senior Research Scientist on the Google Brain team in Mountain View, where he works on fundamental aspects of deep learning and natural language processing. He has co-designed state-of-the-art neural models for machine translation, parsing, and other algorithmic and generative tasks, and co-authored the TensorFlow system, the Tensor2Tensor library, and the Transformer model. Before joining Google, Łukasz was a tenured researcher at the University Paris Diderot, where he worked on logic and automata theory. He received his PhD from RWTH Aachen University in 2008 and his MSc from the University of Wroclaw, Poland.