Contextual Embeddings: When are they worth it? (ACL 2020)
Aug 26, 2020 · 24 views
HazyResearch
Contextual embeddings have revolutionized NLP, but they are computationally expensive. In this work, we focus on the question of when contextual embeddings are worth their cost, versus when it is possible to use more efficient word representations without significant degradation in performance. In particular, we study the settings in which deep contextual embeddings give large improvements in performance relative to two much more computationally efficient baselines, classic pretrained embeddings and random embeddings, focusing on the impact of training set size and the linguistic properties of the task.
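
The linked repo contains the authors' code; as a rough illustrative sketch (not the paper's implementation), a random-embedding baseline can map each word to a fixed random vector, seeded by a stable hash of the word so every occurrence gets the same vector:

import hashlib
import numpy as np

def random_embedding(word: str, dim: int = 300) -> np.ndarray:
    # Seed from a stable hash of the word so the word -> vector mapping
    # is consistent within and across runs (unlike Python's built-in
    # hash(), which is salted per process).
    seed = int.from_bytes(hashlib.md5(word.encode("utf-8")).digest()[:4], "little")
    rng = np.random.default_rng(seed)
    # Scale by 1/sqrt(dim) so vectors have roughly unit norm.
    return rng.standard_normal(dim) / np.sqrt(dim)

tokens = "unseen words still get consistent vectors".split()
X = np.stack([random_embedding(t) for t in tokens])  # shape (6, 300)

The random vectors stay frozen; only the downstream task model is trained on top of them, which is what makes this baseline so cheap relative to running a deep contextual encoder.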

Surprisingly, we find that both of these simpler baselines can match the performance of contextual embeddings on industry-scale data, and often perform within 5 to 10% absolute accuracy of contextual embeddings on benchmark tasks. Furthermore, we identify properties of data for which contextual embeddings give particularly large gains: language containing complex structure, ambiguous word usage, and words unseen in training.

Contact: simarora@stanford.edu 
Paper: https://arxiv.org/abs/2005.09117 
Code: https://github.com/HazyResearch/random_embedding