Probabilistic FastText for Multi-Sense Word Embeddings

ACL 2018

Jan 27, 2021
Abstract: We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, subword structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of its n-gram vectors. This representation allows the model to share statistical "strength" across subword structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FastText, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including the English Rare Word dataset and foreign-language datasets. We also achieve state-of-the-art performance on benchmarks that measure the ability to discern different meanings. Thus, our model is the first to achieve the best of both worlds: multi-sense representations with enriched semantics for rare words.

Authors: Ben Athiwaratkun, Andrew Gordon Wilson, Anima Anandkumar (Cornell University)
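To make the abstract's core idea concrete, here is a minimal, illustrative sketch of the representation it describes: each word is a Gaussian mixture whose component means are built from the word's character n-gram vectors, and two words can be compared with an expected-likelihood kernel between their mixtures. All class and function names here are hypothetical, the n-gram vectors are randomly initialized rather than trained, and the equal-weight spherical-covariance kernel is a simplifying assumption, not the paper's exact formulation.

```python
import numpy as np


def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word wrapped in boundary markers, FastText-style."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1) for i in range(len(w) - n + 1)]


class ToyProbabilisticFastText:
    """Illustrative sketch: a word is a K-component Gaussian mixture whose
    component means are sums of (here: untrained, random) n-gram vectors."""

    def __init__(self, dim=4, n_components=2, seed=0):
        self.dim = dim
        self.k = n_components
        self.rng = np.random.default_rng(seed)
        self.ngram_vecs = {}  # n-gram -> per-component vectors, shape (k, dim)

    def _vec(self, gram):
        # Lazily create a vector per mixture component for each n-gram;
        # in the real model these would be learned parameters.
        if gram not in self.ngram_vecs:
            self.ngram_vecs[gram] = self.rng.normal(scale=0.1, size=(self.k, self.dim))
        return self.ngram_vecs[gram]

    def component_means(self, word):
        """Mean of each mixture component = sum of the word's n-gram vectors.
        Works for any string, so unseen or misspelt words still get a density."""
        grams = char_ngrams(word)
        return np.sum([self._vec(g) for g in grams], axis=0)  # shape (k, dim)

    def log_energy(self, w1, w2, var=1.0):
        """Log expected-likelihood kernel between the two mixtures, assuming
        equal component weights and a shared spherical covariance var*I."""
        m1, m2 = self.component_means(w1), self.component_means(w2)
        terms = []
        for i in range(self.k):
            for j in range(self.k):
                d2 = np.sum((m1[i] - m2[j]) ** 2)
                # log N(m1[i]; m2[j], 2*var*I)
                terms.append(-d2 / (4 * var) - (self.dim / 2) * np.log(4 * np.pi * var))
        return float(np.log(np.mean(np.exp(terms))))
```

Because the representation is built from n-grams, words sharing subword structure ("rare", "rarely") reuse the same underlying vectors, which is how the real model transfers strength to rare and unseen words; the multiple components are what let a trained model separate distinct senses (e.g. "rock" as stone vs. music).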
