Stolen Probability: A Structural Weakness of Neural Language Models

ACL 2020

Jul 05, 2020
Details
Neural Network Language Models (NNLMs) generate probability distributions by applying a softmax function to a distance metric formed by taking the dot product of a prediction vector with all word vectors in a high-dimensional embedding space. The dot-product distance metric forms part of the inductive bias of NNLMs. Although NNLMs optimize well with this inductive bias, we show that it results in a sub-optimal ordering of the embedding space that structurally impoverishes some words in favor of others when assigning probability. We present numerical, theoretical and empirical analyses showing that words on the interior of the convex hull of the embedding space have their probability bounded above by the probabilities of the words on the hull.

Speakers: David Demeter, Gregory Kimmel, Doug Downey
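The convex-hull bound can be illustrated with a small numerical sketch. This is not code from the paper; it assumes a toy 2-D embedding space with one word placed at the centroid so that it lies strictly inside the hull of the others. Because an interior point is a convex combination of hull points, its dot-product logit can never exceed the largest logit on the hull for any prediction vector, so its softmax probability stays bounded no matter what the model predicts.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

# Toy "vocabulary": 20 word embeddings in 2-D, plus one word placed
# at the centroid so it lies strictly inside the convex hull.
E = rng.normal(size=(20, 2))
interior = E.mean(axis=0, keepdims=True)
E = np.vstack([E, interior])          # index 20 is the interior word

hull = ConvexHull(E)
print(20 in hull.vertices)            # False: word 20 is not a hull vertex

def softmax(z):
    z = z - z.max()                   # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

# For any prediction vector h, the interior word's logit h @ E[20] is a
# convex combination of the hull words' logits, so it is dominated by
# the maximum hull logit. Sample many h to show the bound empirically.
max_interior_prob = 0.0
for _ in range(10_000):
    h = rng.normal(size=2) * rng.uniform(0, 10)   # random direction and scale
    logits = E @ h
    assert logits[20] <= logits.max() + 1e-9      # interior logit is dominated
    max_interior_prob = max(max_interior_prob, softmax(logits)[20])

print(f"max probability ever assigned to the interior word: {max_interior_prob:.3f}")
```

Since the interior word's logit never exceeds the best hull logit, the interior word can never be the argmax of the softmax, and its probability is capped below that of some hull word: the "stolen probability" effect the talk analyzes.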
