Representing Text for Joint Embedding of Text and Knowledge Bases

EMNLP 2015

Abstract: Models that learn to represent textual and knowledge base relations in the same continuous latent space are able to perform joint inferences between the two kinds of relations and obtain high accuracy on knowledge base completion (Riedel et al., 2013). In this paper we propose a model that captures the compositional structure of textual relations, and jointly optimizes entity, knowledge base, and textual relation representations. The proposed model significantly improves performance over a model that does not share parameters among textual relations with common sub-structure.

Authors: Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, Michael Gamon (Microsoft Research; Stanford University)
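The abstract compresses the model into two ideas: textual relations get compositional embeddings built from their words, and a single scoring function ranks both KB and textual triples over shared entity vectors. As a rough sketch only (the paper composes dependency-path embeddings with a convolutional network; the layer sizes, pooling choice, toy vocabulary, and every identifier below are illustrative assumptions, not the authors' configuration), here is a minimal numpy version of those two pieces:

import numpy as np

rng = np.random.default_rng(0)
DIM = 50  # embedding dimensionality (hypothetical; the paper tunes this)

# Toy vocabularies; all names here are illustrative, not from the paper.
entities = {"barack_obama": 0, "hawaii": 1}
kb_relations = {"place_of_birth": 0}
words = {"<pad>": 0, "was": 1, "born": 2, "in": 3}

E = rng.normal(scale=0.1, size=(len(entities), DIM))      # entity embeddings
R = rng.normal(scale=0.1, size=(len(kb_relations), DIM))  # KB relation embeddings
W = rng.normal(scale=0.1, size=(len(words), DIM))         # word embeddings

# Convolution filters: map a window of 3 word vectors to a DIM-dim feature.
C = rng.normal(scale=0.1, size=(DIM, 3 * DIM))

def compose_textual_relation(token_ids):
    """Embed a textual relation (e.g. a dependency path) compositionally:
    a width-3 convolution over its word embeddings, then max-pooling.
    Because C and W are shared across all textual relations, paths with
    common sub-structure share parameters."""
    padded = [words["<pad>"]] + list(token_ids) + [words["<pad>"]]
    vecs = W[padded]                          # (len + 2, DIM)
    feats = []
    for i in range(len(token_ids)):
        window = vecs[i:i + 3].reshape(-1)    # concatenate a 3-word window
        feats.append(np.tanh(C @ window))
    return np.max(feats, axis=0)              # max-pool over positions

def score(e_s, r, e_o):
    """Bilinear (DistMult-style) score: sum_i e_s[i] * r[i] * e_o[i].
    The same function scores KB relations and composed textual relations,
    which is what couples the two kinds of relations in one space."""
    return float(np.sum(e_s * r * e_o))

# Score a KB triple and a textual triple for the same entity pair.
s, o = E[entities["barack_obama"]], E[entities["hawaii"]]
kb_score = score(s, R[kb_relations["place_of_birth"]], o)
text_rel = compose_textual_relation([words["was"], words["born"], words["in"]])
text_score = score(s, text_rel, o)
print(kb_score, text_score)

Sharing C and W across every textual relation is the parameter sharing the abstract credits for the gain over treating each path as an atomic relation. A full implementation would train all of these matrices jointly with a ranking loss over negative samples; that training loop is omitted here.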
