Abstract: Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Early works performed this task via simple models developed over KG triples. Recent attempts focused on either designing more complicated triple scoring models, or incorporating extra information beyond triples. This paper, by contrast, investigates the potential of using very simple constraints to improve KG embedding. We examine non-negativity constraints on entity representations and approximate entailment constraints on relation representations. The former help to learn compact and interpretable representations for entities. The latter further encode regularities of logical entailment between relations into their distributed representations. These constraints impose prior beliefs upon the structure of the embedding space, without negative impacts on efficiency or scalability. Evaluation on WordNet, Freebase, and DBpedia shows that our approach is simple yet surprisingly effective, significantly and consistently outperforming competitive baselines. The constraints imposed indeed improve model interpretability, leading to a substantially increased structuring of the embedding space.
Authors: Boyang Ding, Quan Wang, Bin Wang, Li Guo (Chinese Academy of Sciences)
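To make the two constraint types concrete, here is a minimal sketch, not the paper's actual model or formulation: it assumes a toy setup in which non-negativity is realized by projecting entity vectors onto the non-negative orthant, and approximate entailment between two relations is encouraged by an illustrative hinge-style penalty on their vectors. All names (clip_nonnegative, entailment_penalty, the toy entities and relations) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy embeddings (hypothetical): entity vectors start in [0, 1],
# relation vectors start near zero.
entities = {e: rng.uniform(0.0, 1.0, dim) for e in ["paris", "france"]}
relations = {r: rng.normal(0.0, 0.1, dim) for r in ["capital_of", "located_in"]}

def clip_nonnegative(v):
    """Project an entity vector onto the non-negative orthant
    (one simple way to enforce a non-negativity constraint)."""
    return np.maximum(v, 0.0)

def entailment_penalty(r_premise, r_conclusion, weight=1.0):
    """Soft penalty encouraging r_premise => r_conclusion to hold
    approximately; here a component-wise hinge, an illustrative
    choice rather than the paper's exact constraint."""
    return weight * np.sum(np.maximum(r_premise - r_conclusion, 0.0))

# Example: "capital_of" approximately entails "located_in", so the
# penalty would be added to the training loss as a regularizer.
loss = entailment_penalty(relations["capital_of"], relations["located_in"])
entities = {e: clip_nonnegative(v) for e, v in entities.items()}
print(f"entailment penalty: {loss:.4f}")
```

In this sketch the projection step would run after each gradient update and the penalty would be summed over all known entailment pairs, which keeps the per-step overhead linear in the embedding dimension, consistent with the abstract's claim of no negative impact on efficiency or scalability.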