Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning

ACL 2018

Jan 28, 2021
Abstract: Distant supervision has become the standard method for relation extraction. However, although it is efficient, it does not come without cost: the resulting distantly supervised training samples are often very noisy. To combat the noise, most recent state-of-the-art approaches focus on selecting the one best sentence or computing soft attention weights over the set of sentences for a specific entity pair. However, these methods are suboptimal, and the false-positive problem remains a key bottleneck for performance. We argue that incorrectly labeled candidate sentences must be treated with a hard decision rather than with soft attention weights. To do this, our paper describes a radical solution: we explore a deep reinforcement learning strategy to generate a false-positive indicator, automatically recognizing false positives for each relation type without any supervised information. Unlike the removal operation in previous studies, we redistribute them into the negative examples. The experimental results show that the proposed strategy significantly improves the performance of distant supervision compared to state-of-the-art systems.

Authors: Pengda Qin, Weiran Xu, William Yang Wang (Beijing University of Posts and Telecommunications; University of California, Santa Barbara)
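The key idea, a hard decision that moves suspected false positives into the negative set rather than discarding them, can be sketched as follows. This is not the authors' code: the policy scorer, threshold, and toy data below are all hypothetical stand-ins for the learned RL agent described in the paper.

```python
def redistribute(positives, negatives, policy_score, threshold=0.5):
    """Hard-split the distantly-labeled positives using a policy score.

    Sentences the (hypothetical) policy flags as false positives are
    appended to the negative set instead of being removed, mirroring the
    redistribution step described in the abstract.
    """
    kept, flagged = [], []
    for sent in positives:
        # Hard decision: keep if the policy trusts the distant label,
        # otherwise flag as a false positive.
        (kept if policy_score(sent) >= threshold else flagged).append(sent)
    return kept, negatives + flagged

# Toy usage with a made-up scorer (longer "sentences" are trusted more);
# in the paper this role is played by a trained policy network.
pos = ["a b c d", "a", "x y z w v"]
neg = ["noise"]
kept, new_neg = redistribute(pos, neg, lambda s: len(s.split()) / 5)
# "a" is flagged and lands in the negative set instead of being dropped.
```

The design point illustrated is the hard decision itself: unlike soft attention, each sentence either stays a positive or becomes a negative, so noisy instances still contribute training signal.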
