[KDD 2020] Unsupervised Paraphrasing via Deep Reinforcement Learning
Aug 13, 2020 · 9 views
Muhammad Abu Bakar Siddique
Paraphrasing is expressing the meaning of an input sentence in different wording while maintaining fluency (i.e., grammatical and syntactical correctness). Most existing work on paraphrasing uses supervised models that are limited to specific domains (e.g., image captions). Such models can neither be straightforwardly transferred to other domains nor generalize well, and creating labeled training data for new domains is expensive and laborious. The need for paraphrasing across different domains and the scarcity of labeled training data in many such domains call for exploring unsupervised paraphrase generation methods. We propose Progressive Unsupervised Paraphrasing (PUP): a novel unsupervised paraphrase generation method based on deep reinforcement learning (DRL). PUP uses a variational autoencoder (trained using a non-parallel corpus) to generate a seed paraphrase that warm-starts the DRL model. Then, PUP progressively tunes the seed paraphrase guided by our novel reward function, which combines semantic adequacy, language fluency, and expression diversity measures to quantify the quality of the generated paraphrases in each iteration without needing parallel sentences. Our extensive experimental evaluation shows that PUP outperforms unsupervised state-of-the-art paraphrasing techniques in terms of both automatic metrics and user studies on four real datasets. We also show that PUP outperforms domain-adapted supervised algorithms on several datasets. Our evaluation also shows that PUP achieves a great trade-off between semantic similarity and diversity of expression.
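To make the reward idea concrete, here is a minimal Python sketch (not the authors' code) of a PUP-style reward that mixes semantic adequacy, language fluency, and expression diversity into a single score for a candidate paraphrase. The function names `adequacy_fn`, `fluency_fn`, `diversity_fn` and the mixing weights `alpha`, `beta`, `gamma` are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of a reward that combines semantic adequacy,
# language fluency, and expression diversity, as described in the abstract.
# Component scorers and weights are assumptions for demonstration only.

def paraphrase_reward(source, candidate,
                      adequacy_fn, fluency_fn, diversity_fn,
                      alpha=0.4, beta=0.3, gamma=0.3):
    """Score a candidate paraphrase of `source` without parallel data.

    adequacy_fn(source, candidate)  -> semantic similarity, e.g. cosine
                                       similarity of sentence embeddings.
    fluency_fn(candidate)           -> grammaticality, e.g. a normalized
                                       language-model score.
    diversity_fn(source, candidate) -> expression difference, e.g. one
                                       minus n-gram overlap with the source.
    alpha, beta, gamma              -> illustrative mixing weights.
    """
    adequacy = adequacy_fn(source, candidate)     # meaning preserved?
    fluency = fluency_fn(candidate)               # grammatically correct?
    diversity = diversity_fn(source, candidate)   # worded differently?
    return alpha * adequacy + beta * fluency + gamma * diversity
```

A DRL training loop would use such a scalar reward to update the paraphrase generator at each iteration, starting from the VAE-produced seed paraphrase.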