EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing

ACL 2019

Jan 31, 2021
Abstract: We present the first sentence simplification model that learns explicit edit operations (ADD, DELETE, and KEEP) via a neural programmer-interpreter approach. Most current neural sentence simplification systems are variants of sequence-to-sequence models adopted from machine translation. These methods learn to simplify sentences as a byproduct of the fact that they are trained on complex-simple sentence pairs. By contrast, our neural programmer-interpreter is directly trained to predict explicit edit operations on targeted parts of the input sentence, resembling the way that humans perform simplification and revision. Our model outperforms previous state-of-the-art neural sentence simplification models (without external knowledge) by large margins on three benchmark text simplification corpora in terms of SARI (+0.95 WikiLarge, +1.89 WikiSmall, +1.41 Newsela), and is judged by humans to produce overall better and simpler output sentences.

Authors: Yue Dong, Zichao Li, Mehdi Rezagholizadeh, Jackie Chi Kit Cheung (McGill, Huawei Noah’s Ark Lab)
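
To make the edit-operation formulation concrete, here is a minimal Python sketch (not the authors' implementation) of the interpreter side of the idea: it applies a program of KEEP / DELETE / ADD operations to the tokens of a complex sentence to produce a simplified one. The example sentence and program below are hypothetical and only illustrate the operation semantics described in the abstract.

```python
from typing import List, Optional, Tuple

def apply_edit_program(source_tokens: List[str],
                       program: List[Tuple[str, Optional[str]]]) -> List[str]:
    """Apply a sequence of explicit edit operations to the source tokens.

    Each operation is (op, arg):
      - ("KEEP", None):   copy the current source token to the output
      - ("DELETE", None): skip the current source token
      - ("ADD", word):    insert `word` into the output without consuming
                          a source token
    """
    output, i = [], 0
    for op, arg in program:
        if op == "KEEP":
            output.append(source_tokens[i])
            i += 1
        elif op == "DELETE":
            i += 1
        elif op == "ADD":
            output.append(arg)
        else:
            raise ValueError(f"unknown edit operation: {op}")
    return output

# Hypothetical example: simplify by deleting a relative clause.
src = "the cat , which was very old , slept".split()
prog = [("KEEP", None)] * 2 + [("DELETE", None)] * 6 + [("KEEP", None)]
print(" ".join(apply_edit_program(src, prog)))  # -> "the cat slept"
```

In the paper, the programmer network predicts such an edit program conditioned on the input sentence, and the interpreter executes it; the sketch above only shows the deterministic execution step.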
