Simple and Effective Multi-Paragraph Reading Comprehension

ACL 2018

Abstract: We introduce a method of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Most current question answering models cannot scale to document or multi-document input, and naively applying these models to each paragraph independently often results in their being distracted by irrelevant text. We show that it is possible to significantly improve performance by using a modified training scheme that teaches the model to ignore non-answer-containing paragraphs. Our method involves sampling multiple paragraphs from each document and using an objective function that requires the model to produce globally correct output. We additionally identify and improve upon a number of other design decisions that arise when working with document-level data. Experiments on TriviaQA and SQuAD show that our method advances the state of the art, including a 10-point gain on TriviaQA.

Authors: Christopher Clark, Matt Gardner (University of Washington)
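The core idea of the objective, requiring globally correct output across sampled paragraphs, can be sketched as a single softmax shared over the candidate answer scores of all paragraphs from one document, so that a paragraph with no answer is pushed to assign low probability everywhere. The function name `shared_norm_loss` and its inputs are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def shared_norm_loss(paragraph_scores, answer_mask):
    """A minimal sketch of a shared-normalization objective.

    paragraph_scores: list of per-paragraph token score arrays.
    answer_mask: parallel list marking correct answer-start tokens with 1.
    Scores from all paragraphs are normalized with ONE softmax, and the
    loss is the negative log of the total probability mass placed on
    correct answer positions anywhere in the document.
    """
    scores = np.concatenate([np.asarray(s, dtype=float) for s in paragraph_scores])
    mask = np.concatenate([np.asarray(m) for m in answer_mask]).astype(bool)
    # Numerically stable log-softmax over the combined score vector.
    m_val = scores.max()
    log_z = m_val + np.log(np.exp(scores - m_val).sum())
    log_probs = scores - log_z
    # Negative log of the summed probability of all correct positions.
    return -np.log(np.exp(log_probs[mask]).sum())
```

For example, with two paragraphs of two tokens each, all scores zero, and a single correct position, the shared softmax is uniform over four tokens, so the loss is -log(1/4); raising the score of the correct token (or lowering scores in the distractor paragraph) reduces the loss, which is the training signal that teaches the model to suppress non-answer paragraphs.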
