[ECCV2020 Oral] TextCaps: a Dataset for Image Captioning with Reading Comprehension


Authors:
- Oleksii Sidorov, Facebook AI Research
- Ronghang Hu, University of California, Berkeley / Facebook AI Research
- Marcus Rohrbach, Facebook AI Research
- Amanpreet Singh, Facebook AI Research

Abstract: Image descriptions can help visually impaired people quickly understand image content. While we have made significant progress in automatically describing images and in optical character recognition, current approaches are unable to include written text in their descriptions, although text is omnipresent in human environments and frequently critical for understanding our surroundings. To study how to comprehend text in the context of an image, we collect a novel dataset, TextCaps, with 145k captions for 28k images. Our dataset challenges a model to recognize text, relate it to its visual context, and decide what part of the text to copy or paraphrase, requiring spatial, semantic, and visual reasoning between multiple text tokens and visual entities, such as objects. We study baselines and adapt existing approaches to this new task, which we refer to as image captioning with reading comprehension. Our analysis with automatic metrics and human studies shows that our new TextCaps dataset presents many new technical challenges over previous datasets.

Paper: https://arxiv.org/pdf/2003.12462.pdf
Project page: https://textvqa.org/textcaps
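The dataset pairs each image with multiple reference captions (145k captions over 28k images, roughly five per image). A minimal sketch of grouping such annotations by image, using hypothetical field names (`image_id`, `caption`) for illustration; consult the project page for the actual annotation schema:

```python
from collections import defaultdict

# Toy TextCaps-style annotation records; the field names here are
# assumptions for illustration, not the official schema.
annotations = [
    {"image_id": "img_001", "caption": "A stop sign that reads STOP."},
    {"image_id": "img_001", "caption": "A red sign with the word STOP."},
    {"image_id": "img_002", "caption": "A bus with the number 42 on it."},
]

# Group captions by image, mirroring how TextCaps associates
# multiple reference captions with each image.
captions_by_image = defaultdict(list)
for ann in annotations:
    captions_by_image[ann["image_id"]].append(ann["caption"])

for image_id, caps in sorted(captions_by_image.items()):
    print(f"{image_id}: {len(caps)} caption(s)")
```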