Probing the Need for Visual Context in Multimodal Machine Translation

ACL 2019

Jan 20, 2021
Abstract: Current work on multimodal machine translation (MMT) has suggested that the visual modality is either unnecessary or only marginally beneficial. We posit that this is a consequence of the very simple, short and repetitive sentences used in the only available dataset for the task (Multi30K), which renders the source text sufficient as context. In the general case, however, we believe it is possible to combine visual and textual information to ground translations. In this paper we probe the contribution of the visual modality to state-of-the-art MMT models by conducting a systematic analysis in which we partially deprive the models of source-side textual context. Our results show that under limited textual context, models are capable of leveraging the visual input to generate better translations. This contradicts the current belief that MMT models disregard the visual modality because of either the quality of the image features or the way they are integrated into the model.

Authors: Ozan Caglayan, Pranava Madhyastha, Lucia Specia, Loïc Barrault (Le Mans University, Imperial College London)
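The abstract describes partially depriving models of source-side textual context. One simple way to realize such a degradation scheme is progressive masking, where only the first k source tokens are kept and the remainder are replaced by a placeholder; the sketch below is illustrative only (the function name and the `[v]` placeholder are assumptions, not the paper's exact setup).

```python
def progressive_mask(tokens, k, placeholder="[v]"):
    """Keep the first k tokens of a source sentence and replace
    the rest with a placeholder token.

    Illustrative sketch of source-side context deprivation;
    the placeholder symbol and function name are assumptions.
    """
    return tokens[:k] + [placeholder] * max(0, len(tokens) - k)

src = "a man is riding a red bicycle".split()
print(progressive_mask(src, 3))
# → ['a', 'man', 'is', '[v]', '[v]', '[v]', '[v]']
```

With k = 0 the model sees no textual context at all, which is the regime where, per the abstract, the visual input becomes most useful.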
