Target Conditioned Sampling: Optimizing Data Selection for Multilingual Neural Machine Translation

ACL 2019


Feb 02, 2021
Abstract: To improve low-resource Neural Machine Translation (NMT) with a multilingual corpus, training on only the most related high-resource language is generally more effective than using all available data (Neubig and Hu, 2018). However, it remains an open question whether a smart data selection strategy can further improve low-resource NMT with data from other auxiliary languages. In this paper, we seek to construct a sampling distribution over all multilingual data such that it minimizes the training loss of the low-resource language. Based on this formulation, we propose an efficient algorithm, Target Conditioned Sampling (TCS), which first samples a target sentence and then conditionally samples its source sentence. Experiments show that TCS brings significant gains of up to 2 BLEU on three of the four languages we test, with minimal training overhead.

Authors: Xinyi Wang, Graham Neubig (Carnegie Mellon University)
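
To make the two-step sampling idea concrete, here is a minimal, illustrative Python sketch. It is not the authors' implementation: the toy corpus, the field names, and the hand-picked per-language weights (which stand in for the learned conditional source distribution) are all assumptions made for illustration.

```python
import random
from collections import defaultdict

# Hypothetical multilingual corpus: each entry pairs a target (e.g. English)
# sentence with one source sentence and the language that source came from.
corpus = [
    {"src": "kounye a li ale", "tgt": "now he goes", "lang": "hat"},  # low-resource
    {"src": "agora ele vai",   "tgt": "now he goes", "lang": "por"},  # auxiliary
    {"src": "ahora el va",     "tgt": "now he goes", "lang": "spa"},  # auxiliary
]

# Assumed usefulness of each language's data for the low-resource language;
# in the paper this conditional distribution is derived from the training
# objective, here we simply fix toy weights.
lang_weight = {"hat": 1.0, "por": 0.6, "spa": 0.4}

# Group candidate source sentences by their shared target sentence.
by_target = defaultdict(list)
for ex in corpus:
    by_target[ex["tgt"]].append(ex)

def tcs_sample():
    """Two-step sampling: draw a target sentence, then one of its sources."""
    # Step 1: sample a target sentence (uniformly here, for simplicity).
    tgt = random.choice(list(by_target))
    # Step 2: conditionally sample a source sentence for that target,
    # weighting candidates by their (assumed) usefulness to the LRL.
    candidates = by_target[tgt]
    weights = [lang_weight[c["lang"]] for c in candidates]
    src_ex = random.choices(candidates, weights=weights, k=1)[0]
    return src_ex["src"], tgt

if __name__ == "__main__":
    for _ in range(3):
        print(tcs_sample())
```

Each call returns a (source, target) training pair; over many draws, auxiliary-language sources appear in proportion to their assumed relevance to the low-resource language.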
