To be presented at the Africa-NLP Workshop, EACL 2021
The recent raft of high-profile gaffes involving neural machine translation technology has brought to light the unreliability and brittleness of this fledgling technology. These revelations have worryingly coincided with two other developments: the growing use of back-translated text to augment training data in so-called low-resource natural language processing (NLP) scenarios, and the emergence of 'AI-enhanced legal-tech' as a panacea promising 'disruptive democratization' of access to legal services. Against the backdrop of these quandaries, we present a cautionary tale that sheds light on the specific risks of cavalier deployment of this technology by exploring two failure modes: androcentrism and enantiosemy. To this end, we empirically investigate the fate of pronouns and a list of contronyms when subjected to back-translation using the state-of-the-art Google Translate API. Through this, we highlight the prevalence of the defaulting-to-the-masculine phenomenon in gendered, profession-related translations, and empirically demonstrate the scale and nature of the threats posed by contronymous phrases covering both current affairs and legal issues.
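A minimal sketch of the pronoun back-translation probe described above. The round-trip outputs below are hard-coded, hypothetical stand-ins for a real MT API call (e.g. English into a genderless-pronoun pivot language and back); the helper names and mock data are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical round-trip (back-translation) results through a
# genderless-pronoun pivot language; hard-coded purely for illustration.
MOCK_ROUND_TRIPS = {
    "She is a doctor.": "He is a doctor.",
    "She is a nurse.": "She is a nurse.",
    "She is an engineer.": "He is an engineer.",
}


def back_translate(sentence: str) -> str:
    """Stand-in for translate(translate(s, 'en', pivot), pivot, 'en')."""
    return MOCK_ROUND_TRIPS[sentence]


def defaults_to_masculine(original: str) -> bool:
    """Flag the defaulting-to-the-masculine failure mode: a feminine
    pronoun in the source comes back masculine after the round trip."""
    round_trip = back_translate(original)
    return original.startswith("She ") and round_trip.startswith("He ")


# Tally which profession sentences lose the feminine pronoun.
flips = {s: defaults_to_masculine(s) for s in MOCK_ROUND_TRIPS}
# Here 'doctor' and 'engineer' flip to masculine; 'nurse' survives.
```

The same probe structure applies to the contronym experiments: feed a phrase through the round trip and check whether the returned sense contradicts the intended one.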