Authors: Canjie Luo, Yuanzhi Zhu, Lianwen Jin, Yongpan Wang

Description: Handwritten text and scene text exhibit diverse shapes and distorted patterns, so training a robust recognition model requires a large amount of data covering as much of this diversity as possible. Compared with data collection and annotation, data augmentation is a low-cost alternative. In this paper, we propose a new method for text image augmentation. Unlike traditional augmentations such as rotation, scaling, and perspective transformation, our method learns proper and efficient augmentations that are more effective and specific to training a robust recognizer. By using a set of custom fiducial points, the proposed augmentation is flexible and controllable. Furthermore, we bridge the gap between the otherwise isolated processes of data augmentation and network optimization through joint learning: an agent network learns from the output of the recognition network and moves the fiducial points to generate more suitable training samples for it. Extensive experiments on various benchmarks, including regular scene text, irregular scene text, and handwritten text, show that the proposed augmentation and joint learning significantly boost the performance of the recognition networks. A general toolkit for geometric augmentation is available.
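The fiducial-point idea can be illustrated with a minimal sketch: place control points along the edges of a text image, jitter them within a bounded radius (the "controllable" knob), and warp the image accordingly. This is a simplified stand-in, not the paper's method — the function names, the vertical-only jitter, and the per-column linear warp are our own assumptions; the actual approach moves 2-D fiducial points and learns the perturbation with an agent network rather than sampling it randomly.

```python
import numpy as np

def make_fiducials(h, w, n=4):
    """Place n fiducial points along the top edge and n along the bottom edge."""
    xs = np.linspace(0, w - 1, n)
    top = np.stack([xs, np.zeros(n)], axis=1)
    bot = np.stack([xs, np.full(n, h - 1)], axis=1)
    return top, bot

def augment(img, radius=3.0, n=4, rng=None):
    """Jitter fiducial points within +/- radius, then warp each pixel column.

    radius=0 is the identity transform; larger values give stronger distortion.
    (Illustrative only: the paper learns the point moves instead of sampling them.)
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    top, bot = make_fiducials(h, w, n)
    # Perturb the y-coordinates of the fiducial points within the radius.
    top_y = top[:, 1] + rng.uniform(-radius, radius, n)
    bot_y = bot[:, 1] + rng.uniform(-radius, radius, n)
    # Interpolate the jittered edges across every column.
    xs = np.arange(w)
    ty = np.interp(xs, top[:, 0], top_y)
    by = np.interp(xs, bot[:, 0], bot_y)
    out = np.zeros_like(img)
    for x in xs:
        # Map output rows linearly between the warped top and bottom edges.
        src_rows = np.round(np.linspace(ty[x], by[x], h)).astype(int)
        out[:, x] = img[np.clip(src_rows, 0, h - 1), x]
    return out
```

Because the distortion is parameterized by a small set of points and a radius, an outer loop (the agent network in the paper) can adjust those parameters based on recognizer feedback instead of sampling them blindly.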