Authors: Vivek B.S., Ambareesh Revanur, Naveen Venkat, R. Venkatesh Babu

Description: Adversarial Training (AT) is a straightforward way to learn robust models by augmenting training mini-batches with adversarial samples. Adversarial attack methods range from simple non-iterative (single-step) methods to computationally expensive iterative (multi-step) methods. Although single-step methods are efficient, models trained with them merely appear robust owing to gradient masking. In this work, we propose a novel regularizer named Plug-And-Pipeline (PAP) for single-step AT. The proposed regularizer mitigates gradient masking by encouraging the model to learn similar representations for both single-step and multi-step adversaries. Further, we present a novel pipelined approach that enables an efficient implementation of this regularizer. Plug-And-Pipeline yields robustness comparable to multi-step AT methods while incurring low computational overhead, similar to that of single-step AT methods.
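The regularized objective described above can be sketched as follows. This is a minimal illustration of the general idea, not the paper's exact formulation: the function names, the squared-L2 distance between representations, and the weighting parameter `lam` are all assumptions made here for clarity.

```python
def squared_l2(u, v):
    """Squared L2 distance between two representation vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def pap_style_loss(task_loss_single, repr_single, repr_multi, lam=1.0):
    """Hypothetical sketch of a PAP-style objective (names assumed, not from the paper):
    classification loss on the single-step adversary, plus a regularizer that
    pulls the model's representations of single-step and multi-step adversaries
    together, discouraging the masked-gradient solutions that plain single-step
    AT tends to converge to."""
    return task_loss_single + lam * squared_l2(repr_single, repr_multi)

# Toy usage: task loss 0.5, representations differing in one coordinate by 0.5.
loss = pap_style_loss(0.5, [1.0, 2.0], [1.0, 2.5], lam=0.1)
print(loss)  # 0.5 + 0.1 * 0.25 = 0.525
```

In an actual training loop, `repr_single` and `repr_multi` would come from forward passes on single-step and multi-step adversarial versions of the same mini-batch; the paper's pipelined scheme is what keeps the cost of obtaining the multi-step branch low.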