DNDNet: Reconfiguring CNN for Adversarial Robustness
Authors: Akhil Goel, Akshay Agarwal, Mayank Vatsa, Richa Singh, Nalini K. Ratha

Description: Several successful adversarial attacks have demonstrated the vulnerabilities of deep learning algorithms. These attacks are detrimental to building dependable deep-learning-based AI applications, so a defense mechanism is needed to protect the integrity of deep learning models. In this paper, we present a novel "defense layer" in a network that aims to block the generation of adversarial noise and prevent adversarial attacks in black-box and gray-box settings. The parameter-free defense layer, when applied to any convolutional network, helps achieve protection against attacks such as FGSM, Elastic-Net, and DeepFool. Experiments are performed with different CNN architectures, including VGG, ResNet, and DenseNet, on three databases, namely MNIST, CIFAR-10, and PaSC. The results showcase the efficacy of the proposed defense layer without adding any computational overhead. For example, on the CIFAR-10 database, while the attack can reduce the accuracy of the ResNet-50 model to as low as .3%, the proposed "defense layer" retains the original accuracy of .32%.
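For context on the threat model: FGSM, one of the attacks named in the abstract, perturbs an input by a small step in the direction of the sign of the loss gradient with respect to that input. The sketch below is a minimal, self-contained NumPy illustration of FGSM on a toy logistic model; the weights, input, and epsilon are illustrative assumptions and have nothing to do with the paper's networks or datasets.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One-step FGSM against a logistic model p = sigmoid(w @ x).

    For binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (p - y) * w, so the attack moves x
    by eps in the sign of that gradient.
    """
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical trained weights and one clean input with true label 1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.4, 1.0])
y = 1.0

x_adv = fgsm(x, y, w, eps=0.3)

p_clean = sigmoid(w @ x)    # model's confidence on the clean input
p_adv = sigmoid(w @ x_adv)  # confidence after the attack (lower)
```

Each coordinate of `x_adv` differs from `x` by at most `eps`, yet the model's confidence in the true class drops; the paper's defense layer is evaluated against exactly this kind of bounded single-step perturbation (among others).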