Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses

ECCV 2020

Aug 25, 2020
ECCV 2020 AROW Workshop Paper

Authors: Fu Lin, Rohit Mittapalli, Prithvijit Chattopadhyay, Daniel Bolya, and Judy Hoffman

Description: Convolutional Neural Networks are known to be vulnerable to adversarial examples, which lie in subspaces close to the natural data but are not naturally occurring and have low probability. In this work, we investigate how defense techniques affect the geometry of the likelihood landscape, i.e., the likelihood of input images under the trained model. We first propose a way to visualize the likelihood landscape by leveraging an energy-based-model interpretation of discriminative classifiers. We then introduce a measure to quantify the flatness of the likelihood landscape. We observe that a subset of adversarial defense techniques share a similar effect: they flatten the likelihood landscape. We further explore directly regularizing toward a flat landscape for adversarial robustness.
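The two ingredients described above can be sketched in code. Under the energy-based interpretation of a discriminative classifier, the unnormalized log-likelihood of an input is the logsumexp of its class logits; the landscape is then probed by evaluating this quantity on a plane around the input, and its flatness summarized by the spread of values. The helper names (`landscape_slice`, `flatness`) and the standard-deviation flatness proxy are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def log_likelihood_proxy(logits):
    # Energy-based interpretation: log p(x) = logsumexp(f(x)) up to a constant,
    # where f(x) are the classifier's logits. Computed stably via the max trick.
    m = logits.max()
    return m + np.log(np.exp(logits - m).sum())

def landscape_slice(f, x, d1, d2, grid=np.linspace(-1.0, 1.0, 5)):
    # Hypothetical helper: evaluate the log-likelihood proxy on a 2-D plane
    # around input x, spanned by perturbation directions d1 and d2.
    return np.array([[log_likelihood_proxy(f(x + a * d1 + b * d2))
                      for b in grid] for a in grid])

def flatness(surface):
    # A simple flatness proxy (assumed, not the paper's measure): the standard
    # deviation of log-likelihood values over the neighborhood. Smaller = flatter.
    return surface.std()

# Toy linear "classifier" for illustration: logits = W @ x.
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, -0.5]])
x = np.array([0.5, -0.5])
d1, d2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
surface = landscape_slice(lambda z: W @ z, x, d1, d2)
```

For a real CNN, `f` would wrap a forward pass returning logits, and `d1`, `d2` would typically be random or adversarial directions in input space.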
