Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles

Sep 29, 2020
Details
Authors: Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, A. K. Qin, Yun Yang

Description: Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. Existing works have mostly focused on either digital adversarial examples created via small and imperceptible perturbations, or physical-world adversarial examples created with large and less realistic distortions that are easily identified by human observers. In this paper, we propose a novel approach, called Adversarial Camouflage (AdvCam), to craft and camouflage physical-world adversarial examples into natural styles that appear legitimate to human observers. Specifically, AdvCam transfers large adversarial perturbations into customized styles, which are then "hidden" on-target object or off-target background. Experimental evaluation shows that, in both digital and physical-world scenarios, adversarial examples crafted by AdvCam are well camouflaged and highly stealthy, while remaining effective in fooling state-of-the-art DNN image classifiers. Hence, AdvCam is a flexible approach that can help craft stealthy attacks to evaluate the robustness of DNNs.
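
The abstract describes AdvCam as transferring a large adversarial perturbation into a customized style. Below is a minimal digital-only sketch of that general idea, not the authors' implementation: it assumes a standard Gram-matrix style loss over VGG-16 features combined with a targeted misclassification loss against a pretrained ResNet-50. The layer indices, loss weights, optimizer, and function names are illustrative assumptions.

```python
# Sketch (not the authors' code) of hiding a large adversarial perturbation
# inside a chosen style: jointly minimize a targeted classification loss and
# a style-transfer loss. Layer choices and weights are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

vgg = models.vgg16(pretrained=True).features.to(device).eval()
classifier = models.resnet50(pretrained=True).to(device).eval()
STYLE_LAYERS = [3, 8, 15, 22]  # assumed VGG-16 ReLU layers for style statistics


def style_features(x):
    """Collect feature maps from the assumed style layers."""
    feats, h = [], x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in STYLE_LAYERS:
            feats.append(h)
    return feats


def gram(f):
    """Gram matrix used by standard style-transfer losses."""
    b, c, hh, ww = f.shape
    f = f.view(b, c, hh * ww)
    return f @ f.transpose(1, 2) / (c * hh * ww)


def advcam_like_attack(content, style, target_class, steps=300, lr=0.01,
                       adv_weight=1.0, style_weight=1e4):
    """Optimize an image that mimics the style image's textures while being
    classified as `target_class`. Hyper-parameters are illustrative only."""
    x = content.detach().clone().to(device).requires_grad_(True)
    style_grams = [gram(f).detach() for f in style_features(style.to(device))]
    opt = torch.optim.Adam([x], lr=lr)
    target = torch.tensor([target_class], device=device)
    for _ in range(steps):
        opt.zero_grad()
        style_loss = sum(F.mse_loss(gram(f), g)
                         for f, g in zip(style_features(x), style_grams))
        adv_loss = F.cross_entropy(classifier(x), target)  # targeted attack
        (adv_weight * adv_loss + style_weight * style_loss).backward()
        opt.step()
        x.data.clamp_(0, 1)  # keep a valid image
    return x.detach()


# Example usage (shapes assumed): content and style are (1, 3, H, W) tensors in [0, 1]
# adv_image = advcam_like_attack(content_img, style_img, target_class=919)
```

The actual method also handles physical-world scenarios, which this digital sketch omits; it only illustrates how a large perturbation can be constrained to look like a natural style.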
