
Extensions and Limitations of Randomized Smoothing for Robustness Guarantees

Sep 29, 2020
Author: Jamie Hayes

Description: Randomized smoothing, a method to certify that a classifier's decision on an input is invariant under adversarial noise, offers attractive advantages over other certification methods. It operates in a black-box manner, so certification is not constrained by the size of the classifier's architecture. Here, we extend the work of Li et al. (2019), studying how the choice of divergence between smoothing measures affects the final robustness guarantee, and how the choice of smoothing measure itself can lead to guarantees in different threat models. To this end, we develop a method to certify robustness against adversarial perturbations of minimal Lp norm, for any p. We then demonstrate a negative result: randomized smoothing suffers from the curse of dimensionality; as p increases, the effective radius around an input that one can certify vanishes.
