# Reducing Adversarially Robust Learning to Non-Robust PAC Learning

Dec 06, 2020
###### Details
We study the problem of reducing adversarially robust learning to standard PAC learning, i.e., the complexity of learning adversarially robust predictors using only black-box access to a non-robust learner. We give a reduction that can robustly learn any hypothesis class $\mathcal{C}$ using any non-robust PAC learner $\mathcal{A}$ for $\mathcal{C}$. The number of calls to $\mathcal{A}$ depends logarithmically on the number of allowed adversarial perturbations per example, and we give a lower bound showing this is unavoidable.

Speakers: Omar Montasser, Nati Srebro, Steve Hanneke
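To give a feel for what "robust learning via black-box calls to a non-robust learner" can look like, here is a minimal toy sketch, not the paper's actual algorithm: each training example is inflated by its allowed perturbations, and a boosting-style loop repeatedly calls a non-robust learner on the reweighted inflated set, predicting by majority vote. The 1-D threshold class, the learner `threshold_learner`, the wrapper `robust_learn`, and the perturbation set are all illustrative assumptions introduced here.

```python
def threshold_learner(points, weights):
    """Illustrative black-box non-robust learner A: weighted ERM over
    1-D threshold classifiers h_t(x) = 1 iff x >= t."""
    candidates = sorted({x for x, _ in points}) + [float("inf")]

    def weighted_err(t):
        return sum(w for (x, y), w in zip(points, weights)
                   if (1 if x >= t else 0) != y)

    best = min(candidates, key=weighted_err)
    return lambda x, t=best: 1 if x >= t else 0

def robust_learn(data, perturb, rounds=5):
    """Boosting-style wrapper (a sketch of the reduction's flavor only):
    inflate each example by its allowed perturbations, call A on the
    reweighted inflated set each round, upweight points the latest
    hypothesis gets wrong, and predict with a majority vote."""
    inflated = [(z, y) for x, y in data for z in perturb(x)]
    weights = [1.0] * len(inflated)
    hyps = []
    for _ in range(rounds):
        h = threshold_learner(inflated, weights)  # one call to A per round
        hyps.append(h)
        for i, (z, y) in enumerate(inflated):
            if h(z) != y:           # multiplicative update on mistakes
                weights[i] *= 2.0
    return lambda x: 1 if 2 * sum(h(x) for h in hyps) > len(hyps) else 0

# Toy usage: labels flip at 0; the adversary may shift x by up to 0.5.
data = [(-3.0, 0), (-2.0, 0), (2.0, 1), (3.0, 1)]
perturb = lambda x: [x - 0.5, x, x + 0.5]   # 3 allowed perturbations
f = robust_learn(data, perturb)
# Robust error: an example is a mistake if ANY perturbation is misclassified.
robust_mistakes = sum(any(f(z) != y for z in perturb(x)) for x, y in data)
```

On this separable toy problem the wrapper attains zero robust training error; the paper's actual reduction is more involved (and achieves a number of calls to $\mathcal{A}$ logarithmic in the number of perturbations), but the inflate-reweight-vote pattern conveys the black-box idea.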