Contributed talk: Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks

NeurIPS 2019


Dec 14, 2019
The problem of adversarial examples has been studied extensively for neural networks. For boosting, however, and in particular for boosted decision trees and decision stumps, there are almost no results, even though boosted decision trees such as XGBoost are quite popular due to their interpretability and good prediction performance. We show in this paper that for boosted decision stumps the exact min-max optimal robust loss and test error for an $l_\infty$-attack can be computed in $O(n\,T\log T)$ time, where $T$ is the number of decision stumps and $n$ the number of data points, and that an optimal update of the ensemble can be computed in $O(n^2\,T\log T)$. While no longer exact, we also show how to optimize an upper bound on the robust loss for boosted trees. To the best of our knowledge, these are the first algorithms that directly optimize provable robustness guarantees in the area of boosting. The code for all our experiments is publicly available at https://github.com/max-andr/provably-robust-boosting

Speakers: Maksym Andriushchenko
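The reason the exact robust loss is tractable for stumps is that each stump reads a single input coordinate, so an $l_\infty$-adversary (an independent budget of $\pm\epsilon$ per coordinate) can minimize each coordinate's contribution separately: per coordinate, one only needs to check the interval ends and the split thresholds falling inside the interval. The following is a minimal illustrative sketch of that certification idea, not the paper's actual code; the ensemble representation and function names are assumptions.

```python
import math
from collections import defaultdict

def robust_margin(x, y, stumps, eps):
    """Exact worst-case margin min_{||delta||_inf <= eps} y * f(x + delta)
    for a stump ensemble f(x) = sum over (coord, thr, w_left, w_right) of
    (w_left if x[coord] <= thr else w_right).

    Since every stump depends on one coordinate only, the minimization
    decouples: we minimize each coordinate's partial sum independently.
    """
    by_coord = defaultdict(list)
    for j, thr, wl, wr in stumps:
        by_coord[j].append((thr, wl, wr))

    worst = 0.0
    for j, group in by_coord.items():
        lo, hi = x[j] - eps, x[j] + eps
        # The partial sum is piecewise constant in x[j], so it suffices to
        # evaluate at the interval ends and on both sides of each threshold
        # that lies inside [lo, hi].
        cands = {lo, hi}
        for thr, _, _ in group:
            if lo <= thr <= hi:
                cands.add(thr)                      # just left of the split
                cands.add(math.nextafter(thr, hi))  # just right of the split
        worst += min(
            y * sum(wl if c <= thr else wr for thr, wl, wr in group)
            for c in cands
        )
    return worst
```

A positive returned margin certifies that no $l_\infty$-perturbation of radius `eps` can flip the prediction of `x` with label `y in {-1, +1}`; sorting the thresholds per coordinate, as in the paper's complexity analysis, turns this per-point check into the stated $O(n\,T\log T)$ total cost.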
