This talk was given at ICML 2020. The work was done jointly with Aditi Raghunathan from Stanford University, Moksh Jain from NIT Karnataka, and me, Harsha Vardhan Simhadri, and Prateek Jain from Microsoft Research.
In this work we focus on the problem of one-class classification. In particular, we consider standard unsupervised anomaly detection and one of its variants, which we call one-class classification with limited negatives. For both problems, we propose a new method that exploits the assumption that normal data lie on a low-dimensional, locally linear, and well-sampled manifold in order to generate informative adversarial examples. We call our method DROCC. DROCC works in standard anomaly detection settings, does not require additional assumptions, and achieves state-of-the-art accuracy across a variety of domains, including tabular, time-series, audio, and image data. In this talk, we will first motivate DROCC by discussing prior work and some of its challenges, then describe our method, and finally conclude with a few empirical results.
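To make the idea of manifold-based adversarial negatives concrete, here is a minimal, hypothetical sketch of how such examples might be searched for: gradient ascent on an anomaly score, with the perturbation projected onto an annulus around a normal point so the generated negative stays close to, but off, the data manifold. The radius `r`, multiplier `gamma`, and the toy linear scorer are all illustrative assumptions, not the exact procedure described in the talk.

```python
import numpy as np

def project_to_annulus(delta, r, gamma):
    # Rescale the perturbation so its norm lies in [r, gamma * r]
    # (illustrative annulus constraint; parameters are assumptions).
    norm = np.linalg.norm(delta)
    if norm < 1e-12:
        # Degenerate case: place the perturbation on the inner sphere.
        return np.full_like(delta, r / np.sqrt(delta.size))
    target = np.clip(norm, r, gamma * r)
    return delta * (target / norm)

def adversarial_negative(x, grad_fn, r, gamma=2.0, steps=10, lr=0.1, rng=None):
    # Gradient-ascent search for a point near the normal point x that the
    # scorer still rates highly, constrained to the annulus around x.
    rng = np.random.default_rng() if rng is None else rng
    delta = project_to_annulus(rng.standard_normal(x.shape), r, gamma)
    for _ in range(steps):
        delta = delta + lr * grad_fn(x + delta)  # ascend the score
        delta = project_to_annulus(delta, r, gamma)
    return x + delta

# Toy usage with a linear scorer score(z) = w . z, whose gradient is w.
w = np.ones(3)
x_adv = adversarial_negative(np.zeros(3), lambda z: w, r=1.0,
                             rng=np.random.default_rng(0))
```

The generated point always lies at distance between `r` and `gamma * r` from `x`, so it serves as an informative negative for training a one-class classifier.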