GAN-Leaks: A Taxonomy of Membership Inference Attacks against GANs | CCS 2020

Jan 01, 2021
In recent years, the success of deep learning has carried over from discriminative models to generative models. In particular, generative adversarial networks (GANs) have enabled a new level of performance in applications ranging from media manipulation to dataset re-generation. Despite this success, the privacy risks posed by GANs remain underexplored. In this paper, we focus on membership inference attacks against GANs, which can reveal information about a victim model's training data. Specifically, we present the first taxonomy of membership inference attacks against GANs, encompassing both existing attacks and our novel ones. We also propose the first generic attack model that can be instantiated in various settings according to the adversary's knowledge of the victim model. We complement our systematic analysis of attack vectors with a comprehensive experimental study that investigates the effectiveness of these attacks with respect to model type, training configuration, and attack type across three diverse application scenarios: images, medical data, and location data. We show consistent effectiveness across all setups, bridging the assumption and performance gaps in previous studies with a complete spectrum of results across settings. We conclude by urging practitioners to carefully consider these risks before publicizing any part of their models.

Authors: Dingfan Chen, Ning Yu, Yang Zhang, Mario Fritz (CISPA Helmholtz Center for Information Security; University of Maryland, College Park; Max Planck Institute for Informatics)
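To make the core idea concrete, here is a minimal sketch of a reconstruction-based membership inference score in the full black-box spirit the abstract describes: the attacker samples from the generator and scores a query by how well its nearest generated sample reconstructs it. This is an illustrative simplification with hypothetical function names, not the paper's exact attack implementation (which also covers white-box and calibrated variants).

```python
import numpy as np

def membership_score(query, generated_samples):
    """Reconstruction-based membership score (illustrative sketch):
    the negative squared L2 distance from the query point to its
    nearest sample drawn from the generator. A higher score means the
    generator reconstructs the query better, hinting that the query
    may have been in the training set."""
    dists = np.sum((generated_samples - query) ** 2, axis=1)
    return -float(np.min(dists))

def infer_membership(queries, generated_samples, threshold):
    """Flag each query as a suspected training member when its score
    exceeds an attacker-chosen threshold (hypothetical helper)."""
    return [membership_score(q, generated_samples) > threshold
            for q in queries]
```

In practice the attacker would replace `generated_samples` with a large batch drawn from the victim GAN's generator and pick the threshold (or a rank-based variant) from the score distribution, since absolute distances depend on the data domain.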
