Adversarially-learned Inference via an Ensemble of Discrete Undirected Graphical Models

NeurIPS 2020

Dec 06, 2020
Undirected graphical models are compact representations of joint probability distributions over random variables. Given a distribution over inference tasks, graphical models of arbitrary topology can be trained using empirical risk minimization. However, when faced with new task distributions, these models (EGMs) often need to be re-trained. Instead, we propose an inference-agnostic adversarial training framework for producing an ensemble of graphical models (AGMs). The ensemble is optimized to generate data, and inference is learned as a by-product of this endeavor. AGMs perform comparably with EGMs on inference tasks that the latter were specifically optimized for. Most importantly, AGMs show significantly better generalization capabilities across distributions of inference tasks. AGMs are also on par with GibbsNet, a state-of-the-art deep neural architecture, which, like AGMs, allows conditioning on any subset of random variables. Finally, AGMs allow fast data sampling, competitive with Gibbs sampling from EGMs.

Speakers: Adarsh K Jeewajee, Leslie Pack Kaelbling
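For context on the baseline the abstract compares against, the sketch below shows classical Gibbs sampling from a small pairwise binary undirected graphical model, p(x) ∝ exp(Σᵢ θᵢxᵢ + Σ₍ᵢ,ⱼ₎ wᵢⱼxᵢxⱼ). This is a generic illustration of the standard algorithm, not the paper's AGM sampler; all names and parameters here are hypothetical.

```python
import math
import random

def gibbs_sample(theta, w, n_sweeps, seed=0):
    """One Gibbs chain over binary variables x_i in {0, 1}.

    theta: list of per-variable biases theta_i.
    w: dict mapping an edge (i, j) to its coupling w_ij.
    n_sweeps: number of full passes over all variables.
    Returns the final state of the chain (a list of 0/1 values).
    """
    rng = random.Random(seed)
    n = len(theta)

    # Build neighbor lists once so each conditional update is O(degree).
    neighbors = {i: [] for i in range(n)}
    for (i, j), wij in w.items():
        neighbors[i].append((j, wij))
        neighbors[j].append((i, wij))

    # Random initial state.
    x = [rng.randint(0, 1) for _ in range(n)]

    for _ in range(n_sweeps):
        for i in range(n):
            # The conditional p(x_i = 1 | rest) is a logistic of the
            # local field: bias plus coupled contributions from neighbors.
            field = theta[i] + sum(wij * x[j] for j, wij in neighbors[i])
            p1 = 1.0 / (1.0 + math.exp(-field))
            x[i] = 1 if rng.random() < p1 else 0
    return x

# Example: a 3-variable chain x0 - x1 - x2 with positive couplings,
# so neighboring variables tend to agree in the sampled states.
state = gibbs_sample(theta=[0.5, 0.0, -0.5],
                     w={(0, 1): 1.0, (1, 2): 1.0},
                     n_sweeps=50)
```

Each sweep costs time linear in the number of edges, which is why the abstract's claim of AGM sampling being competitive with Gibbs sampling is a meaningful speed comparison.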
