Π-nets: Deep Polynomial Neural Networks

CVPR 2020

Details
Authors: Grigorios G. Chrysos, Stylianos Moschoglou, Giorgos Bouritsas, Yannis Panagakis, Jiankang Deng, Stefanos Zafeiriou

Description: Deep Convolutional Neural Networks (DCNNs) are currently the method of choice for both generative and discriminative learning in computer vision and machine learning. The success of DCNNs can be attributed to the careful selection of their building blocks (e.g., residual blocks, rectifiers, sophisticated normalization schemes, to mention but a few). In this paper, we propose Π-nets, a new class of DCNNs. Π-nets are polynomial neural networks, i.e., the output is a high-order polynomial of the input. Π-nets can be implemented using a special kind of skip connection, and their parameters can be represented via high-order tensors. We empirically demonstrate that Π-nets have better representation power than standard DCNNs and that they produce good results even without non-linear activation functions across a large battery of tasks and signals, i.e., images, graphs, and audio. When used in conjunction with activation functions, Π-nets produce state-of-the-art results in challenging tasks, such as image generation. Lastly, our framework elucidates why recent generative models, such as StyleGAN, improve upon their predecessors, e.g., ProGAN.
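
The description says the output is a high-order polynomial of the input, implemented with a special kind of skip connection. Below is a minimal sketch of one way such a polynomial expansion can look in PyTorch: each step multiplies the running feature map element-wise with a fresh linear projection of the input, raising the polynomial degree by one, and no activation function is used. The class name, layer sizes, and exact recursion are illustrative assumptions for intuition, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class PolynomialExpansion(nn.Module):
    """Illustrative degree-N polynomial of the input z (hypothetical
    module; layer names and sizes are assumptions, not the paper's code).

    x_1 = U_1 z
    x_n = (U_n z) * x_{n-1} + x_{n-1}   # Hadamard product acts as a
                                        # multiplicative skip connection
    out = C x_N + beta
    """

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int, degree: int = 3):
        super().__init__()
        self.degree = degree
        self.U = nn.ModuleList(
            nn.Linear(in_dim, hidden_dim, bias=False) for _ in range(degree)
        )
        self.C = nn.Linear(hidden_dim, out_dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        x = self.U[0](z)
        for n in range(1, self.degree):
            # Multiplying by another projection of z increases the degree
            # of the polynomial in z by one; the additive term keeps
            # lower-order terms, acting as a skip connection.
            x = self.U[n](z) * x + x
        return self.C(x)


# Usage sketch: a degree-3 polynomial mapping a 64-d input to 10 outputs,
# with no elementwise non-linearities anywhere in the model.
model = PolynomialExpansion(in_dim=64, hidden_dim=128, out_dim=10, degree=3)
y = model(torch.randn(8, 64))
print(y.shape)  # torch.Size([8, 10])
```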
