Complex Transformer: A Framework for Modeling Complex-Valued Sequence

NeurIPS 2019

Dec 15, 2019
While deep learning has received a surge of interest across many fields in recent years, major deep learning models rarely use complex numbers. However, speech, signal, and audio data are naturally complex-valued after a Fourier transform, and studies have shown that complex-valued networks can learn potentially richer representations. In this paper, we propose the Complex Transformer, which incorporates the transformer model as a backbone for sequence modeling; we also develop attention and encoder-decoder networks that operate on complex-valued input. The model achieves state-of-the-art performance on the MusicNet dataset and an In-phase Quadrature (IQ) signal dataset.

Speakers: Martin Ma, Muqiao Yang
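The abstract refers to attention that operates directly on complex-valued sequences, such as the spectra produced by a Fourier transform. The sketch below is a minimal illustration of that idea, not the authors' exact formulation: it runs scaled dot-product attention over NumPy complex arrays and converts the complex score matrix into real attention weights via its magnitude. The function name `complex_attention` and the magnitude-based softmax are assumptions made for this example.

```python
import numpy as np

def complex_attention(q, k, v):
    """Scaled dot-product attention over complex-valued inputs.

    q, k, v: complex arrays of shape (seq_len, d_model).
    The complex score matrix q @ k.conj().T is reduced to real,
    non-negative attention weights via its magnitude before a
    row-wise softmax (an illustrative choice, not the paper's).
    """
    d = q.shape[-1]
    scores = q @ k.conj().T / np.sqrt(d)                  # complex scores
    mags = np.abs(scores)                                 # real magnitudes
    mags -= mags.max(axis=-1, keepdims=True)              # numerical stability
    weights = np.exp(mags)
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax rows
    return weights @ v                                    # complex output

# Example: a short complex-valued sequence, e.g. STFT frames of audio.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))
out = complex_attention(x, x, x)
print(out.shape, out.dtype)  # (8, 16) complex128
```

A closer match to the paper would expand the complex query-key product into its real and imaginary components and combine real-valued attention terms; the magnitude-based reduction above is only a compact stand-in for demonstration.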
