Authors: Sangmin Kim, Hyungjoon Nam, Jisu Kim, Jechang Jeong Description: Attention-based neural networks for image restoration are in the spotlight because they have achieved remarkable results both qualitatively and quantitatively. Networks with channel attention over RGB channels have proven effective in tasks such as single-image super-resolution and RAW-to-RGB mapping, while networks attentive to pixel positions have been used in image denoising. However, networks attentive to pixel positions, so-called spatial or pixel attention, have been less effective in image restoration because an image patch contains so many pixels that the per-pixel weights produced by the sigmoid function become insignificant. Moreover, such networks have mainly been used in high-level vision tasks such as image classification and image captioning, where the image itself does not need to be restored. In this paper, we propose a demoiréing network attentive in channel, color, and concatenation, named C3Net. The proposed network uses residual blocks with attention over RGB channels to take advantage of the channel attention mechanism. In addition, we introduce an L1 color loss for demoiréing to suppress moiré patterns caused by color-striped patterns. We also transfer multi-scale information by concatenation rather than by multiplication with insignificant sigmoid weights. As a result, our proposed C3Net achieved state-of-the-art results on the benchmark dataset of the NTIRE 2020 demoiréing challenge.
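The contrast the abstract draws between channel and pixel attention can be made concrete with a minimal sketch of channel attention in the squeeze-and-excitation style the residual blocks build on. The tiny sizes and the single stand-in "excitation" matrix below are illustrative assumptions, not the paper's exact architecture:

```python
# Minimal pure-Python sketch of channel attention: squeeze (global average
# pooling), excite (a stand-in linear layer + sigmoid), and rescale.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_map, excite_weights):
    """feature_map: list of C channels, each an HxW nested list.
    excite_weights: a C x C matrix standing in for the learned FC layers.
    Returns the feature map rescaled by one sigmoid weight per channel."""
    # Squeeze: global average pooling gives one descriptor per channel.
    descriptors = [
        sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        for ch in feature_map
    ]
    # Excite: linear combination of descriptors, gated by a sigmoid.
    gates = [
        sigmoid(sum(w * d for w, d in zip(row, descriptors)))
        for row in excite_weights
    ]
    # Scale: only C weights are produced, one per channel, so each stays
    # meaningful -- unlike pixel attention, which needs one sigmoid weight
    # per pixel and dilutes them over a large patch.
    return [
        [[g * v for v in row] for row in ch]
        for g, ch in zip(gates, feature_map)
    ]
```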
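The L1 color loss named in the abstract can be sketched as an L1 penalty on a low-pass (color-only) view of the output and ground truth, so that color-striped moiré is penalized directly. The 2x2 average-pool "blur" and the nested-list image layout below are assumptions for illustration, not the paper's exact formulation:

```python
# Hedged sketch of an L1 color loss: pool away fine texture, then take the
# mean absolute difference of the remaining color planes.

def avg_pool2(channel):
    """Downsample one HxW channel (nested lists) by 2x2 averaging."""
    return [
        [
            (channel[i][j] + channel[i][j + 1]
             + channel[i + 1][j] + channel[i + 1][j + 1]) / 4.0
            for j in range(0, len(channel[0]) - 1, 2)
        ]
        for i in range(0, len(channel) - 1, 2)
    ]

def l1_color_loss(pred, target):
    """pred, target: images as C x H x W nested lists.
    Returns the mean absolute difference of the pooled color planes."""
    total, count = 0.0, 0
    for pc, tc in zip(pred, target):
        for prow, trow in zip(avg_pool2(pc), avg_pool2(tc)):
            for p, t in zip(prow, trow):
                total += abs(p - t)
                count += 1
    return total / count
```

In training, a loss of this kind would typically be added to an ordinary per-pixel reconstruction loss with a weighting coefficient.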