Abstract: Advances in deep learning have led to remarkable success in augmented microscopy, enabling us to obtain high-quality microscope images without expensive microscopy hardware or sample preparation techniques. Current deep learning models for augmented microscopy are mostly U-Net-based neural networks and thus share certain drawbacks that limit their performance. In particular, U-Nets are composed of local operators only and lack dynamic non-local information aggregation. In this work, we introduce global voxel transformer networks (GVTNets), a deep learning tool for augmented microscopy that overcomes intrinsic limitations of current U-Net-based models and achieves improved performance. GVTNets are built on global voxel transformer operators, which aggregate global information, as opposed to local operators such as convolutions. We apply the proposed methods to existing datasets for three different augmented microscopy tasks under various settings.
Authors: Zhengyang Wang, Yaochen Xie & Shuiwang Ji (Texas A&M University)
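The contrast the abstract draws between local convolutions and a global operator can be illustrated with a toy sketch. The snippet below (an assumption for illustration, not the paper's actual GVTNet operator) implements attention over all voxels of a flattened volume in NumPy: each output voxel is a weighted combination of every input voxel, so the receptive field is global, whereas a 3x3x3 convolution only mixes a fixed local neighborhood. The names `global_voxel_attention`, `wq`, `wk`, and `wv` are illustrative, and the random projections stand in for learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_voxel_attention(x, wq, wk, wv):
    """Toy global operator (illustrative, not the paper's exact design).

    x: (N, C) volume flattened to N = D*H*W voxels with C channels.
    wq, wk, wv: (C, d) projection matrices (learned in a real network).
    Returns (N, d): each output voxel aggregates all N input voxels.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    # (N, N) attention weights: row i tells how voxel i attends to
    # every voxel in the whole volume, i.e., a global receptive field.
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]))
    return attn @ v

# Small example volume: 4x4x4 voxels, 8 channels.
rng = np.random.default_rng(0)
D = H = W = 4
C = d = 8
x = rng.standard_normal((D * H * W, C))
wq, wk, wv = (rng.standard_normal((C, d)) for _ in range(3))
y = global_voxel_attention(x, wq, wk, wv)
```

A convolution applied to the same volume would compute each output voxel from a small fixed window regardless of input content; here the attention weights depend on the input itself, which is the "dynamic non-local information aggregation" the abstract says U-Nets lack.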