Abstract: In this paper, we demonstrate a novel approach to calibrating LiDAR and stereo cameras using a deep neural network. First, the LiDAR point cloud is projected to a depth map in the left camera's view, and the depth map for the corresponding stereo pair is computed as well. Next, two modality-specific feature extraction modules preprocess and extract features from the two depth maps respectively. The resulting feature maps are then concatenated and fed to a global regression module that learns the geometric correspondences between the modalities. An SPP layer is applied to the output of this module to produce a fixed-length feature vector. Finally, two separate sets of output layers regress the rotation errors and translation errors.
Authors: Shan Wu, Amnir Hadachi, Damien Vivet, Yadu Prabhakar (University of Tartu)
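The pipeline described in the abstract (two modality-specific branches, concatenation, a global regression module, SPP pooling, and separate rotation/translation heads) can be sketched as follows. This is a minimal illustrative sketch in PyTorch: all layer counts, channel widths, and SPP grid sizes are assumptions for demonstration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CalibNetSketch(nn.Module):
    """Illustrative sketch of the described calibration network.

    Layer sizes and depths here are hypothetical; only the overall
    structure (two branches -> concat -> global regression -> SPP ->
    rotation/translation heads) follows the abstract.
    """

    def __init__(self, spp_levels=(1, 2, 4)):
        super().__init__()

        # Modality-specific feature extraction: one branch per depth map.
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )

        self.lidar_branch = branch()
        self.stereo_branch = branch()

        # Global regression module over the concatenated feature maps,
        # intended to learn cross-modal geometric correspondences.
        self.global_reg = nn.Sequential(
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

        # SPP: pool at several grid sizes so the output vector length
        # is fixed regardless of the input image resolution.
        self.spp_levels = spp_levels
        spp_dim = 64 * sum(l * l for l in spp_levels)

        # Separate output heads for rotation and translation errors.
        self.rot_head = nn.Linear(spp_dim, 3)
        self.trans_head = nn.Linear(spp_dim, 3)

    def forward(self, lidar_depth, stereo_depth):
        # Extract and concatenate modality-specific features.
        feats = torch.cat(
            [self.lidar_branch(lidar_depth), self.stereo_branch(stereo_depth)],
            dim=1,
        )
        feats = self.global_reg(feats)

        # Spatial pyramid pooling -> fixed-length feature vector.
        pooled = [
            torch.flatten(F.adaptive_avg_pool2d(feats, l), 1)
            for l in self.spp_levels
        ]
        vec = torch.cat(pooled, dim=1)
        return self.rot_head(vec), self.trans_head(vec)


net = CalibNetSketch()
# Two single-channel depth maps (LiDAR-projected and stereo-derived).
rot, trans = net(torch.randn(2, 1, 96, 128), torch.randn(2, 1, 96, 128))
print(rot.shape, trans.shape)
```

Thanks to the SPP layer, the same network accepts depth maps of varying resolution while still producing a fixed-length vector for the two regression heads.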