Abstract: We present an efficient multi-sensor odometry system
for mobile platforms that jointly optimizes visual, lidar, and
inertial information within a single integrated factor graph. The
system runs in real time at full framerate using fixed-lag smoothing.
To perform such tight integration, a new method to extract 3D
line and planar primitives from lidar point clouds is presented.
This approach overcomes the suboptimality of typical frame-to-
frame tracking methods by treating the primitives as landmarks
and tracking them over multiple scans. True integration of
lidar features with standard visual features and IMU is made
possible using a subtle passive synchronization of lidar and
camera frames. The lightweight formulation of the 3D features
allows for real-time execution on a single CPU. Our proposed
system has been tested on a variety of platforms and scenarios,
including underground exploration with a legged robot and
outdoor scanning with a dynamically moving handheld device,
for a total duration of 96 min and a traveled distance of 2.4 km. In
these test sequences, using only one exteroceptive sensor leads
to failure due to either underconstrained geometry (affecting
lidar) or textureless areas caused by aggressive lighting changes
(affecting vision). In these conditions, our factor graph naturally
uses the best information available from each sensor modality
without any hard switches.
Authors: David Wisth, Marco Camurri, Sandipan Das, Maurice Fallon (University of Oxford)
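
As a concrete illustration of the tightly coupled formulation described in the abstract, a fixed-lag smoother of this kind can be sketched as a sliding-window MAP problem. The notation below (window \(\mathcal{K}\), residuals \(r\), covariances \(\Sigma\)) is a generic textbook form for such systems, not necessarily the paper's exact formulation:

\[
\mathcal{X}^{*} = \operatorname*{arg\,min}_{\mathcal{X}} \sum_{i \in \mathcal{K}} \Big( \left\| r_{\mathcal{I}_i} \right\|^{2}_{\Sigma_{\mathcal{I}}} + \sum_{j} \left\| r^{\mathrm{vis}}_{ij} \right\|^{2}_{\Sigma_{\mathrm{vis}}} + \sum_{k} \left\| r^{\mathrm{lid}}_{ik} \right\|^{2}_{\Sigma_{\mathrm{lid}}} \Big)
\]

Here \(r_{\mathcal{I}_i}\) are IMU preintegration residuals between consecutive states, \(r^{\mathrm{vis}}_{ij}\) are reprojection residuals of visual landmarks, and \(r^{\mathrm{lid}}_{ik}\) are point-to-line and point-to-plane residuals of the lidar primitive landmarks. Fixed-lag smoothing marginalizes states older than the lag window, which keeps the problem size bounded and enables real-time execution.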
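
The "primitives as landmarks" idea can likewise be illustrated with a minimal sketch in Python: fit a plane to a cluster of lidar points and compute signed point-to-plane distances, which can serve both to re-associate the same plane in later scans and as factor-graph residuals. The function names and the association logic here are hypothetical and do not reproduce the paper's extraction method:

    import numpy as np

    def fit_plane(points):
        # Least-squares plane through an (N, 3) cluster of lidar points:
        # centroid plus the direction of least variance as the normal.
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        normal = vt[-1]            # unit normal (smallest singular vector)
        d = -normal @ centroid     # plane equation: n . x + d = 0
        return normal, d

    def point_to_plane_residuals(points, normal, d):
        # Signed distances of points to the plane; near-zero values suggest
        # the points re-observe the same plane landmark in a new scan.
        return points @ normal + d

Tracking a plane over multiple scans then amounts to keeping the landmark in the graph and adding a new residual block whenever enough points of a new scan fall within a distance threshold of it, rather than discarding the primitive after a single frame-to-frame alignment.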