Authors: Mohammadreza Mostajabi, Ching Ming Wang, Darsh Ranjan, Gilbert Hsyu

Description: Current automotive radars output sparse point clouds with very low angular resolution. Such output lacks semantic information about the environment and has prevented radars from providing reliable redundancy when combined with cameras. This paper introduces the first true imaging-radar dataset for diverse urban driving environments, with resolution matching that of lidar. To illustrate the need for high-resolution semantic information in modern radar applications, we present an unsupervised pretraining algorithm for deep neural networks that detects moving vehicles in radar data with limited ground-truth labels. We envision that the detail visible in this type of high-resolution radar image will allow us to borrow from decades of computer vision research and develop radar applications that were not previously possible, such as mapping, localization, and drivable-area detection. This dataset is our first attempt to introduce such data to the vision community, and we will continue to provide datasets with improved features in the future.