A Bayesian Approach to Reinforcement Learning of Vision-Based Vehicular Control

ICPR 2020

Abstract: In this paper, we present a state-of-the-art reinforcement learning method for autonomous driving. Our approach employs temporal difference learning in a Bayesian framework to learn vehicle control signals from sensor data. The agent has access to images from a forward-facing camera, which are preprocessed to generate semantic segmentation maps. We trained our system using both ground-truth and estimated semantic segmentation input. Based on our observations from a large set of experiments, we conclude that training the system on ground-truth input leads to better performance than training it on estimated input, even if estimated input is used for evaluation. The system is trained and evaluated in a realistic simulated urban environment using the CARLA simulator. The simulator also contains a benchmark that allows for comparison with other systems and methods. The system requires less training time than competing approaches and achieves superior performance on the benchmark.

Authors: Zahra Gharaee, Karl Holmquist, Linbo He, Michael Felsberg (Linköping University)
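The temporal difference learning mentioned in the abstract can be illustrated with a minimal tabular TD(0) value update. This is an illustrative sketch only, not the paper's Bayesian formulation or its vision-based state representation; all names and parameters below are hypothetical:

```python
# Minimal TD(0) value update -- illustrative sketch only; the paper's
# actual method uses a Bayesian TD formulation over image-derived states.
def td0_update(value, state, reward, next_state, alpha=0.1, gamma=0.99):
    """One temporal-difference step: move V(s) toward r + gamma * V(s')."""
    td_target = reward + gamma * value[next_state]  # bootstrapped target
    td_error = td_target - value[state]             # TD error (delta)
    value[state] += alpha * td_error                # incremental update
    return td_error

# Toy usage: two abstract states with zero-initialized values.
V = {"s0": 0.0, "s1": 0.0}
err = td0_update(V, "s0", reward=1.0, next_state="s1")
```

The key idea is that the value estimate is updated from a bootstrapped target rather than a full return, which is what makes per-step (online) learning possible in a simulator such as CARLA.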