Neural network predictions are unreliable when the input sample is outside the training data distribution or corrupted by noise.
Being able to detect such failures automatically is fundamental to integrating deep learning algorithms into robotic systems.
Current approaches for uncertainty estimation of neural networks require changes to the network and optimization process, typically ignore prior knowledge about the data, and tend to make over-simplifying assumptions that underestimate uncertainty.
To address these limitations, we propose a novel framework for uncertainty estimation.
Based on Bayesian belief networks and Monte-Carlo sampling, our framework not only fully models the different sources of prediction uncertainty, but also incorporates prior data information, e.g. sensor noise.
We show theoretically that this gives us the ability to capture uncertainty better than existing methods.
In addition, our framework has several desirable properties: (i) it is agnostic to the network architecture and task; (ii) it does not require changes in the optimization process; (iii) it can be applied to already trained architectures.
We thoroughly validate the proposed framework through extensive experiments on both computer vision and control tasks, where we outperform previous methods by up to 23%.
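To make the Monte-Carlo sampling idea concrete, the sketch below estimates a predictive mean and an (epistemic) uncertainty by repeatedly querying a forward pass that stays stochastic at test time. This is a minimal illustration, not the paper's implementation: the function names (`mc_predict`, `noisy_model`) and the toy dropout-style model are hypothetical, and the sketch does not include the paper's propagation of prior sensor noise.

```python
import numpy as np

def mc_predict(stochastic_forward, x, num_samples=20):
    """Estimate prediction mean and uncertainty via Monte-Carlo sampling.

    `stochastic_forward` is assumed to be a forward pass that remains
    stochastic at test time (e.g. dropout kept active), so repeated
    calls on the same input yield different samples.
    """
    samples = np.stack([stochastic_forward(x) for _ in range(num_samples)])
    mean = samples.mean(axis=0)      # predictive mean
    epistemic = samples.var(axis=0)  # sample variance as model uncertainty
    return mean, epistemic

# Toy stochastic model: a linear map with a dropout-like random mask
# (hypothetical stand-in for a trained network with test-time dropout).
rng = np.random.default_rng(0)
w = np.array([2.0, -1.0])

def noisy_model(x):
    mask = rng.random(w.shape) > 0.1   # keep each weight with prob 0.9
    return float((w * mask / 0.9) @ x) # inverted-dropout scaling

mean, var = mc_predict(noisy_model, np.array([1.0, 1.0]), num_samples=200)
```

Because the approach only needs repeated forward passes, it matches the abstract's claims of being architecture- and task-agnostic and applicable to already trained networks; no retraining or loss modification is required.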
A. Loquercio*, M. Segù*, D. Scaramuzza
Robotics and Automation Letters, 2020
Research Page: http://rpg.ifi.uzh.ch/research_learning.html
A. Loquercio, M. Segù, and D. Scaramuzza are with the Robotics and Perception Group, Dept. of Informatics, University of Zurich, and Dept. of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland.