Generalized ODIN: Detecting Out-of-Distribution Image Without Learning From Out-of-Distribution Data

CVPR 2020

Sep 29, 2020
Authors: Yen-Chang Hsu, Yilin Shen, Hongxia Jin, Zsolt Kira

Description: Deep neural networks attain remarkable performance on data drawn from the same distribution as the training set, but can degrade significantly otherwise. Detecting whether an example is out-of-distribution (OoD) is therefore crucial for a system that can reject such samples or alert users. Recent work has made significant progress on OoD benchmarks built from small image datasets. However, many recent neural-network-based methods rely on training or tuning with both in-distribution and out-of-distribution data. The latter is generally hard to define a priori, and its selection can easily bias the learning. We base our work on the popular method ODIN, proposing two strategies that free it from the need to tune with OoD data while improving its OoD detection performance: a decomposed confidence score and a modified input pre-processing method. We show that both significantly improve detection performance. Further analysis on a larger-scale image dataset shows that the two types of distribution shift, semantic shift and non-semantic shift, differ significantly in difficulty, providing an analysis of when ODIN-like strategies do or do not work.
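The decomposed confidence idea can be illustrated with a minimal sketch: model each logit as a ratio f_i(x) = h_i(x) / g(x), where h_i carries per-class evidence and g(x) > 0 is a shared, input-dependent denominator, then use the maximum of h as the OoD score so no OoD data is needed for tuning. The NumPy code below uses hypothetical toy values, not the paper's architecture or numbers; function names are illustrative only.

```python
import numpy as np

def decomposed_logits(h, g):
    """Sketch of a decomposed confidence score in the spirit of
    Generalized ODIN: logits are modeled as f_i(x) = h_i(x) / g(x).
    h : array of per-class evidence values h_i(x)
    g : positive scalar denominator g(x) shared across classes
    """
    return h / g

def ood_score(h):
    """Use max_i h_i(x) as the detection score: a flat, low h vector
    suggests the input is out-of-distribution."""
    return float(np.max(h))

# Hypothetical toy inputs (for illustration only):
h_in = np.array([5.0, 0.5, 0.2])   # peaked evidence: in-distribution
h_out = np.array([0.6, 0.5, 0.4])  # flat evidence: likely OoD
g = 2.0

print(decomposed_logits(h_in, g))          # logits after decomposition
print(ood_score(h_in) > ood_score(h_out))  # in-dist scores higher
```

Thresholding `ood_score` on held-out in-distribution data alone then yields a rejection rule, which is the point of avoiding OoD data during tuning.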
