An end-to-end approach for the verification problem: learning the right distance

ICML 2020

Jul 12, 2020
In this contribution, we augment the metric learning setting by introducing a parametric pseudo-distance trained jointly with the encoder. Several interpretations can then be drawn for the output of the learned distance-like model: we first show it approximates a likelihood ratio usable for hypothesis tests, and that it further induces a large divergence between the joint distributions of pairs of examples drawn from the same and from different classes. Evaluation is performed under the verification setting, which consists of determining whether sets of examples belong to the same class, even when such classes are novel and were never presented to the model during training. Empirical evaluation shows that this method defines an end-to-end approach to the verification problem, attaining better performance than simple scorers such as those based on cosine similarity and further outperforming widely used downstream classifiers. We also observe that training is much simpler under the proposed approach than in metric learning with actual distances, requiring no complex scheme for harvesting pairs of examples.

Speakers: João Monteiro, Isabela Albuquerque, Jahangir Alam, R Devon Hjelm, Tiago H. Falk
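To make the idea concrete, below is a minimal sketch of the core mechanism described in the abstract: a parametric scorer over pairs of embeddings, trained with binary cross-entropy on same-class vs. different-class labels, so that its logit approximates a log-likelihood ratio for the "same class" hypothesis. This is an illustrative toy, not the authors' implementation: the encoder is simulated by class-conditional Gaussian embeddings, the scorer is a simple logistic regression over the element-wise absolute difference of a pair, and all names (`sample_pairs`, `train`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pairs(n, dim=8, n_classes=4):
    """Simulate a frozen encoder: embeddings are class-conditional Gaussians.

    Returns pair features x = |e1 - e2| and labels y = 1 if same class.
    """
    centers = rng.normal(size=(n_classes, dim)) * 3.0
    X, Y = [], []
    for _ in range(n):
        same = rng.random() < 0.5
        c1 = rng.integers(n_classes)
        # Pick a guaranteed-different class when the pair is negative.
        c2 = c1 if same else (c1 + 1 + rng.integers(n_classes - 1)) % n_classes
        e1 = centers[c1] + rng.normal(size=dim)
        e2 = centers[c2] + rng.normal(size=dim)
        X.append(np.abs(e1 - e2))
        Y.append(1.0 if same else 0.0)
    return np.array(X), np.array(Y)

def train(X, Y, lr=0.1, steps=500):
    """Logistic regression trained with binary cross-entropy.

    At the optimum, the logit X @ w + b approximates
    log p(same | pair) / p(different | pair), i.e. a log-likelihood ratio,
    which is the interpretation the paper draws for its learned scorer.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        g = p - Y                                # BCE gradient wrt logits
        w -= lr * (X.T @ g) / len(Y)
        b -= lr * g.mean()
    return w, b

X, Y = sample_pairs(2000)
w, b = train(X, Y)
logits = X @ w + b
# Thresholding the logit at 0 is the hypothesis test: accept "same class"
# when the estimated likelihood ratio exceeds 1.
acc = float(((logits > 0) == (Y > 0.5)).mean())
print(round(acc, 2))
```

Note that the pairs here are formed by uniform sampling, mirroring the abstract's point that no hard-pair mining scheme is needed: the binary same/different objective is trained directly on randomly drawn pairs.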
