We introduce Transductive Information Maximization (TIM) for few-shot learning. Our method maximizes the mutual information between the query features and predictions of a few-shot task, subject to supervision constraints from the support set. Furthermore, we propose a new alternating-direction solver for our mutual-information loss, which substantially speeds up transductive-inference convergence over gradient-based optimization while achieving similar accuracy. Following standard few-shot settings, our comprehensive experiments demonstrate that TIM significantly outperforms state-of-the-art methods across all datasets and networks, while using simple cross-entropy training on the base classes, without resorting to complex meta-learning schemes. It consistently brings a 2% to 5% improvement in accuracy over the best-performing methods, not only on all the well-established few-shot benchmarks, but also in more challenging scenarios with domain shifts and larger numbers of classes.
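The transductive objective described above can be illustrated with a minimal sketch: the empirical mutual information between query samples and predicted labels decomposes as the marginal entropy of the predictions minus the conditional entropy. This is not the authors' implementation; the helper name and the NumPy formulation are illustrative assumptions.

```python
import numpy as np

def empirical_mutual_information(probs):
    """Estimate I(X; Y) ~= H(Y) - H(Y|X) from query softmax predictions.

    probs: (n_query, n_classes) array of class probabilities.
    Illustrative sketch only; not the paper's official code.
    """
    eps = 1e-12  # numerical stability for log(0)
    marginal = probs.mean(axis=0)                                 # p(y)
    h_y = -np.sum(marginal * np.log(marginal + eps))              # marginal entropy H(Y)
    h_y_given_x = -np.mean(
        np.sum(probs * np.log(probs + eps), axis=1)               # conditional entropy H(Y|X)
    )
    return h_y - h_y_given_x

# Confident, class-balanced predictions yield high mutual information,
# while uniform (uninformative) predictions yield zero.
confident = np.eye(4)                 # each query assigned to a distinct class
uniform = np.full((4, 4), 0.25)       # no information about the labels
```

Maximizing this quantity over query predictions (while the support-set cross-entropy constrains the classifier) encourages confident, well-separated class assignments, which is the intuition behind the TIM loss.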
Speakers: Malik Boudiaf, Imtiaz Masud Ziko, Jérôme Rony, Jose Dolz, Pablo Piantanida, Ismail Ben Ayed