AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning

NeurIPS 2020

Dec 06, 2020
Multi-task learning is an open and challenging problem in computer vision. The typical way of conducting multi-task learning with deep neural networks is either through hand-crafted schemes that share all initial layers and branch out at an ad hoc point, or through separate task-specific networks with an additional feature sharing/fusion mechanism. Unlike existing methods, we propose an adaptive sharing approach, called AdaShare, that decides what to share across which tasks to achieve the best recognition accuracy, while taking resource efficiency into account. Specifically, our main idea is to learn the sharing pattern through a task-specific policy that selectively chooses which layers to execute for a given task in the multi-task network. We efficiently optimize the task-specific policy jointly with the network weights using standard back-propagation. Experiments on three challenging and diverse benchmark datasets with a variable number of tasks demonstrate the efficacy of our approach over state-of-the-art methods.

Speakers: Ximeng Sun, Rameswar Panda, Rogerio Feris, Kate Saenko
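
The layer-selection idea can be illustrated with a small sketch. The PyTorch snippet below is a minimal, hypothetical illustration rather than the authors' implementation: a shared stack of blocks, a learnable per-task, per-block execute/skip logit table, and a Gumbel-Softmax relaxation so the discrete decisions can be trained jointly with the network weights by standard back-propagation. All names (AdaShareSketch, policy_logits), the block architecture, and the channel sizes are assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaShareSketch(nn.Module):
    """Hypothetical sketch of a per-task execute/skip policy over shared blocks."""

    def __init__(self, num_tasks, num_blocks, channels=64):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        # Shared blocks that every task may execute or skip.
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for _ in range(num_blocks)
        )
        # Per-task, per-block logits over {execute, skip}, learned with the weights.
        self.policy_logits = nn.Parameter(torch.zeros(num_tasks, num_blocks, 2))

    def forward(self, x, task_id, tau=1.0):
        h = self.stem(x)
        for b, block in enumerate(self.blocks):
            # Differentiable (straight-through) sample of the execute/skip decision.
            decision = F.gumbel_softmax(self.policy_logits[task_id, b], tau=tau, hard=True)
            # Execute the block or pass features through unchanged; at inference the
            # learned policy can be frozen so skipped blocks are not computed at all.
            h = decision[0] * block(h) + decision[1] * h
        return h

For example, model = AdaShareSketch(num_tasks=2, num_blocks=4); feats = model(torch.randn(1, 3, 64, 64), task_id=0) runs a forward pass whose sampled policy decides, block by block, whether task 0 executes or skips each shared layer, which is the kind of task-specific sharing pattern the talk describes.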
