Video Object Grounding Using Semantic Roles in Language Description

CVPR 2020

Nov 12, 2020
Authors: Arka Sadhu, Kan Chen, Ram Nevatia

Description: We explore the task of Video Object Grounding (VOG), which grounds objects in videos referred to in natural language descriptions. Previous methods apply image-grounding algorithms to VOG, fail to exploit object-relation information, and suffer from limited generalization. Here, we investigate the role of object relations in VOG and propose a novel framework, VOGNet, that encodes multi-modal object relations via self-attention with relative position encoding. To evaluate VOGNet, we propose novel contrastive sampling methods that generate more challenging grounding samples, and construct a new dataset, ActivityNet-SRL (ASRL), from existing captioning and grounding datasets. Experiments on ASRL validate the need for encoding object relations in VOG, and VOGNet outperforms competitive baselines by a significant margin.
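The core mechanism the description names, self-attention with a relative-position term added to the attention scores, can be illustrated with a minimal numpy sketch. This is a generic illustration under assumed shapes, not the paper's actual VOGNet architecture; the function name and the dense `rel_bias` matrix are hypothetical simplifications of a learned relative-position encoding.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relative_position_self_attention(X, W_q, W_k, W_v, rel_bias):
    """Self-attention over n object features, with a pairwise
    relative-position bias added to the attention logits.

    X: (n, d) object features
    W_q, W_k, W_v: (d, d) projection matrices
    rel_bias: (n, n) relative-position scores (hypothetical dense form)
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = Q.shape[-1]
    # relative positions shift the logits before normalization
    logits = Q @ K.T / np.sqrt(d_k) + rel_bias
    attn = softmax(logits, axis=-1)  # each row sums to 1
    return attn @ V

# toy example: 3 object proposals, feature dim 4
rng = np.random.default_rng(0)
n, d = 3, 4
X = rng.normal(size=(n, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
rel_bias = rng.normal(size=(n, n))
out = relative_position_self_attention(X, W_q, W_k, W_v, rel_bias)
print(out.shape)  # (3, 4)
```

In this sketch, each object attends to every other object, and the relative-position bias lets spatially or temporally nearby objects influence each other more, which is the kind of relation information the description argues image-grounding baselines miss.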
