Weakly-Supervised Spatio-Temporally Grounding Natural Sentence in Video

ACL 2019

Abstract: In this paper, we address a novel task, namely weakly-supervised spatio-temporal grounding of a natural sentence in video. Specifically, given a natural sentence and a video, we localize a spatio-temporal tube in the video that semantically corresponds to the sentence, without relying on any spatio-temporal annotations during training. First, a set of spatio-temporal tubes, referred to as instances, is extracted from the video. We then encode these instances and the sentence using our newly proposed attentive interactor, which exploits their fine-grained relationships to characterize their matching behaviors. In addition to a ranking loss, a novel diversity loss is introduced to train the attentive interactor, strengthening the matching behaviors of reliable instance-sentence pairs and penalizing unreliable ones. We also contribute a dataset, called VID-sentence, based on the ImageNet video object detection dataset, to serve as a benchmark for our task. Extensive experiments demonstrate the superiority of our model over the baseline approaches.

Authors: Zhenfang Chen, Lin Ma, Wenhan Luo, Kwan-Yee Kenneth Wong (The University of Hong Kong, Tencent AI Lab)
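For intuition, below is a minimal PyTorch-style sketch of how the two training signals described in the abstract could be combined over instance-sentence matching scores. Everything here is an illustrative assumption rather than the paper's exact formulation: the function names, the max-pooling of instance scores into a video-level score (in the spirit of multiple-instance learning under weak supervision), the margin value, and the entropy-based form of the diversity term.

```python
import torch
import torch.nn.functional as F

def ranking_loss(pos_scores, neg_scores, margin=0.5):
    """Margin-based ranking loss over video-level matching scores.

    With no spatio-temporal annotations, the video-level score is taken
    here as the max over its instance (tube) scores; `margin` is an
    assumed hyperparameter, not a value from the paper.
    """
    s_pos = pos_scores.max()  # best-matching instance in the paired video
    s_neg = neg_scores.max()  # best-matching instance for a mismatched pair
    return F.relu(margin - s_pos + s_neg)

def diversity_loss(pos_scores):
    """Entropy-style diversity term over instance scores of the paired video.

    Normalizing the scores and minimizing their entropy concentrates mass
    on reliable instance-sentence pairs while suppressing unreliable ones;
    this exact form is an assumption for illustration.
    """
    p = F.softmax(pos_scores, dim=0)
    return -(p * torch.log(p + 1e-8)).sum()

# Toy usage with scores for 4 candidate tubes:
pos = torch.tensor([0.9, 0.2, 0.1, 0.3])  # scores against the matched sentence
neg = torch.tensor([0.4, 0.5, 0.3, 0.2])  # scores against a mismatched sentence
loss = ranking_loss(pos, neg) + diversity_loss(pos)
```

The intended effect of the combined objective: the ranking term separates matched from mismatched video-sentence pairs, while the diversity term sharpens the score distribution within the matched video so that a single reliable tube dominates.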
