No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling

ACL 2018

Details
Abstract: Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems.

Authors: Xin Wang, Wenhu Chen, Yuan-Fang Wang, William Yang Wang (University of California, Santa Barbara)
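
The two-step idea in the abstract (learn a reward from human demonstrations, then optimize the policy against it) can be illustrated with a minimal sketch. The code below is a hypothetical PyTorch toy, not the authors' AREL implementation: a reward model is trained to rank human story demonstrations above policy samples, and the policy is updated with REINFORCE using the learned reward. All names, sizes, and the random stand-in data are assumptions for illustration.

# Hypothetical sketch of adversarial reward learning (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, MAX_LEN = 1000, 64, 20  # toy sizes, chosen arbitrarily

class Policy(nn.Module):
    """Toy story generator: an unconditional GRU language model."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.gru = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def sample(self, batch_size):
        tokens, log_probs = [], []
        inp = torch.zeros(batch_size, dtype=torch.long)  # token id 0 as <bos>
        h = None
        for _ in range(MAX_LEN):
            x = self.embed(inp).unsqueeze(1)
            out, h = self.gru(x, h)
            dist = torch.distributions.Categorical(logits=self.out(out.squeeze(1)))
            inp = dist.sample()
            tokens.append(inp)
            log_probs.append(dist.log_prob(inp))
        return torch.stack(tokens, 1), torch.stack(log_probs, 1)

class RewardModel(nn.Module):
    """Learned reward: mean-pooled token embeddings -> scalar score per story."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.score = nn.Linear(HIDDEN, 1)

    def forward(self, stories):
        return self.score(self.embed(stories).mean(dim=1)).squeeze(-1)

policy, reward_model = Policy(), RewardModel()
opt_pi = torch.optim.Adam(policy.parameters(), lr=1e-3)
opt_r = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

human_stories = torch.randint(0, VOCAB, (8, MAX_LEN))  # stand-in for real demonstrations

for step in range(100):
    # 1) Reward step: rank human demonstrations above current policy samples.
    with torch.no_grad():
        fake, _ = policy.sample(8)
    r_loss = F.softplus(reward_model(fake) - reward_model(human_stories)).mean()
    opt_r.zero_grad(); r_loss.backward(); opt_r.step()

    # 2) Policy step: REINFORCE against the learned reward (batch-mean baseline).
    samples, log_probs = policy.sample(8)
    with torch.no_grad():
        r = reward_model(samples)
        advantage = r - r.mean()
    pi_loss = -(advantage.unsqueeze(1) * log_probs).mean()
    opt_pi.zero_grad(); pi_loss.backward(); opt_pi.step()

Alternating these two updates is the general adversarial-reward pattern the abstract describes; the paper's actual models condition on photo streams and use a different reward formulation.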
