AudioCaps: Generating Captions for Audios in The Wild

ACL 2019


Mar 24, 2021
Abstract: We explore the problem of audio captioning: generating natural language descriptions for any kind of audio in the wild, which has been surprisingly unexplored in previous research. We contribute a large-scale dataset of 46K pairs of audio clips and human-written text, collected via crowdsourcing on the AudioSet dataset. Our thorough empirical studies not only show that the collected captions are indeed faithful to the audio inputs, but also discover which forms of audio representation and captioning models are effective for audio captioning. From extensive experiments, we also propose two novel components that improve audio captioning performance: a top-down multi-scale encoder and aligned semantic attention.

Authors: Chris Dongjoo Kim, Byeongchang Kim, Hyunmin Lee, Gunhee Kim (Seoul National University)
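The abstract names its two components (a top-down multi-scale encoder and aligned semantic attention) without detailing their mechanics. As a generic, hypothetical sketch of the attention idea underlying such captioning models, not the paper's actual method, the snippet below computes soft attention of a decoder state over per-frame audio features; all names, shapes, and dimensions are illustrative assumptions.

```python
import numpy as np

def attend(decoder_state, audio_feats):
    """Soft attention of a caption-decoder state over encoded audio frames.

    decoder_state: (d,)   current hidden state of the caption decoder
    audio_feats:   (T, d) per-frame outputs of the audio encoder
    Returns a context vector (d,) and the attention weights (T,).
    """
    # Scaled dot-product scores between the decoder state and each frame.
    scores = audio_feats @ decoder_state / np.sqrt(decoder_state.size)
    # Numerically stable softmax over the T frames.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context vector: attention-weighted average of the frame features.
    context = weights @ audio_feats
    return context, weights

# Toy example: 5 audio frames with 8-dim features (random, illustration only).
rng = np.random.default_rng(0)
h = rng.standard_normal(8)        # hypothetical decoder hidden state
A = rng.standard_normal((5, 8))   # hypothetical encoder outputs
ctx, w = attend(h, A)
```

At each decoding step the context vector is typically concatenated with the previous word embedding before predicting the next caption word; the paper's aligned semantic attention additionally constrains these weights, in a way the abstract does not specify.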
