Parsing Speech: a Neural Approach to Integrating Lexical and Acoustic-Prosodic Information

ACL 2018

Abstract: In conversational speech, the acoustic signal provides cues that help listeners disambiguate difficult parses. For automatically parsing spoken utterances, we introduce a model that integrates transcribed text and acoustic-prosodic features using a convolutional neural network over energy and pitch trajectories coupled with an attention-based recurrent neural network that accepts text and prosodic features. We find that different types of acoustic-prosodic features are individually helpful, and together give statistically significant improvements in parse and disfluency detection F1 scores over a strong text-only baseline. For this study with known sentence boundaries, error analyses show that the main benefit of acoustic-prosodic features is in sentences with disfluencies, attachment decisions are most improved, and transcription errors obscure gains from prosody.

Authors: Trang Tran, Shubham Toshniwal, Mohit Bansal, Kevin Gimpel, Karen Livescu, Mari Ostendorf (University of Washington; Toyota Technological Institute at Chicago; UNC Chapel Hill)
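To make the architecture in the abstract concrete, here is a minimal PyTorch sketch of the two components it names: a 1-D CNN over frame-level energy/pitch trajectories, pooled per word into a prosodic embedding, which is concatenated with word embeddings and fed to a recurrent encoder. All class names, dimensions, and pooling choices here are illustrative assumptions, not the authors' exact configuration, and the attention-based decoder that produces the parse is omitted.

```python
import torch
import torch.nn as nn

class ProsodyCNN(nn.Module):
    """Hypothetical sketch: 1-D convolution over frame-level energy and
    pitch tracks, max-pooled over time to give one prosodic vector."""
    def __init__(self, n_feats=2, n_filters=32, kernel=5):
        super().__init__()
        self.conv = nn.Conv1d(n_feats, n_filters, kernel, padding=kernel // 2)

    def forward(self, x):                    # x: (batch, n_feats, frames)
        h = torch.relu(self.conv(x))         # (batch, n_filters, frames)
        return h.max(dim=2).values           # (batch, n_filters)

class TextProsodyEncoder(nn.Module):
    """Hypothetical sketch: concatenate word embeddings with per-word
    prosodic embeddings from the CNN, then encode with a BiLSTM."""
    def __init__(self, vocab=1000, d_word=64, d_pros=32, d_hid=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, d_word)
        self.cnn = ProsodyCNN(n_filters=d_pros)
        self.rnn = nn.LSTM(d_word + d_pros, d_hid,
                           batch_first=True, bidirectional=True)

    def forward(self, words, frames):
        # words: (batch, T) token ids
        # frames: (batch, T, 2, n_frames) energy/pitch frames per word
        b, t = words.shape
        pros = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        x = torch.cat([self.emb(words), pros], dim=-1)
        out, _ = self.rnn(x)
        return out                           # (batch, T, 2 * d_hid)

enc = TextProsodyEncoder()
words = torch.randint(0, 1000, (2, 7))       # 2 utterances, 7 words each
frames = torch.randn(2, 7, 2, 40)            # 40 acoustic frames per word
out = enc(words, frames)
```

In this sketch the per-word pooling fixes the variable number of acoustic frames per word into a constant-size vector, which is what lets the prosodic features ride alongside the word embeddings in a standard sequence encoder.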
