Abstract: We propose a learning approach for mapping context-dependent sequential instructions to actions. We address the problem of discourse and state dependencies with an attention-based model that considers both the history of the interaction and the state of the world. To train from start and goal states without access to demonstrations, we propose SESTRA, a learning algorithm that takes advantage of single-step reward observations and immediate expected reward maximization. We evaluate on the SCONE domains, and show absolute accuracy improvements of 9.8%-25.3% across the domains over approaches that use high-level logical representations.
Authors: Alane Suhr, Yoav Artzi (Cornell University)