[AAAI 2021] Learning to Sit: Synthesizing Human-Chair Interactions via Hierarchical Control

AAAI 2021

Jan 20, 2021
Abstract: Recent progress on physics-based character animation has shown impressive breakthroughs on human motion synthesis, through imitating motion capture data via deep reinforcement learning. However, results have mostly been demonstrated on imitating a single distinct motion pattern, and do not generalize to interactive tasks that require flexible motion patterns due to varying human-object spatial configurations. To bridge this gap, we focus on one class of interactive tasks -- sitting onto a chair. We propose a hierarchical reinforcement learning framework which relies on a collection of subtask controllers trained to imitate simple, reusable mocap motions, and a meta controller trained to execute the subtasks properly to complete the main task. We experimentally demonstrate the strength of our approach over different non-hierarchical and hierarchical baselines. We also show that our approach can be applied to motion prediction given an image input.

Authors: Yu-Wei Chao, Jimei Yang, Weifeng Chen, Jia Deng (NVIDIA, Adobe Research, University of Michigan, Ann Arbor, Princeton University)
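
To make the hierarchical structure in the abstract concrete, here is a minimal Python sketch of a meta controller dispatching to pretrained subtask controllers. All class names, the stub environment, the action/state dimensions, and the random selection rule are hypothetical illustrations under assumed interfaces, not the authors' implementation.

```python
# Minimal sketch of a hierarchical control loop: low-level subtask controllers
# (each assumed to imitate one reusable mocap motion) driven by a high-level
# meta controller that decides which subtask to run next.
import numpy as np


class SubtaskController:
    """Low-level policy assumed to be pretrained on one simple motion
    (e.g., walk, turn, sit)."""

    def __init__(self, name, action_dim=28):
        self.name = name
        self.action_dim = action_dim

    def act(self, state):
        # Placeholder: a trained policy would map state -> joint targets/torques.
        return np.zeros(self.action_dim)


class MetaController:
    """High-level policy that picks the next subtask controller from the
    current human-chair configuration."""

    def __init__(self, subtasks, rng=None):
        self.subtasks = subtasks
        self.rng = rng or np.random.default_rng(0)

    def select(self, state):
        # Placeholder: the paper trains this choice with RL; here we pick randomly.
        return self.subtasks[self.rng.integers(len(self.subtasks))]


class StubEnv:
    """Trivial stand-in for a physics simulator, only so the loop runs."""

    def __init__(self, state_dim=52):
        self.state_dim = state_dim

    def reset(self):
        return np.zeros(self.state_dim)

    def step(self, action):
        return np.zeros(self.state_dim), False  # (next_state, done)


def run_episode(env, meta, steps_per_subtask=60, max_steps=600):
    """Alternate meta-level subtask selection with low-level control."""
    state = env.reset()
    controller = meta.select(state)
    for t in range(max_steps):
        if t % steps_per_subtask == 0:
            controller = meta.select(state)   # meta controller switches subtasks
        action = controller.act(state)        # active subtask controller acts
        state, done = env.step(action)
        if done:
            break


if __name__ == "__main__":
    subtasks = [SubtaskController(n) for n in ("walk", "turn", "sit")]
    run_episode(StubEnv(), MetaController(subtasks))
```

The key design choice this sketch illustrates is the temporal abstraction: the meta controller makes decisions at a coarser timescale than the subtask controllers, which act at every simulation step.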
