Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models

May 27, 2021
Abstract: The ability to generate intelligent and generalizable facial expressions is essential for building human-like social robots. At present, progress in this field is hindered by the fact that each facial expression must be programmed by hand. To adapt their behavior in real time to the different situations that arise when interacting with human subjects, robots need to train themselves without human labels, make fast action decisions, and generalize the acquired knowledge to diverse and new contexts. We address this challenge by designing a physical animatronic robotic face with soft skin and by developing a vision-based self-supervised learning framework for facial mimicry. Our algorithm requires no knowledge of the robot's kinematic model, camera calibration, or a predefined expression set. By decomposing the learning process into a generative model and an inverse model, our framework can be trained from a single motor-babbling dataset. Comprehensive evaluations show that our method enables accurate and diverse face mimicry across diverse human subjects.

Authors: Boyuan Chen, Yuhang Hu, Lianfeng Li, Sara Cummings, Hod Lipson (Columbia University)
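The two-model decomposition mentioned in the abstract can be sketched roughly as follows: a generative model maps motor commands to predicted facial landmarks, and an inverse model maps target landmarks back to motor commands, with both fit on the same motor-babbling pairs. This is a minimal illustrative sketch, not the authors' implementation; the linear least-squares models, data shapes, and all variable names below are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative motor-babbling dataset (assumed shapes, not the paper's):
# random motor commands paired with the facial landmarks they produce.
n_samples, n_motors, n_landmarks = 500, 6, 20
true_map = rng.normal(size=(n_motors, n_landmarks))
motors = rng.uniform(-1, 1, size=(n_samples, n_motors))
landmarks = motors @ true_map + 0.01 * rng.normal(size=(n_samples, n_landmarks))

# Generative model: motor commands -> predicted landmarks (least squares).
G, *_ = np.linalg.lstsq(motors, landmarks, rcond=None)

# Inverse model: target landmarks -> motor commands (least squares).
F, *_ = np.linalg.lstsq(landmarks, motors, rcond=None)

# Mimicry loop sketch: from observed target landmarks, the inverse model
# proposes motor commands, and the generative model predicts the
# resulting robot expression for verification.
target = landmarks[0]
command = target @ F
predicted = command @ G
print(np.allclose(predicted, target, atol=0.1))  # the two models agree
```

The key property illustrated here is that both models are trained from a single babbling dataset collected without human labels; in the paper this role is played by learned vision-based models rather than the linear fits used above.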
