Spotlight talk for "Form2Fit: Learning Shape Priors for Generalizable Assembly from Disassembly", which will appear at the IEEE International Conference on Robotics and Automation (ICRA) 2020.
Nominated for 'Best Paper Award in Automation'.
Project page: https://form2fit.github.io/
arXiv link: https://arxiv.org/abs/1910.13675
Blog post: https://ai.googleblog.com/2019/10/learning-to-assemble-and-to-generalize.html
Abstract: Is it possible to learn policies for robotic assembly that can generalize to new objects? We explore this idea in the context of the kit assembly task. Since classic methods rely heavily on object pose estimation, they often struggle to generalize to new objects without 3D CAD models or task-specific training data. In this work, we propose to formulate the kit assembly task as a shape matching problem, where the goal is to learn a shape descriptor that establishes geometric correspondences between object surfaces and their target placement locations from visual input. This formulation enables the model to acquire a broader understanding of how shapes and surfaces fit together for assembly, allowing it to generalize to new objects and kits. To obtain training data for our model, we present a self-supervised data-collection pipeline that obtains ground truth object-to-placement correspondences by disassembling complete kits. Our resulting real-world system, Form2Fit, learns effective pick and place strategies for assembling objects into a variety of kits, achieving 90% average success rates under different initial conditions (e.g., varying object and kit poses), 94% success under new configurations of multiple kits, and over 86% success with completely new objects and kits. Code, videos, and supplemental material are available at https://form2fit.github.io/
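To make the shape-matching formulation concrete, below is a minimal, hypothetical sketch (not the paper's implementation) of the core matching step: given learned descriptors for pixels on the object surface and for candidate placement locations in the kit, correspondences can be established by nearest-neighbor lookup in descriptor space. The function name, array shapes, and interface here are illustrative assumptions.

```python
import numpy as np

def match_descriptors(object_desc: np.ndarray, place_desc: np.ndarray) -> np.ndarray:
    """Toy shape-matching step (hypothetical interface, not Form2Fit's code).

    object_desc: (N, D) descriptors sampled from the object's surface.
    place_desc:  (M, D) descriptors sampled from candidate placement locations.
    Returns, for each object sample, the index of its nearest placement
    descriptor, i.e., the estimated geometric correspondence.
    """
    # Pairwise L2 distances between every object/placement descriptor pair,
    # computed via broadcasting: (N, 1, D) - (1, M, D) -> (N, M, D) -> (N, M).
    dists = np.linalg.norm(object_desc[:, None, :] - place_desc[None, :, :], axis=-1)
    # Nearest placement descriptor per object sample.
    return dists.argmin(axis=1)
```

In the actual system described in the paper, these dense descriptors are produced by a learned matching network from visual input, and matching is combined with pick and place predictions to select where and how to assemble each object.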