CoRL 2020, Spotlight Talk 320: Learning rich touch representations through cross-modal self-supervision


Dec 18, 2020
"**Learning rich touch representations through cross-modal self-supervision** Martina Zambelli (DeepMind)*; Yusuf Aytar (DeepMind); Francesco Visin (Google DeepMind); Yuxiang Zhou (DeepMind); Raia Hadsell (Deepmind) Publication: http://corlconf.github.io/paper_320/ **Abstract** The sense of touch is fundamental in several manipulation tasks, but rarely used in robot manipulation. In this work we tackle the problem of learning rich touch features from cross-modal self-supervision. We evaluate them identifying objects and their properties in a few-shot classification setting. Two new datasets are introduced using a simulated anthropomorphic robotic hand equipped with tactile sensors on both synthetic and daily life objects. Several self-supervised learning methods are benchmarked on these datasets, by evaluating few-shot classification on unseen objects and poses. Our experiments indicate that cross-modal self-supervision effectively improves touch representation, and in turn has great potential to enhance robot manipulation skills.
