Learning Canonical Transformations

NeurIPS 2020

Dec 06, 2020
Humans understand a set of canonical geometric transformations (such as translation and rotation) that support generalization by being untethered to any specific object. We explore inductive biases that help a neural network model learn these transformations in pixel space in a way that can generalize out-of-domain. Specifically, we find that high training set diversity is sufficient for the extrapolation of translation to unseen shapes and scales, and that an iterative training scheme achieves significant extrapolation of rotation in time.

Speakers: Zack Dulberg, Jonathan Cohen
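The iterative idea described in the abstract can be illustrated with a minimal sketch (not the authors' code): a small network is trained to apply one fixed rotation step in pixel space, and larger, unseen rotations are reached at test time by composing that step repeatedly. All names and hyperparameters here (RotationStep, STEP_DEG, the toy data) are illustrative assumptions, not details from the talk.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

STEP_DEG = 15.0  # size of one learned rotation step (assumed)

def rotate(imgs: torch.Tensor, degrees: float) -> torch.Tensor:
    """Ground-truth rotation via an affine grid, used only to build training targets."""
    th = math.radians(degrees)
    mat = torch.tensor([[math.cos(th), -math.sin(th), 0.0],
                        [math.sin(th),  math.cos(th), 0.0]])
    mat = mat.unsqueeze(0).repeat(imgs.size(0), 1, 1)
    grid = F.affine_grid(mat, list(imgs.size()), align_corners=False)
    return F.grid_sample(imgs, grid, align_corners=False)

class RotationStep(nn.Module):
    """Small conv net mapping an image to the same image rotated by one step."""
    def __init__(self, channels: int = 1, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = RotationStep()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training: learn a single small rotation step on random toy shapes.
for _ in range(200):
    imgs = (torch.rand(16, 1, 32, 32) > 0.8).float()  # stand-in for training shapes
    target = rotate(imgs, STEP_DEG)
    loss = F.mse_loss(model(imgs), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Extrapolation: compose the learned step to reach rotations never seen in training.
with torch.no_grad():
    x = (torch.rand(1, 1, 32, 32) > 0.8).float()
    for _ in range(6):  # 6 x 15 deg = 90 deg via iterated application
        x = model(x)
```

Because the step is untethered to any particular object, the same composed network can, in principle, be applied to shapes and scales outside the training distribution, which is the form of out-of-domain generalization the talk examines.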
