CoRL 2020, Spotlight Talk 256: Hardware as Policy: Mechanical and Computational Co-Optimization using Deep Reinforcement Learning

Dec 16, 2020
**Hardware as Policy: Mechanical and Computational Co-Optimization using Deep Reinforcement Learning**

Tianjian Chen (Columbia University)*; Zhanpeng He (Columbia University); Matei Ciocarlie (Columbia)

Publication: http://corlconf.github.io/paper_256/

**Abstract**

Deep Reinforcement Learning (RL) has shown great success in learning complex control policies for a variety of applications in robotics. However, in most such cases, the hardware of the robot has been considered immutable, modeled as part of the environment. In this study, we explore the problem of learning hardware and control parameters together in a unified RL framework. To achieve this, we propose to model the robot body as a "hardware policy", analogous to and optimized jointly with its computational counterpart. We show that, by modeling such hardware policies as auto-differentiable computational graphs, the ensuing optimization problem can be solved efficiently by gradient-based algorithms from the Policy Optimization family. We present two such design examples: a toy mass-spring problem, and a real-world problem of designing an underactuated hand. We compare our method against traditional co-optimization approaches, and also demonstrate its effectiveness by building a physical prototype based on the learned hardware parameters.
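The core idea — treating a hardware parameter as just another node in a differentiable computational graph, so it receives gradients alongside the control parameters — can be sketched on a toy mass-spring setup like the one the abstract mentions. The following is an illustrative reconstruction, not the authors' code: it uses a hand-written reverse pass through explicit-Euler dynamics in place of an autodiff framework, a constant force `u` as a stand-in for the computational policy, the spring stiffness `k` as the hardware parameter, and a simple squared terminal-position loss rather than an RL return. All names and constants here are assumptions for the sketch.

```python
def simulate(k, u, T=20, dt=0.1, m=1.0):
    """Roll out mass-spring dynamics under explicit Euler, keeping the
    full trajectory so the backward pass can revisit each step."""
    xs, vs = [0.0], [0.0]
    for _ in range(T):
        x, v = xs[-1], vs[-1]
        xs.append(x + dt * v)                    # x_{t+1} = x_t + dt * v_t
        vs.append(v + dt * (-k * x + u) / m)     # v_{t+1} = v_t + dt * (-k x_t + u) / m
    return xs, vs

def loss_and_grads(k, u, target=1.0, T=20, dt=0.1, m=1.0):
    """Squared distance of the final position from a target, with analytic
    gradients w.r.t. both the hardware parameter k and the control u —
    i.e., manual reverse-mode differentiation of the unrolled graph."""
    xs, vs = simulate(k, u, T, dt, m)
    loss = (xs[-1] - target) ** 2
    gx, gv = 2.0 * (xs[-1] - target), 0.0        # dL/dx_T, dL/dv_T
    gk = gu = 0.0
    for t in reversed(range(T)):
        gk += gv * (-dt * xs[t] / m)             # v_{t+1} depends on k via x_t
        gu += gv * (dt / m)                      # v_{t+1} depends on u
        gx, gv = gx + gv * (-dt * k / m), gx * dt + gv
    return loss, gk, gu

# Jointly descend on hardware (k) and control (u) with one shared gradient step,
# mirroring the paper's point that both kinds of parameters sit in one graph.
k, u, lr = 2.0, 0.0, 0.3
loss0, _, _ = loss_and_grads(k, u)
for _ in range(300):
    loss, gk, gu = loss_and_grads(k, u)
    k, u = k - lr * gk, u - lr * gu
final_loss, _, _ = loss_and_grads(k, u)
```

In an autodiff framework the manual backward loop disappears: the simulator is written once as a forward graph, and gradients for stiffness and policy weights arrive through the same backward pass, which is what makes the "hardware as policy" framing practical at scale.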
