CoRL 2020, Spotlight Talk 158: Assisted Perception: Optimizing Observations to Communicate State


Dec 16, 2020
**Assisted Perception: Optimizing Observations to Communicate State**

Siddharth Reddy (UC Berkeley)*; Sergey Levine (UC Berkeley); Anca Dragan (EECS Department, University of California, Berkeley)

Publication: http://corlconf.github.io/paper_158/

**Abstract**

We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairment, where users may have systematic biases that lead to suboptimal behavior: they might struggle to process observations from multiple sensors simultaneously, receive delayed observations, or underestimate distances to obstacles. While we cannot directly change the user's internal beliefs or their internal state estimation process, our insight is that we can still assist them by modifying the user's observations. Instead of showing the user their true observations, ***we synthesize new observations that lead to more accurate internal state estimates when processed by the user***. We refer to this method as assistive state estimation (ASE): an automated assistant uses the true observations to infer the state of the world, then generates a modified observation for the user to consume (e.g., through an augmented reality interface), and optimizes the modification to induce the user's new beliefs to match the assistant's current beliefs. To predict the effect of the modified observation on the user's beliefs, ASE learns a model of the user's state estimation process: after each task completion, it searches for a model that would have led to beliefs that explain the user's actions. We evaluate ASE in a user study with 12 participants who each perform four tasks: two tasks with known user biases - bandwidth-limited image classification and a driving video game with observation delay - and two with unknown biases that our method has to learn - guided 2D navigation and a lunar lander teleoperation video game.
ASE's general-purpose approach to synthesizing informative observations enables a different assistance strategy to emerge in each domain, such as quickly revealing informative pixels to speed up image classification, using a dynamics model to undo observation delay in driving, identifying nearby landmarks for navigation, and exaggerating a visual indicator of tilt in the lander game. The results show that ASE substantially improves the task performance of users with bandwidth constraints, observation delay, and other unknown biases.
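The core ASE step described above can be illustrated with a toy sketch: the assistant updates its own belief from the true observation, then searches over candidate modified observations for the one whose predicted effect on the user's belief (under a model of the user's biased state estimation) best matches the assistant's belief. The arrays, state/observation sizes, and the delay-like user bias below are invented for illustration; the paper learns the user model from the user's actions, which this sketch skips by assuming the model is given.

```python
import numpy as np

# Hypothetical discrete setup: 4 world states, 4 possible observations.
N_STATES, N_OBS = 4, 4

# Assistant's accurate observation likelihood P(obs | state): near-identity.
assistant_lik = 0.85 * np.eye(N_OBS) + 0.05

# Assumed (already-learned) model of the user's biased likelihood: the user
# systematically interprets observation o as evidence for state o - 1,
# loosely mimicking an observation-delay bias.
user_lik = 0.4 * np.roll(np.eye(N_OBS), shift=1, axis=1) + 0.2

def posterior(prior, lik, obs):
    """Bayesian belief update: P(state | obs) proportional to P(obs | state) P(state)."""
    b = prior * lik[:, obs]
    return b / b.sum()

def kl(p, q):
    """KL divergence between two discrete beliefs."""
    return float(np.sum(p * np.log(p / q)))

def assist(true_obs, prior):
    """One ASE step: infer the assistant's belief from the true observation,
    then pick the modified observation predicted to induce the closest
    user belief under the learned user model."""
    assistant_belief = posterior(prior, assistant_lik, true_obs)
    scores = [kl(assistant_belief, posterior(prior, user_lik, o))
              for o in range(N_OBS)]
    return int(np.argmin(scores)), assistant_belief

prior = np.full(N_STATES, 1.0 / N_STATES)
modified_obs, belief = assist(true_obs=2, prior=prior)
```

With this bias, the true state is 2 (the assistant's belief peaks there), but showing the user the raw observation 2 would lead them to believe state 1; the assistant instead shows observation 3, which the biased user decodes as state 2. This mirrors the paper's driving example, where the assistant effectively "undoes" the user's observation delay.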
