[Spotlight at NeurIPS 2020] Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations

NeurIPS 2020

Details
Authors: Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh

Abstract: A deep reinforcement learning (DRL) agent observes its states through observations, which may contain natural measurement errors or adversarial noise. Since the observations deviate from the true states, they can mislead the agent into making suboptimal actions. Several works have demonstrated this vulnerability via adversarial attacks, but existing approaches for improving the robustness of DRL in this setting have had limited success and lack theoretical principles. We show that naively applying existing techniques for improving robustness in classification tasks, such as adversarial training, is ineffective for many RL tasks. We propose the state-adversarial Markov decision process (SA-MDP) to study the fundamental properties of this problem, and develop a theoretically principled policy regularization which can be applied to a large family of DRL algorithms, including proximal policy optimization (PPO), deep deterministic policy gradient (DDPG), and deep Q networks (DQN), for both discrete and continuous action control problems. We significantly improve the robustness of PPO, DDPG, and DQN agents under a suite of strong white-box adversarial attacks, including new attacks of our own. Additionally, we find that a robust policy noticeably improves DRL performance even without an adversary in a number of environments.
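The sketch below illustrates the general idea of a state-adversarial policy regularizer for a discrete-action policy: penalize how much the policy's action distribution can change when the observed state is perturbed within a small ball. This is a minimal illustration, not the authors' released implementation; the function name, the PGD-style inner maximization, and the hyperparameters (`eps`, `steps`, `step_size`) are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def state_adversarial_kl(policy_net, states, eps=0.05, steps=5, step_size=0.02):
    """Approximate max over s' in an L-inf ball B_eps(s) of
    KL(pi(.|s) || pi(.|s')) with a few PGD-style ascent steps.

    Assumes policy_net(states) returns action logits (discrete actions).
    """
    # Reference (clean) action distribution; held fixed for the inner maximization.
    with torch.no_grad():
        clean_log_probs = F.log_softmax(policy_net(states), dim=-1)
        clean_probs = clean_log_probs.exp()

    # Start from a random point inside the perturbation ball.
    delta = torch.empty_like(states).uniform_(-eps, eps)
    delta.requires_grad_(True)

    for _ in range(steps):
        pert_log_probs = F.log_softmax(policy_net(states + delta), dim=-1)
        # F.kl_div(input, target) computes KL(target || input) with log-space input.
        kl = F.kl_div(pert_log_probs, clean_probs, reduction="batchmean")
        grad, = torch.autograd.grad(kl, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()   # ascend on the KL divergence
            delta.clamp_(-eps, eps)            # project back into B_eps(s)

    # Regularizer value at the (approximate) worst-case perturbation,
    # differentiable with respect to the policy parameters.
    pert_log_probs = F.log_softmax(policy_net(states + delta.detach()), dim=-1)
    return F.kl_div(pert_log_probs, clean_probs, reduction="batchmean")
```

In this style of training, the regularizer is added to the usual RL objective, e.g. `loss = ppo_loss + kappa * state_adversarial_kl(policy_net, states)` for some weight `kappa`; the inner maximization over the perturbation ball could also be handled with other solvers (the choice of PGD here is only illustrative).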
